AI News Hub by Eigenvector

Gemma 4 is a KV_cache Pig

Reddit r/LocalLLaMA · by /u/IngeniousIdiocy (https://www.reddit.com/user/IngeniousIdiocy) · April 3, 2026 · 1 min read

Ignoring that Nvidia's marketed 4-bit quantization of the dense model actually stores the KV cache at 8 bits… the dense model's KV cache architecture uses 3x or more the memory of other models I've seen. The big choice seems to have been a head dim of 256 instead of 128. I'm looking at 490 KB per token of 8-bit KV cache versus 128 KB on Qwen3. Running the Nvidia weights at 4 bit on an RTX PRO 6000 with 96 GB of VRAM and an 8-bit KV cache, I still only have room for 115k tokens of context. I was just surprised. The model scales well in vLLM and seems quite smart.
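The arithmetic behind the per-token numbers is straightforward: each token stores one K and one V vector per layer per KV head, so doubling the head dim doubles the cache, all else equal. A minimal sketch — the layer and head counts below are illustrative placeholders, not the actual Gemma 4 or Qwen3 configurations, and the 56 GB cache budget is an assumed leftover after the 4-bit weights on a 96 GB card:

```python
def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, bytes_per_elem: int = 1) -> int:
    """Per-token KV cache size: a K and a V vector for every layer and KV head."""
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem

# Illustrative placeholder shapes -- not the real Gemma 4 / Qwen3 configs.
wide = kv_cache_bytes_per_token(num_layers=48, num_kv_heads=8, head_dim=256)
narrow = kv_cache_bytes_per_token(num_layers=48, num_kv_heads=8, head_dim=128)
print(wide, narrow)          # head_dim 256 costs exactly 2x head_dim 128
assert wide == 2 * narrow

# At the post's observed 490 KB/token, an assumed ~56 GB cache budget
# (96 GB card minus weights and overhead) holds on the order of 115k tokens:
print((56 * 1024**3) // (490 * 1024), "tokens")
```

This is why the choice of head dim (and how many KV heads survive grouped-query attention) dominates long-context memory cost far more than the parameter count does.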

