
Swift-SVD: Theoretical Optimality Meets Practical Efficiency in Low-Rank LLM Compression

arXiv cs.CL · Ruoling Qi, Yirui Liu, Xuaner Wu, Xiangyu Wang, Ming Li, Chen Chen, Jian Chen, Yin Chen, Qizhen Weng · April 4, 2026

arXiv:2604.01609v1 Announce Type: new


Abstract: The deployment of Large Language Models is constrained by the memory and bandwidth demands of static weights and the dynamic Key-Value cache. SVD-based compression provides a hardware-friendly solution to reduce these costs. However, existing methods suffer from two key limitations: some are suboptimal in reconstruction error, while others are theoretically optimal but practically inefficient. In this paper, we propose Swift-SVD, an activation-aware, closed-form compression framework that simultaneously guarantees theoretical optimality, practical efficiency, and numerical stability. Swift-SVD incrementally aggregates the covariance of output activations over a batch of inputs and performs a single eigenvalue decomposition after aggregation, enabling training-free, fast, and optimal layer-wise low-rank approximation. We employ effective rank to analyze local layer-wise compressibility and design a dynamic rank allocation strategy that jointly accounts for local reconstruction loss and end-to-end layer importance. Extensive experiments across six LLMs and eight datasets demonstrate that Swift-SVD outperforms state-of-the-art baselines, achieving optimal compression accuracy while delivering 3-70x speedups in end-to-end compression time. Our code will be released upon acceptance.
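The core recipe in the abstract — accumulate the covariance of a layer's output activations over calibration batches, then run a single eigendecomposition to obtain low-rank factors — can be sketched as follows. This is a minimal illustration of the general idea, not the paper's actual algorithm: the layer shapes, the number of calibration batches, the projection `W ≈ U_k (U_k^T W)`, and the entropy-based effective-rank formula are all assumptions chosen for the sketch, since the paper's exact closed-form construction and rank-allocation rule are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes and target rank for illustration only.
d_in, d_out, rank_k = 64, 48, 8
W = rng.standard_normal((d_out, d_in))      # layer weight (d_out x d_in)

# Incrementally aggregate the covariance of output activations over
# calibration batches; no activations need to be stored after each batch.
C = np.zeros((d_out, d_out))
for _ in range(10):
    X = rng.standard_normal((32, d_in))     # one calibration batch
    Y = X @ W.T                             # output activations
    C += Y.T @ Y

# A single eigendecomposition after aggregation (eigh returns
# eigenvalues in ascending order, so we reverse to descending).
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the top-k eigen-directions of the output covariance and project
# W onto that subspace: W ~= U_k (U_k^T W). This yields two factors of
# shapes (d_out, k) and (k, d_in), replacing the dense layer.
U_k = eigvecs[:, :rank_k]
A, B = U_k, U_k.T @ W
W_approx = A @ B

# Effective rank (one common entropy-based definition): exponentiated
# Shannon entropy of the normalized eigenvalue distribution. A flat
# spectrum gives effective rank near d_out; a peaked one gives a small
# value, signaling the layer is highly compressible.
p = eigvals / eigvals.sum()
p = p[p > 0]
effective_rank = float(np.exp(-(p * np.log(p)).sum()))

err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"factor shapes: {A.shape}, {B.shape}")
print(f"relative reconstruction error: {err:.3f}")
print(f"effective rank: {effective_rank:.1f}")
```

In a rank-allocation scheme like the one the abstract describes, a quantity such as `effective_rank` could be computed per layer and used to assign smaller `rank_k` to layers with peaked spectra, though the paper's actual strategy also weighs end-to-end layer importance.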

Comments: Under Review

Subjects:

Computation and Language (cs.CL)

Cite as: arXiv:2604.01609 [cs.CL]

(or arXiv:2604.01609v1 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2604.01609

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Jian Chen [v1] Thu, 2 Apr 2026 04:40:50 UTC (613 KB)
