
Nonnegative Matrix Factorization in the Component-Wise L1 Norm for Sparse Data

arXiv stat.ML · Giovanni Seraghiti, Kévin Dubrulle, Arnaud Vandaele, Nicolas Gillis · April 1, 2026

arXiv:2603.29715v1 Announce Type: cross


Abstract: Nonnegative matrix factorization (NMF) approximates a nonnegative matrix, $X$, by the product of two nonnegative factors, $WH$, where $W$ has $r$ columns and $H$ has $r$ rows. In this paper, we consider NMF using the component-wise L1 norm as the error measure (L1-NMF), which is suited for data corrupted by heavy-tailed noise, such as Laplace noise or salt-and-pepper noise, or in the presence of outliers. Our first contribution is an NP-hardness proof for L1-NMF, even when $r=1$, in contrast to the standard NMF that uses least squares. Our second contribution is to show that L1-NMF strongly enforces sparsity in the factors for sparse input matrices, thereby favoring interpretability. However, if the data is affected by false zeros, too sparse solutions might degrade the model. Our third contribution is a new, more general, L1-NMF model for sparse data, dubbed weighted L1-NMF (wL1-NMF), where the sparsity of the factorization is controlled by adding a penalization parameter to the entries of $WH$ associated with zeros in the data. The fourth contribution is a new coordinate descent (CD) approach for wL1-NMF, denoted as sparse CD (sCD), where each subproblem is solved by a weighted median algorithm. To the best of our knowledge, sCD is the first algorithm for L1-NMF whose complexity scales with the number of nonzero entries in the data, making it efficient in handling large-scale, sparse data. We perform extensive numerical experiments on synthetic and real-world data to show the effectiveness of our new proposed model (wL1-NMF) and algorithm (sCD).
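To make the weighted-median connection concrete, here is a minimal dense sketch of L1-NMF by coordinate descent: fixing everything but a single entry $W_{ik}$, the subproblem $\min_{w \ge 0} \sum_j |R_{ij} - w\,H_{kj}|$ (with $R$ the residual excluding component $k$) is solved exactly by a weighted median of the ratios $R_{ij}/H_{kj}$ with weights $|H_{kj}|$, clipped at zero. This is *not* the paper's sCD algorithm — it is a plain $O(mnr)$-per-sweep illustration of the same per-coordinate solve; the function names and initialization are mine, and the paper's contribution is precisely to restrict this work to the nonzero entries of sparse data.

```python
import numpy as np

def weighted_median(values, weights):
    # Minimizer of sum_j weights[j] * |values[j] - w| over w:
    # the sorted point where cumulative weight first reaches half the total.
    order = np.argsort(values)
    v, w = values[order], weights[order]
    csum = np.cumsum(w)
    idx = np.searchsorted(csum, 0.5 * csum[-1])
    return v[idx]

def l1_nmf_cd(X, r, n_iter=50, seed=None):
    # Coordinate descent for min_{W, H >= 0} ||X - W H||_1 (dense sketch).
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        # Update W, then H; H is updated via transposed views (same code path).
        for A, B, Xc in ((W, H, X), (H.T, W.T, X.T)):
            for k in range(r):
                mask = B[k, :] != 0
                if not mask.any():
                    continue
                # Residual with component k removed: Xc - sum_{l != k} A[:,l] B[l,:]
                R = Xc - A @ B + np.outer(A[:, k], B[k, :])
                wts = np.abs(B[k, mask])
                for i in range(A.shape[0]):
                    ratios = R[i, mask] / B[k, mask]
                    # Exact nonnegative solution of the 1-D L1 subproblem.
                    A[i, k] = max(0.0, weighted_median(ratios, wts))
    return W, H
```

Each coordinate update solves its subproblem exactly, so the L1 objective is monotonically non-increasing. The wL1-NMF model described above would reweight the terms of this same subproblem at positions where $X_{ij} = 0$; the sCD algorithm additionally exploits sparsity so that cost scales with $\mathrm{nnz}(X)$ rather than $mn$.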

Comments: 21 pages before supplementary, code available from this https URL

Subjects:

Machine Learning (cs.LG); Signal Processing (eess.SP); Optimization and Control (math.OC); Machine Learning (stat.ML)

Cite as: arXiv:2603.29715 [cs.LG]

(or arXiv:2603.29715v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2603.29715

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Nicolas Gillis [view email] [v1] Tue, 31 Mar 2026 13:16:02 UTC (305 KB)
