Average-Case Reductions for $k$-XOR and Tensor PCA
arXiv:2601.19016v2 Announce Type: replace-cross
Abstract: We study the computational properties of two canonical planted average-case problems -- noisy planted $k$-XOR and Tensor PCA -- by formally unifying them into a family of planted problems parametrized by tensor order $k$, number of entries $m$, and noise level $\delta$. We build a wide range of poly-time average-case reductions within this family, across all regimes $m \in [1, n^k]$. In the denser $m \geq n^{k/2}$ regime, our reductions preserve proximity to the computational threshold, and, as a central application, reduce conjectured-hard $k$-XOR instances with $m \approx n^{k/2}$ to conjectured-hard instances of Tensor PCA. Additionally, we give new order-reducing maps at fixed densities (e.g., $5\to 4$ for $k$-XOR with $m \approx n^{k/2}$ entries and $7\to 4$ for Tensor PCA). In the sparser $m \leq n^{k/2}$ regime, we relate instances of different orders, reducing, for example, $7$-XOR with $m = n^{3.4}$ to the classical setting of $3$-XOR with $m = \widetilde\Theta(n^{1.4})$. Taken together, these results establish a hardness partial order in the space of planted tensor models.
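For readers unfamiliar with the two models, a minimal illustrative sketch of the standard formulations (our notation; the paper's exact parametrization may differ):

```latex
% Noisy planted k-XOR: a hidden signal x \in \{-1,1\}^n; each of the m
% observed constraints picks a uniformly random index tuple and reports
% the parity of the corresponding entries, flipped with probability \delta:
\[
  y_j = z_j \prod_{t=1}^{k} x_{i_t^{(j)}},
  \qquad
  z_j =
  \begin{cases}
    -1 & \text{with probability } \delta,\\
    +1 & \text{with probability } 1-\delta,
  \end{cases}
  \qquad j = 1, \dots, m.
\]
% Tensor PCA (order k): observe a rank-one spike buried in Gaussian noise,
\[
  T = \lambda\, x^{\otimes k} + G, \qquad G_{i_1 \dots i_k} \sim \mathcal{N}(0,1) \ \text{i.i.d.},
\]
% and the task is to detect or recover x. The unification in the abstract
% views both as planted tensor models indexed by (k, m, \delta).
```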
Comments: 112 pages, 6 figures
Subjects: Computational Complexity (cs.CC); Cryptography and Security (cs.CR); Probability (math.PR); Statistics Theory (math.ST)
Cite as: arXiv:2601.19016 [cs.CC]
(or arXiv:2601.19016v2 [cs.CC] for this version)
https://doi.org/10.48550/arXiv.2601.19016
arXiv-issued DOI via DataCite
Submission history
From: Alina Harbuzova [view email] [v1] Mon, 26 Jan 2026 23:05:54 UTC (175 KB) [v2] Thu, 2 Apr 2026 16:45:31 UTC (179 KB)