On-the-fly Repulsion in the Contextual Space for Rich Diversity in Diffusion Transformers
Hey there, little explorer! 🚀
Imagine you have a magic drawing robot! You tell it, "Draw me a red car!" 🚗
Sometimes, this robot draws almost the exact same red car every time. It's good, but a bit boring, right?
Scientists taught the robot a new trick! They told it, "When you're drawing, try to push away from ideas that are too similar!" Like when you're playing with LEGOs and you want to build something different each time.
Now, when you say "Draw a red car!", the robot thinks, "Hmm, how can I make this car look super different from the last one?" Maybe it draws a race car, then a truck, then a tiny car! 🏎️🚛🚕
This makes the robot's drawings much more fun and surprising! And it still draws really good cars, just lots of different kinds! Yay for variety! 🎉
AI-generated summary: Diffusion transformers can generate diverse visual outputs by applying repulsion in contextual space during the forward pass, maintaining visual quality and semantic accuracy while operating efficiently in streamlined models.
Published on Mar 30
Abstract
Modern Text-to-Image (T2I) diffusion models have achieved remarkable semantic alignment, yet they often suffer from a significant lack of variety, converging on a narrow set of visual solutions for any given prompt. This typicality bias presents a challenge for creative applications that require a wide range of generative outcomes. We identify a fundamental trade-off in current approaches to diversity: modifying model inputs requires costly optimization to incorporate feedback from the generative path. In contrast, acting on spatially-committed intermediate latents tends to disrupt the forming visual structure, leading to artifacts. In this work, we propose to apply repulsion in the Contextual Space as a novel framework for achieving rich diversity in Diffusion Transformers. By intervening in the multimodal attention channels, we apply on-the-fly repulsion during the transformer's forward pass, injecting the intervention between blocks where text conditioning is enriched with emergent image structure. This allows for redirecting the guidance trajectory after it is structurally informed but before the composition is fixed. Our results demonstrate that repulsion in the Contextual Space produces significantly richer diversity without sacrificing visual fidelity or semantic adherence. Furthermore, our method is uniquely efficient, imposing a small computational overhead while remaining effective even in modern "Turbo" and distilled models where traditional trajectory-based interventions typically fail.
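The paper's implementation is not shown on this page, but the mechanism the abstract describes (an on-the-fly repulsion applied to contextual token states between transformer blocks) can be illustrated with a hedged sketch. Everything below is an assumption: the function name, the array shapes, the SVGD-style RBF repulsion rule, and the hyperparameters are illustrative stand-ins, not the authors' method.

```python
import numpy as np

def contextual_repulsion(h, lam=0.05, bandwidth=None):
    """Push each sample's contextual token states away from the other
    samples in the batch (an SVGD-style RBF repulsion -- illustrative,
    not the paper's actual update rule).

    h: array of shape (N, T, D) -- N parallel samples for the same prompt,
       T contextual (text-conditioning) tokens, D channels.
    """
    N = h.shape[0]
    flat = h.reshape(N, -1)                                    # (N, T*D)
    # pairwise squared distances between samples
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)  # (N, N)
    if bandwidth is None:
        # median heuristic, common for SVGD-style kernels
        med = np.median(d2[d2 > 0]) if N > 1 else 1.0
        bandwidth = med / max(np.log(N), 1.0) + 1e-8
    k = np.exp(-d2 / bandwidth)                                # RBF kernel matrix
    np.fill_diagonal(k, 0.0)                                   # no self-repulsion
    # kernel-weighted push away from similar neighbours; this is (up to a
    # positive scale) gradient descent on sum_{i!=j} exp(-|h_i - h_j|^2 / bw)
    diff = flat[:, None, :] - flat[None, :, :]                 # (N, N, T*D)
    push = (k[..., None] * diff).sum(axis=1)                   # (N, T*D)
    return (flat + lam * push).reshape(h.shape)
```

In a DiT-style pipeline, such a function would be called on the text-token stream between two mid-network blocks, i.e. after emergent image structure has begun to enrich the conditioning but before the composition is fixed, matching where the abstract says the intervention is injected.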
arXiv: 2603.28762
