Executing as You Generate: Hiding Execution Latency in LLM Code Generation
A parallel execution paradigm for LLM-based coding agents reduces latency by executing code during generation rather than in sequential stages.
Published on Apr 1 · Submitted by v587su on Apr 3 · arXiv: arxiv.org/abs/2604.00491
Abstract
Current LLM-based coding agents follow a serial execution paradigm: the model first generates the complete code, then invokes an interpreter to execute it. This sequential workflow leaves the executor idle during generation and the generator idle during execution, resulting in unnecessary end-to-end latency. We observe that, unlike human developers, LLMs produce code tokens sequentially without revision, making it possible to execute code as it is being generated. We formalize this parallel execution paradigm, modeling it as a three-stage pipeline of generation, detection, and execution, and derive closed-form latency bounds that characterize its speedup potential and operating regimes. We then present Eager, a concrete implementation featuring AST-based chunking, dynamic batching with gated execution, and early error interruption. We evaluate Eager across four benchmarks, seven LLMs, and three execution environments. Results show that Eager reduces the non-overlapped execution latency by up to 99.9% and the end-to-end latency by up to 55% across seven LLMs and four benchmarks.
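The key observation above — that LLMs emit code tokens sequentially without revision, so complete statements can run before generation finishes — can be sketched in a few lines of Python. This is a minimal illustration of AST-based chunking under assumptions of my own, not the paper's Eager implementation (the class name `EagerExecutor` and its interface are hypothetical, and Eager's dynamic batching, gated execution, and early error interruption are omitted): the sketch buffers streamed tokens, and whenever the buffer parses, it executes every top-level statement except the last one, which later tokens might still extend (e.g. an `if` gaining an `else` clause).

```python
import ast


class EagerExecutor:
    """Illustrative sketch: run top-level statements as soon as they are
    complete, instead of waiting for the whole program to be generated."""

    def __init__(self):
        self.buffer = ""    # source text streamed in so far
        self.executed = 0   # number of top-level statements already run
        self.globals = {}   # shared namespace across executed chunks

    def feed(self, chunk: str) -> None:
        """Consume the next generated chunk; execute what is safely complete."""
        self.buffer += chunk
        try:
            tree = ast.parse(self.buffer)
        except SyntaxError:
            return  # current statement is still incomplete; wait for more tokens
        # Hold back the final statement: subsequent tokens might extend it.
        for node in tree.body[self.executed:-1]:
            src = ast.get_source_segment(self.buffer, node)
            exec(compile(src, "<stream>", "exec"), self.globals)
            self.executed += 1

    def finish(self) -> None:
        """Generation ended: the last statement cannot grow, so run the rest."""
        tree = ast.parse(self.buffer)
        for node in tree.body[self.executed:]:
            src = ast.get_source_segment(self.buffer, node)
            exec(compile(src, "<stream>", "exec"), self.globals)
            self.executed += 1
```

In this toy model, execution overlaps generation, so the non-overlapped execution cost shrinks to roughly the runtime of the final statement, which is the intuition behind the latency reduction the abstract reports; the paper's closed-form bounds make this precise for the full three-stage pipeline.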
