Affording Process Auditability with QualAnalyzer: An Atomistic LLM Analysis Tool for Qualitative Research
arXiv:2604.03820v1 Announce Type: new
Abstract: Large language models are increasingly used for qualitative data analysis, but many workflows obscure how analytic conclusions are produced. We present QualAnalyzer, an open-source Chrome extension for Google Workspace that supports atomistic LLM analysis by processing each data segment independently and preserving the prompt, input, and output for every unit. Through two case studies -- holistic essay scoring and deductive thematic coding of interview transcripts -- we show that this approach creates a legible audit trail and helps researchers investigate systematic differences between LLM and human judgments. We argue that process auditability is essential for making LLM-assisted qualitative research more transparent and methodologically robust.
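The atomistic workflow the abstract describes -- one independent model call per data segment, with the prompt, input, and output preserved for every unit -- can be sketched as follows. This is a hypothetical illustration under stated assumptions, not QualAnalyzer's actual implementation: the `AuditRecord` structure, the function names, and the stubbed model call are all invented for the sketch.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AuditRecord:
    """One audit-trail entry per analyzed unit (hypothetical schema)."""
    segment_id: str
    prompt: str      # the exact prompt sent for this unit
    input_text: str  # the data segment itself
    output: str      # the raw model response

def analyze_atomistically(segments, prompt_template, call_model):
    """Process each segment in its own isolated call and log every unit."""
    trail = []
    for seg_id, text in segments:
        prompt = prompt_template.format(segment=text)
        output = call_model(prompt)  # one independent call per segment
        trail.append(AuditRecord(seg_id, prompt, text, output))
    return trail

# Stubbed model so the sketch runs without any API; a real tool would
# call an LLM here instead.
def fake_model(prompt):
    return "code: collaboration" if "team" in prompt else "code: other"

segments = [("s1", "We worked as a team."), ("s2", "I felt alone.")]
trail = analyze_atomistically(
    segments, "Apply the codebook to: {segment}", fake_model
)
print(json.dumps([asdict(r) for r in trail], indent=2))
```

Because every record keeps the full prompt alongside the input and output, a researcher can later inspect exactly what the model saw for any unit where its judgment diverged from a human coder's.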
Comments: 9 pages, 3 figures, BEA2026 Conference Submission
Subjects:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2604.03820 [cs.AI]
(or arXiv:2604.03820v1 [cs.AI] for this version)
https://doi.org/10.48550/arXiv.2604.03820
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Max Lu [view email] [v1] Sat, 4 Apr 2026 18:07:05 UTC (1,070 KB)