
Progressive Disclosure: Improving Human-Computer Interaction in AI Products with Less-is-More Philosophy

Dev.to AI · by HagiCode · April 5, 2026 · 11 min read


In AI product design, the quality of user input often determines the quality of output. This article shares a "progressive disclosure" interaction solution we practiced in the HagiCode project. Through step-by-step guidance, intelligent completion, and immediate feedback, we transform users' brief and vague inputs into structured technical proposals, significantly improving human-computer interaction efficiency.

Background

Those working on AI products have likely encountered this scenario: a user opens your application, excitedly types a requirement, but the AI returns completely irrelevant content. It's not that the AI isn't smart—it's simply that the user provided too little information. After all, mind-reading isn't something anyone does well.

This phenomenon was particularly evident during our development of HagiCode. HagiCode is an AI-powered code assistant where users describe requirements in natural language to create technical proposals and conversations. In actual usage, we found that user inputs often had these issues:

  • Uneven input quality: Some users only type a few words, like "optimize login" or "fix bug," lacking necessary context

  • Inconsistent technical terminology: Different users use different terms for the same thing—some say "frontend," others say "FE"

  • Missing structured information: No project background, no repository scope, no impact scope—these key pieces are absent

  • Repetitive issues: The same types of requirements appear repeatedly, requiring explanation from scratch each time

The direct consequences of these issues are: the AI struggles to comprehend the request, the quality of generated proposals is unstable, and the user experience suffers. Users conclude "this AI isn't good," while we feel unfairly blamed: with only one sentence of input, how is the AI supposed to guess what they want?

Actually, this can't be helped. After all, understanding between people takes time, let alone between humans and machines.

To address these pain points, we made a bold decision: introduce the "progressive disclosure" design philosophy to improve human-computer interaction. This decision brought bigger changes than you might imagine; we didn't realize at the time how effective it would be.

About HagiCode

The solution shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI code assistant project designed to help developers complete code writing, technical proposal generation, code review, and other tasks through natural language interaction. Project repository: github.com/HagiCode-org/site.

This progressive disclosure solution was distilled through multiple iterations and optimizations during our actual development process. If you find it valuable, that reflects the engineering behind it, and HagiCode itself is worth a look. After all, good things are worth sharing.

What is Progressive Disclosure

"Progressive Disclosure" is a design principle originating from the HCI (Human-Computer Interaction) field. The core idea is simple: don't display all information and options to users at once; instead, gradually display necessary content based on user actions and needs.

This principle is particularly well-suited for AI products, because AI interaction is naturally progressive—users say a little, AI understands a little, then supplement a bit more, and understands more. Like communication between people, it has to be gradual—after all, no one bares their heart upon first meeting.

Specifically for HagiCode's scenario, we implemented progressive disclosure in four aspects:

1. Description Optimization Mechanism: Let AI Help You Speak Clearly

When users submit brief descriptions, we don't pass them straight to the AI. Instead, we first trigger a "description optimization" process. The core of this process is "structured output": transforming users' free text into a standard format. Like stringing scattered pearls into a necklace, it turns a jumble into something orderly.

The optimized description must include the following standard sections:

  • Background: Problem background and context

  • Analysis: Technical analysis and thought process

  • Solution: Solution and implementation steps

  • Practice: Actual code examples and considerations

At the same time, we automatically generate a Markdown table displaying information such as target repository, path, and edit permissions, facilitating subsequent AI operations. After all, with a clear table of contents, finding things is more convenient.
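As a rough illustration of the table-generation step, a sketch in TypeScript could look like the following. The field names (`repository`, `path`, `editable`) and the helper `renderScopeTable` are hypothetical, not HagiCode's actual schema:

```typescript
// Hypothetical sketch: render the repository-scope table that accompanies
// an optimized description. Field names are illustrative only.
interface RepoScope {
  repository: string;
  path: string;
  editable: boolean;
}

function renderScopeTable(scopes: RepoScope[]): string {
  // Markdown table header plus separator row
  const header = "| Repository | Path | Edit Permission |\n| --- | --- | --- |";
  const rows = scopes.map(
    (s) => `| ${s.repository} | ${s.path} | ${s.editable ? "read-write" : "read-only"} |`
  );
  return [header, ...rows].join("\n");
}
```

Because the table is generated rather than hand-written, the AI downstream always sees the repository scope in one predictable place.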

Below is the actual code implementation:

```csharp
// Core method in ProposalDescriptionMemoryService.cs
public async Task<string> OptimizeDescriptionAsync(
    string title,
    string description,
    string locale = "zh-CN",
    DescriptionOptimizationMemoryContext? memoryContext = null,
    CancellationToken cancellationToken = default)
{
    // Build query parameters from the user's raw input
    var queryContext = BuildQueryContext(title, description);

    // Retrieve historical context (project conventions, similar cases)
    // unless the caller already supplied one
    memoryContext ??= await RetrieveHistoricalContextAsync(queryContext, cancellationToken);

    // Generate a structured prompt with the memory context injected
    var prompt = await BuildOptimizationPromptAsync(
        title, description, memoryContext, cancellationToken);

    // Call the AI service to produce the optimized description
    return await aiService.CompleteAsync(prompt, cancellationToken);
}
```

The key to this process is "memory injection"—we inject historical context such as project conventions, similar cases, and negative patterns into the prompt, allowing the AI to reference past experiences when optimizing. After all, you learn from mistakes—past experiences shouldn't go to waste.

Notes:

  • Ensure current input takes priority over historical memory, avoiding overwriting user-specified information

  • HagIndex references must serve as factual sources and cannot be modified by historical cases

  • Low-confidence correction suggestions should not be injected as strong constraints
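The first note above, current input taking priority over historical memory, can be sketched as a merge where memory only fills the gaps the user left. All names here (`ContextFields`, `mergeContext`) are illustrative, not HagiCode's actual API:

```typescript
// Illustrative sketch of the priority rule: fields supplied in the current
// input always win; recalled memory only fills gaps.
interface ContextFields {
  projectBackground?: string;
  repositoryScope?: string;
  impactScope?: string;
}

function mergeContext(current: ContextFields, memory: ContextFields): ContextFields {
  return {
    // ?? keeps the current value whenever the user provided one
    projectBackground: current.projectBackground ?? memory.projectBackground,
    repositoryScope: current.repositoryScope ?? memory.repositoryScope,
    impactScope: current.impactScope ?? memory.impactScope,
  };
}
```

The point of the explicit merge is that memory can never silently overwrite something the user just typed.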

2. Voice Input Capability: Speaking is More Natural Than Typing

In addition to text input, we also support voice input. This is particularly useful when describing complex requirements—think about it, typing a technical requirement might take several minutes, but speaking might take just a few dozen seconds. The mouth is always faster than the hand.

The design focus of voice input is "state management"—users must clearly understand what state the system is currently in. We defined the following states:

  • Idle: System ready, can start recording

  • Waiting-upstream: Connecting to backend service

  • Recording: Recording user voice

  • Processing: Converting voice to text

  • Error: Error occurred, requires user handling

The frontend state model looks roughly like this:

```typescript
interface VoiceInputState {
  status: 'idle' | 'waiting-upstream' | 'recording' | 'processing' | 'error';
  duration: number;
  startTime?: number;
  error?: string;
  deletedSet: Set<string>; // Fingerprint set of deleted results
}

// State transition when starting recording
const handleVoiceInputStart = async () => {
  // First enter the waiting state and show a loading animation
  setState({ status: 'waiting-upstream' });

  // Wait for the backend to confirm it is ready
  const isReady = await waitForBackendReady();
  if (!isReady) {
    setState({ status: 'error', error: 'Backend service not ready' });
    return;
  }

  // Start recording
  setState({ status: 'recording', startTime: Date.now() });
};

// Handle recognition results
const handleRecognitionResult = (result: RecognitionResult) => {
  const fingerprint = normalizeFingerprint(result.text);

  // Skip content the user has already deleted
  if (state.deletedSet.has(fingerprint)) {
    return;
  }

  // Merge the result into the text box
  appendResult(result);
};
```

Here's a detail: we use a "fingerprint set" to manage deletion synchronization. When voice recognition returns multiple results, users might delete some of them. We store the fingerprints of deleted content, and if the same content appears later, we automatically skip it. It's like remembering which dishes you don't like—you won't order them again next time. After all, no one wants to be troubled by the same issue twice.

3. Prompt Management System: Externalizing AI's "Brain"

HagiCode has a flexible prompt management system where all prompts are stored as files:

```
prompts/
├── metadata/
│   ├── optimize-description.zh-CN.json
│   └── optimize-description.en-US.json
└── templates/
    ├── optimize-description.zh-CN.hbs
    └── optimize-description.en-US.hbs
```

Each prompt consists of two parts:

  • Metadata file (.json): Defines the prompt's scenario, version, parameters, and other information

  • Template file (.hbs): Actual prompt content using Handlebars syntax

The format of the metadata file is like this:

```json
{
  "scenario": "optimize-description",
  "locale": "zh-CN",
  "version": "1.0.0",
  "syntax": "handlebars",
  "syntaxVersion": "1.0",
  "parameters": [
    {
      "name": "title",
      "type": "string",
      "required": true,
      "description": "Proposal title"
    },
    {
      "name": "description",
      "type": "string",
      "required": true,
      "description": "Original description"
    }
  ],
  "author": "HagiCode Team",
  "description": "Optimize user input technical proposal description",
  "lastModified": "2026-04-05",
  "tags": ["optimization", "nlp"]
}
```

The template file uses Handlebars syntax and supports parameter injection:

```handlebars
You are a technical proposal expert.
Generate a structured technical proposal description based on the following information.

{{title}}
{{description}}

{{#if memoryContext}}
{{memoryContext}}
{{/if}}

Background
[Describe problem background and context, including project information, repository scope, etc.]

Analysis
[Technical analysis and thought process, explaining why this change is needed]

Solution
[Solution and implementation steps, listing key code locations]

Practice
[Actual code examples and considerations]
```

The benefits of this design are:

  • Prompts can be version-managed like code

  • Supports multiple languages, automatically switching based on user preferences

  • Parameterized design, allowing dynamic context injection

  • Completeness validation at startup, avoiding runtime errors

After all, if you don't write down what's in your head, who knows when you'll forget it? Better to record it properly from the start than regret it later.
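The "completeness validation at startup" point can be sketched as a check that every required parameter declared in a metadata file actually appears in its template. The types and the `validatePrompt` function below are illustrative, not HagiCode's actual validator:

```typescript
// Hedged sketch: verify that each metadata file's declared parameters are
// actually referenced in the corresponding Handlebars template.
interface PromptParameter { name: string; required: boolean; }
interface PromptMetadata { scenario: string; parameters: PromptParameter[]; }

function validatePrompt(meta: PromptMetadata, template: string): string[] {
  const errors: string[] = [];
  for (const p of meta.parameters) {
    // A required parameter must be referenced as {{name}} somewhere
    if (p.required && !template.includes(`{{${p.name}}}`)) {
      errors.push(`${meta.scenario}: required parameter "${p.name}" not used in template`);
    }
  }
  return errors;
}
```

Running a check like this once at startup turns a silent runtime prompt bug into a loud boot-time error.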

4. Progressive Wizard: Breaking Complex Tasks into Small Steps

For complex tasks (like first-time installation and configuration), we used a multi-step wizard design. Each step requests only the necessary information and provides clear progress indicators. Life works the same way: you can't get fat on a single bite, and taking things one step at a time is more reliable.

The wizard state model:

```typescript
interface WizardState {
  currentStep: number; // 0-3, corresponding to 4 steps
  steps: WizardStep[];
  canGoNext: boolean;
  canGoBack: boolean;
  isLoading: boolean;
  error: string | null;
}

interface WizardStep {
  id: number;
  title: string;
  description: string;
  completed: boolean;
}

// Step navigation logic
const goToNextStep = () => {
  if (wizardState.currentStep < wizardState.steps.length - 1) {
    // Validate the current step's input before advancing
    if (validateCurrentStep()) {
      wizardState.currentStep++;
      // Mark the step we just left as completed
      wizardState.steps[wizardState.currentStep - 1].completed = true;
    }
  }
};

const goToPreviousStep = () => {
  if (wizardState.currentStep > 0) {
    wizardState.currentStep--;
  }
};
```

Each step has independent validation logic, and completed steps have clear visual markers. Cancel operations pop up a confirmation dialog to prevent users from accidentally losing progress. After all, you can turn back if you go the wrong way, but if you tear up the road, there's really no way out.
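Per-step validation can be sketched as each step declaring its own validator. The steps and field names below are hypothetical examples, not HagiCode's actual wizard:

```typescript
// Illustrative sketch: each wizard step carries its own validation rule,
// returning null on success or an error message to display.
interface Step {
  title: string;
  validate: (input: Record<string, string>) => string | null;
}

const steps: Step[] = [
  { title: "Choose repository", validate: (i) => (i.repo ? null : "Repository is required") },
  { title: "Configure path", validate: (i) => (i.path ? null : "Path is required") },
];

function validateStep(index: number, input: Record<string, string>): string | null {
  return steps[index].validate(input);
}
```

Keeping validation local to each step is what lets the wizard ask for only one piece of information at a time.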

Summary

Reviewing HagiCode's progressive disclosure practice, we can summarize several core principles:

  • Step-by-step guidance: Break complex tasks into small steps, each requesting only necessary information

  • Intelligent completion: Automatically complete information using historical context and project knowledge

  • Immediate feedback: Every action has clear visual feedback and status indicators

  • Fault tolerance mechanism: Allow users to undo and reset, avoiding irreversible losses from errors

  • Diversified input: Support multiple input methods such as text and voice

The actual effect of this solution in HagiCode: the average length of user input grew from under 20 characters to structured descriptions of 200-300 characters, the quality of AI-generated proposals improved significantly, and user satisfaction rose as well.

Actually, this isn't surprising—the more information you provide, the more accurately the AI understands, and the better the returned results. This is no different from communication between people.

If you're also working on AI-related products, I hope these experiences provide some inspiration. Remember: users aren't unwilling to provide information—you just haven't asked the right questions yet. The core of progressive disclosure is finding the optimal timing and way to ask questions—it just takes some patience to explore that timing and method.

References

  • HagiCode project repository: github.com/HagiCode-org/site

  • HagiCode official website: hagicode.com

  • Progressive Disclosure design principle: Wikipedia - Progressive Disclosure

  • Handlebars template engine: handlebarsjs.com

If this article helps you, feel free to give a Star on GitHub and follow the HagiCode project's future development. Public beta has begun—install now to experience full functionality:

  • GitHub: github.com/HagiCode-org/site

  • Official website: hagicode.com

  • Watch the 30-minute practical demo: www.bilibili.com/video/BV1pirZBuEzq/

  • Docker Compose one-click installation: docs.hagicode.com/installation/docker-compose

  • Desktop quick installation: hagicode.com/desktop/

Original Article & License

Thanks for reading. If this article helped, consider liking, bookmarking, or sharing it. This article was created with AI assistance and reviewed by the author before publication.
