
How I Built a Desktop AI App with Tauri v2 + React 19 in 2026

Dev.to AI · by David · April 2, 2026 · 9 min read


I wanted to build one app that does AI chat, image generation, and video generation — all running locally, no cloud, no Docker, no terminal. Just a .exe you download and run.

The result is Locally Uncensored — a React 19 + TypeScript frontend with a Tauri v2 Rust backend that connects to Ollama for chat and ComfyUI for image/video generation. It ships as a standalone desktop app on Windows (.exe/.msi), Linux (.AppImage/.deb), and macOS (.dmg).

This post covers the real technical challenges I hit and how I solved them. If you're building a Tauri app that talks to local services, manages large file downloads, or needs to auto-discover software on the user's machine, this is for you.

The Stack

  • Frontend: React 19, TypeScript, Tailwind CSS 4, Framer Motion, Zustand

  • Desktop Shell: Tauri v2 (Rust backend)

  • Build: Vite 8 (dev mode), Tauri CLI (production builds)

  • AI Backends: Ollama (text), ComfyUI (images/video), faster-whisper (voice)

The app runs in two modes: npm run dev serves it in a browser with a Vite dev server, and npm run tauri build compiles it to a native binary with embedded WebView. Both modes need to talk to localhost services (Ollama on port 11434, ComfyUI on port 8188), and that's where things get interesting.

Challenge 1: CORS in a Desktop App

Here's the problem nobody warns you about when building Tauri apps. In dev mode, your React app runs in a browser at localhost:5173. You can proxy requests to Ollama (localhost:11434) and ComfyUI (localhost:8188) through Vite's dev server. Easy.

But in production mode, your app runs inside a WebView with the origin tauri://localhost. And WebViews enforce CORS just like browsers. Ollama and ComfyUI don't set CORS headers. Every single API call fails silently.

The solution: route every localhost request through Rust.

Here's the Rust proxy command that lives in the Tauri backend:

```rust
#[tauri::command]
async fn proxy_localhost(
    url: String,
    method: Option<String>,
    body: Option<String>,
) -> Result<String, String> {
    let client = reqwest::Client::new();
    let http_method = method.unwrap_or_else(|| "GET".to_string());

    let mut request = match http_method.as_str() {
        "POST" => client.post(&url),
        "DELETE" => client.delete(&url),
        "PUT" => client.put(&url),
        _ => client.get(&url),
    };

    if let Some(body_str) = body {
        request = request
            .header("Content-Type", "application/json")
            .body(body_str);
    }

    let resp = request
        .send()
        .await
        .map_err(|e| format!("proxy_localhost: {}", e))?;

    resp.text().await.map_err(|e| e.to_string())
}
```


On the frontend, I built an abstraction layer that detects whether we're in Tauri or browser mode and routes accordingly:

```typescript
/**
 * Fetch a localhost URL, bypassing CORS in Tauri mode.
 * In dev mode: normal fetch(). In Tauri: routes through Rust.
 */
export async function localFetch(
  url: string,
  options?: { method?: string; body?: string }
): Promise<Response> {
  if (!isTauri()) {
    return fetch(url, {
      method: options?.method || "GET",
      body: options?.body,
    });
  }

  const invoke = await getInvoke();
  const text = (await invoke("proxy_localhost", {
    url,
    method: options?.method || "GET",
    body: options?.body || null,
  })) as string;

  return new Response(text, {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
```


Every API call in the app goes through localFetch() or its streaming counterpart localFetchStream(). The frontend code doesn't care whether it's running in a browser or a .exe — the abstraction handles it.

I also needed separate proxies for external URLs (CivitAI API, model downloads) and streaming responses (Ollama's chat endpoint returns newline-delimited JSON). That's four proxy commands in total: proxy_localhost, proxy_localhost_stream, fetch_external, and fetch_external_bytes.
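Ollama's stream arrives as newline-delimited JSON, and network chunks rarely align with line boundaries, so the consumer needs a small stateful parser that carries partial lines between chunks. Here is a minimal sketch (not the project's code; the `response` field matches Ollama's `/api/generate` format, while `/api/chat` nests text under `message.content`):

```typescript
// Parse newline-delimited JSON chunks from a streaming endpoint.
// A chunk may end mid-line, so we keep the unfinished tail in `carry`.
export function makeNdjsonParser(): (chunk: string) => string[] {
  let carry = "";
  return (chunk: string): string[] => {
    carry += chunk;
    const lines = carry.split("\n");
    carry = lines.pop() ?? ""; // last piece may be an incomplete JSON line
    const tokens: string[] = [];
    for (const line of lines) {
      if (!line.trim()) continue;
      const msg = JSON.parse(line);
      if (msg.response) tokens.push(msg.response); // token text, per generate API
    }
    return tokens;
  };
}
```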

Challenge 2: Download Manager with Pause/Resume

Users download AI models that are 2-10 GB. Downloads fail. Internet drops. You can't just start over. The app needed a real download manager.

The Rust backend manages downloads with tokio::spawn for async background tasks and CancellationToken for pause/cancel support. The key insight: pause and cancel both use the same cancellation mechanism, but the download loop checks a status flag to distinguish them.


Each download gets a CancellationToken. When the user pauses, we set the status to "pausing" then cancel the token. The download loop sees "pausing" and keeps the temp file. When the user cancels, the status stays as-is and we delete the temp file.
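The status-flag trick is language-neutral; here is a sketch of it in TypeScript with `AbortController` standing in for tokio's `CancellationToken` (the type and function names are mine, not the project's):

```typescript
type DownloadStatus = "running" | "pausing" | "paused" | "cancelled";

export interface Download {
  status: DownloadStatus;
  controller: AbortController; // stands in for CancellationToken
  keepTempFile: boolean;
}

export function pause(d: Download): void {
  d.status = "pausing"; // the loop will see this and keep the temp file
  d.controller.abort();
}

export function cancel(d: Download): void {
  d.controller.abort(); // status is not "pausing" → loop deletes the temp file
  d.status = "cancelled";
}

// What the download loop does when the cancellation signal fires:
export function onAborted(d: Download): void {
  d.keepTempFile = d.status === "pausing";
  if (d.status === "pausing") d.status = "paused";
}
```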

Resume support uses HTTP Range headers:

```rust
// Resume from where we left off
if resume_offset > 0 {
    request = request.header("Range", format!("bytes={}-", resume_offset));
}
```


The download writes to a .download temp file and only renames it to the final filename when complete. This means a crashed download never leaves a corrupt model file — the app detects the .download file and offers to resume on next launch.
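The temp-file pattern is easy to get wrong, so here is a sketch of it in TypeScript with Node's `fs` (function names are mine): write to `<name>.download`, rename only on completion, and on launch report leftover temp files with their size on disk as the resume offset.

```typescript
import * as fs from "fs";
import * as path from "path";

const TEMP_SUFFIX = ".download";

export function tempPath(finalPath: string): string {
  return finalPath + TEMP_SUFFIX;
}

// Rename is atomic on the same volume, so the final filename only ever
// exists as a complete file; a crash leaves only the ".download" temp.
export function finishDownload(finalPath: string): void {
  fs.renameSync(tempPath(finalPath), finalPath);
}

// On launch: list downloads that can be resumed, with their byte offset.
export function resumableDownloads(dir: string): { file: string; offset: number }[] {
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(TEMP_SUFFIX))
    .map((f) => ({
      file: f.slice(0, -TEMP_SUFFIX.length),
      offset: fs.statSync(path.join(dir, f)).size, // resume point = bytes on disk
    }));
}
```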

Progress updates go to the frontend at 500ms intervals via a polling endpoint (download_progress), which returns the full state of all active downloads as a JSON map.

Challenge 3: Auto-Discovering ComfyUI

ComfyUI can be installed anywhere: in the user's home folder, on the Desktop, inside Stability Matrix's AppData folder, on a D: drive. Users don't want to manually enter paths.

I wrote a recursive filesystem scanner in Rust that looks for ComfyUI in a priority order:

  • Environment variable (COMFYUI_PATH) — explicit override

  • App config file — remembered from a previous session

  • Deep scan of home directory — recurse up to 7 levels deep

  • Fixed common locations — ~/ComfyUI, ~/Desktop/ComfyUI, Stability Matrix AppData paths, C:\ComfyUI, D:\ComfyUI, etc.

  • Broad recursive scan — Desktop, Documents, Downloads, and drive roots at 5 levels

The scanner identifies ComfyUI by checking for main.py in each directory named "ComfyUI" (case-insensitive). It skips node_modules, .git, venv, Windows, Program Files, and other irrelevant directories to keep it fast.
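The same depth-limited search is straightforward to express in any language. A TypeScript sketch (skip list abbreviated; names are mine, the real scanner is Rust):

```typescript
import * as fs from "fs";
import * as path from "path";

// Directories that can never contain a user's ComfyUI install.
const SKIP = new Set(["node_modules", ".git", "venv", "Windows", "Program Files"]);

// Depth-limited scan for a directory named "ComfyUI" (case-insensitive)
// that contains main.py, mirroring the scanner described above.
export function findComfyUI(root: string, maxDepth: number): string | null {
  if (maxDepth < 0) return null;
  let entries: fs.Dirent[];
  try {
    entries = fs.readdirSync(root, { withFileTypes: true });
  } catch {
    return null; // unreadable directory: skip silently
  }
  for (const e of entries) {
    if (!e.isDirectory() || SKIP.has(e.name)) continue;
    const full = path.join(root, e.name);
    if (e.name.toLowerCase() === "comfyui" && fs.existsSync(path.join(full, "main.py"))) {
      return full;
    }
    const found = findComfyUI(full, maxDepth - 1);
    if (found) return found;
  }
  return null;
}
```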


Once found, the path is saved to a config file (%APPDATA%/locally-uncensored/config.json) so subsequent launches are instant.

Challenge 4: Auto-Starting Everything

A desktop app should "just work." When you double-click the .exe, it should start Ollama, start ComfyUI, start the Whisper server, and show you the UI. No terminal, no commands.

Tauri's setup hook runs before the window opens:

```rust
.setup(|app| {
    // Auto-start Ollama
    commands::process::auto_start_ollama(&state);

    // Auto-start ComfyUI (finds it automatically)
    commands::process::auto_start_comfyui(&state);

    // Start Whisper server in background
    commands::whisper::auto_start_whisper(app.handle(), &state);

    Ok(())
})
```


The auto_start_comfyui function first checks if ComfyUI is already running (by pinging localhost:8188), finds the path if not already known, then spawns it as a child process. stdout and stderr get drained in background threads to prevent buffer deadlock — a subtle bug that caused random freezes during early development.
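The stdio-draining point deserves a concrete illustration: if nothing reads a child's stdout/stderr, the OS pipe buffer fills and the child blocks on its next write, which looks like a random freeze. A TypeScript sketch with Node's `child_process` (the helper is mine, not from the project, where this is Rust threads):

```typescript
import { spawn } from "child_process";

// Spawn a long-lived child and always consume both output streams,
// so the child can never block on a full pipe buffer.
export function spawnDrained(
  cmd: string,
  args: string[],
  onLine: (line: string) => void
) {
  const child = spawn(cmd, args);
  for (const stream of [child.stdout, child.stderr]) {
    stream.setEncoding("utf8");
    stream.on("data", (chunk: string) => {
      for (const line of chunk.split("\n")) {
        if (line.trim()) onLine(line); // forward to a log instead of discarding
      }
    });
  }
  return child;
}
```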

When the app closes, the Drop implementation on AppState kills the ComfyUI process tree (using taskkill /T /F on Windows to kill child processes too) and stops the Whisper server. No orphan processes.

Challenge 5: The Backend Abstraction Layer

The trickiest architectural decision was making every feature work in both dev mode (browser) and production mode (Tauri .exe). The backend.ts module maps Tauri commands to Vite API endpoints:

```typescript
// Dev mode: map command name to /local-api/ endpoint
const endpointMap: Record<string, { path: string; method?: string }> = {
  start_comfyui: { path: "/local-api/start-comfyui", method: "POST" },
  comfyui_status: { path: "/local-api/comfyui-status" },
  download_model: { path: "/local-api/download-model", method: "POST" },
  download_progress: { path: "/local-api/download-progress" },
  // ... 20+ more commands
};

const endpoint = endpointMap[command];
const res = await fetch(endpoint.path, { method: endpoint.method || "GET" });
return res.json();
```


In dev mode, a Vite middleware plugin implements all these endpoints using Node.js. In production, the Rust backend handles them natively. The React components call backendCall("download_model", { url, subfolder, filename }) and never know the difference.
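A condensed sketch of that dispatch, with the transport injected so it can be exercised outside either environment (the `transport` parameter is my addition for testability; the real `backendCall` presumably detects Tauri and imports `invoke` itself):

```typescript
type Endpoint = { path: string; method?: string };

// Abbreviated endpoint map, as in the post.
const endpointMap: Record<string, Endpoint> = {
  download_model: { path: "/local-api/download-model", method: "POST" },
  download_progress: { path: "/local-api/download-progress" },
};

export async function backendCall(
  command: string,
  args: unknown,
  transport: {
    isTauri: boolean;
    invoke: (cmd: string, args: unknown) => Promise<unknown>;
    httpCall: (path: string, method: string, args: unknown) => Promise<unknown>;
  }
): Promise<unknown> {
  // Production: the Rust backend handles the command natively.
  if (transport.isTauri) return transport.invoke(command, args);
  // Dev: the Vite middleware serves an equivalent HTTP endpoint.
  const ep = endpointMap[command];
  if (!ep) throw new Error(`no dev endpoint for ${command}`);
  return transport.httpCall(ep.path, ep.method ?? "GET", args);
}
```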

What I'd Do Differently

True streaming in Tauri. The current proxy_localhost_stream command buffers the entire response before returning it. For Ollama chat, this means you don't see tokens arriving in real-time in the .exe build (they all arrive at once). Tauri v2 supports events for this, but it requires a different architecture. I'd build a proper event-based streaming layer from day one.

Unified process management. I ended up with separate code for starting Ollama, ComfyUI, and Whisper. They all follow the same pattern: check if running, find the binary, spawn, drain stdio, store the handle. A generic ManagedProcess struct would clean this up.

CSP configuration. Tauri's Content Security Policy for the WebView was painful to get right. You need to whitelist every domain you might fetch from (CivitAI, Hugging Face, Ollama), every localhost port, every WebSocket URL. I ended up with a massive CSP string in tauri.conf.json. Start a CSP allowlist early and add to it as you go.
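For orientation, a trimmed example of what such a CSP can look like under Tauri v2's `app.security` key in `tauri.conf.json`. The domains and ports here are the ones mentioned in this post; treat it as an illustrative starting point, not the project's actual config:

```json
{
  "app": {
    "security": {
      "csp": "default-src 'self'; connect-src 'self' http://localhost:11434 http://localhost:8188 ws://localhost:8188 https://civitai.com https://huggingface.co; img-src 'self' data: blob:"
    }
  }
}
```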

The Result

Locally Uncensored ships as a single .exe that auto-starts Ollama + ComfyUI, auto-discovers models, and gives you chat + image + video generation with 25+ personas in one UI. No Docker, no terminal, no Node.js.

The Tauri binary is under 15 MB. The Rust backend handles CORS proxying, file downloads with pause/resume, process lifecycle management, filesystem scanning, and code execution for the AI agent feature. It does all of this with zero external runtime dependencies.

If you're building a desktop app that talks to local services, I'd strongly recommend Tauri v2. The #[tauri::command] system is excellent for bridging Rust and TypeScript. Just be prepared for the CORS proxy pattern — you'll need it for anything that talks to localhost.

The project is open source under MIT: github.com/PurpleDoubleD/locally-uncensored

Have questions about the Tauri architecture or hit similar challenges? Drop a comment or open a discussion on GitHub.
