Plausible Code Is the New Technical Debt
I have a take that is going to annoy two groups of people at the same time:
- The “real engineers don’t use AI” crowd
- The “AI wrote my whole app” crowd
Here it is:
If AI is in your workflow, your codebase is now a human factors problem.
Not a model problem.
Not a prompt problem.
A human problem.
Because the hardest part is no longer generating code.
The hardest part is knowing what to trust, what to delete, what to keep, and what you are willing to be responsible for at 2:00 AM when prod is on fire and the person who “helped” is a chat bubble with no pager.
The new sin is not bad code. It’s unowned code.
AI makes it easy to produce code that looks plausible.
That’s the trap.
Plausible is not correct. Plausible is not maintainable. Plausible is not secure. Plausible is not even consistent with your repo.
Plausible just means your brain gets a quick dopamine hit and says: “ship it.”
So here’s the controversial thing I think we should start saying out loud:
If you did not read it, you did not write it.
If you did not write it, you do not own it.
If you do not own it, it does not belong in main.
That’s not anti-AI. That’s pro-software.
“But I can read it later”
No, you won’t.
You will merge it while it’s fresh. Then a week later you will forget you even asked for it. Then three months later it will fail in a weird edge case and you will be in a code archaeology session, scrolling through a file full of polite variable names and zero intent.
AI code has a smell.
Not because it is always bad.
Because it often has no story.
Human-written code usually has fingerprints:
- slightly annoying but consistent naming
- weird shortcuts taken for a specific reason
- comments that reflect real pain
- a mental model that shows up across files
AI code often looks clean but detached, like it was written by someone who will never have to maintain it.
Which is true.
The real cost is not bugs. It’s ambiguity.
Bugs are normal. We have tests. We have monitoring. We have rollbacks.
Ambiguity is poison.
Ambiguity is when you can’t tell:
- what the function is supposed to guarantee
- what failure looks like
- what the invariants are
- why a decision was made
- what tradeoff was chosen
AI generates code faster than it generates intent.
So if you are using AI and you are not also increasing clarity, you are building a repo that will eventually punish you.
The “AI pair programmer” fantasy is incomplete
Most devs use AI like a hyperactive junior.
“Write me a thing.”
It writes a thing.
You merge the thing.
That is not pairing.
Pairing is: reasoning out loud, constraints, tradeoffs, and a shared model of the system.
So the only way AI becomes a legitimate pair is if you force it to act like one.
Which means you need to change what you ask for.
Instead of: “write the code”
Ask:
- “Before you write anything, tell me what you think I’m trying to do.”
- “List assumptions you are making about the system.”
- “Propose two approaches and argue for one.”
- “Tell me how this fails.”
- “Write tests first.”
- “Show me the minimal diff that gets us there.”
If the tool cannot explain itself, it is not helping. It is performing.
A rule that saved me from shipping garbage
I started doing something that feels almost too simple:
Every AI-generated change must come with a receipt.
Not a comment block of fluff.
A receipt like:
- What problem is this solving, in one sentence?
- What are the inputs and outputs, explicitly?
- What are the invariants?
- What are the failure modes?
- What tests prove it?
- What did we choose not to do, and why?
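If you want the receipt to be more than a vibe, you can make it machine-checkable. Here is a minimal sketch of a pre-merge gate that refuses a PR description missing any receipt section. The section names and the `has_receipt` helper are my own invention for illustration, not a standard anywhere:

```python
# Hypothetical pre-merge check: every AI-assisted PR description must
# contain all "receipt" sections before it is allowed into main.
REQUIRED_SECTIONS = [
    "Problem:",
    "Inputs/Outputs:",
    "Invariants:",
    "Failure modes:",
    "Tests:",
    "Not done (and why):",
]

def has_receipt(pr_description: str) -> bool:
    """Return True only if every receipt heading appears in the description."""
    return all(section in pr_description for section in REQUIRED_SECTIONS)

# Example PR description that passes the gate:
desc = """Problem: dedupe retry logic in the request queue.
Inputs/Outputs: list of requests in, deduplicated list out.
Invariants: order is preserved; no request is dropped silently.
Failure modes: none; pure function, no I/O.
Tests: covered by the dedupe unit tests.
Not done (and why): no async path, not needed yet."""
```

Wire something like this into CI and "I cannot answer those" becomes a red X instead of a judgment call you skip when you're tired.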
If I cannot answer those, I do not merge.
Because I know what happens otherwise.
I get fast today and slow forever.
“This is just good engineering, nothing new”
Exactly.
That’s the point.
AI did not change what good engineering is.
It changed how easy it is to accidentally do bad engineering.
It lowered the effort required to create complexity.
So we need friction in the right places.
Not bureaucracy.
Friction that forces ownership.
Practical patterns (non-hype, actually usable)
Here are a few patterns that make AI helpful without letting it rot your repo:
Use it for diffs, not features
Ask for the smallest change that moves you forward, then iterate.
Make it write tests and edge cases
Not because it’s perfect, but because it will often suggest failure modes you forgot to consider.
Make it explain the code to you like you are tired
If it can’t do that, it’s too complex or too hand-wavy to merge.
Keep a “kill switch” mindset
Prefer designs you can remove in one commit if they turn out to be wrong.
Treat generated code as untrusted input
Same posture as copy-pasting from Stack Overflow, but faster and more frequent.
The part people avoid: responsibility
This is the emotional part for me.
A lot of us got into software because it felt like a clean meritocracy: you ship, it works, you win.
AI blurs the line between “I built this” and “I assembled this.”
That can mess with your identity.
So some devs swing into denial: “I don’t use it, I’m pure.”
Other devs swing into cosplay: “AI built everything, I’m 10x.”
Both are insecurity.
The mature posture is boring:
Use it. Verify it. Own it.
Your future self will thank you.
A question I want to ask the Dev.to crowd
What is your “AI code ownership” rule right now?
Do you have a hard line like “no generated code without tests” or “no generated code without a design note”?
Or are you just vibing and hoping future you figures it out?