
Plausible Code Is the New Technical Debt

DEV Community · by Jonathan Murray · April 2, 2026 · 6 min read


I have a take that is going to annoy two groups of people at the same time:

  • The “real engineers don’t use AI” crowd

  • The “AI wrote my whole app” crowd

Here it is:

If AI is in your workflow, your codebase is now a human factors problem.

Not a model problem.

Not a prompt problem.

A human problem.

Because the hardest part is no longer generating code.

The hardest part is knowing what to trust, what to delete, what to keep, and what you are willing to be responsible for at 2:00 AM when prod is on fire and the person who “helped” is a chat bubble with no pager.

The new sin is not bad code. It’s unowned code.

AI makes it easy to produce code that looks plausible.

That’s the trap.

Plausible is not correct. Plausible is not maintainable. Plausible is not secure. Plausible is not even consistent with your repo.

Plausible just means your brain gets a quick dopamine hit and says: “ship it.”

So here’s the controversial thing I think we should start saying out loud:

If you did not read it, you did not write it.

If you did not write it, you do not own it.

If you do not own it, it does not belong in main.

That’s not anti-AI. That’s pro-software.

“But I can read it later”

No you won’t.

You will merge it while it’s fresh. Then a week later you will forget you even asked for it. Then three months later it will fail in a weird edge case and you will be in a code archaeology session, scrolling through a file full of polite variable names and zero intent.

AI code has a smell.

Not because it is always bad.

Because it often has no story.

Human-written code usually has fingerprints:

  • slightly annoying but consistent naming

  • weird shortcuts taken for a specific reason

  • comments that reflect real pain

  • a mental model that shows up across files

AI code often looks clean but detached, like it was written by someone who will never have to maintain it.

Which is true.

The real cost is not bugs. It’s ambiguity.

Bugs are normal. We have tests. We have monitoring. We have rollbacks.

Ambiguity is poison.

Ambiguity is when you can’t tell:

  • what the function is supposed to guarantee

  • what failure looks like

  • what the invariants are

  • why a decision was made

  • what tradeoff was chosen

AI generates code faster than it generates intent.

So if you are using AI and you are not also increasing clarity, you are building a repo that will eventually punish you.

The “AI pair programmer” fantasy is incomplete

Most devs use AI like a hyperactive junior.

“Write me a thing.”

It writes a thing.

You merge the thing.

That is not pairing.

Pairing is: reasoning out loud, constraints, tradeoffs, and a shared model of the system.

So the only way AI becomes a legitimate pair is if you force it to act like one.

Which means you need to change what you ask for.

Instead of: “write the code”

Ask:

  • “Before you write anything, tell me what you think I’m trying to do.”

  • “List assumptions you are making about the system.”

  • “Propose 2 approaches and argue for one.”

  • “Tell me how this fails.”

  • “Write tests first.”

  • “Show me the minimal diff that gets us there.”

If the tool cannot explain itself, it is not helping. It is performing.

A rule that saved me from shipping garbage

I started doing something that feels almost too simple:

Every AI-generated change must come with a receipt.

Not a comment block of fluff.

A receipt like:

  • What problem is this solving, in one sentence?

  • What are the inputs and outputs, explicitly?

  • What are the invariants?

  • What are the failure modes?

  • What tests prove it?

  • What did we choose not to do, and why?

If I cannot answer those, I do not merge.

Because I know what happens otherwise.

I get fast today and slow forever.
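If you want the receipt to be more than a vow, you can make it a gate. Here is a minimal sketch of a pre-merge check that refuses a PR whose description is missing any receipt field. The field names and the stdin-based interface are my own invention for illustration, not a standard; adapt them to whatever template your team actually uses.

```python
# Hypothetical pre-merge gate: fail if the PR description is missing
# any "receipt" field. Field labels below are illustrative, not a standard.
import sys

REQUIRED_FIELDS = [
    "Problem:",         # one-sentence problem statement
    "Inputs/Outputs:",  # the explicit contract
    "Invariants:",
    "Failure modes:",
    "Tests:",
    "Not doing:",       # rejected alternatives, and why
]

def missing_receipt_fields(pr_body: str) -> list[str]:
    """Return the receipt fields absent from a PR description."""
    return [field for field in REQUIRED_FIELDS if field not in pr_body]

if __name__ == "__main__":
    missing = missing_receipt_fields(sys.stdin.read())
    if missing:
        print("Receipt incomplete, refusing merge. Missing:", ", ".join(missing))
        sys.exit(1)
    print("Receipt complete.")
```

Wired into CI as a required check, this turns "I do not merge" from a personal habit into a team default.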

“This is just good engineering, nothing new”

Exactly.

That’s the point.

AI did not change what good engineering is.

It changed how easy it is to accidentally do bad engineering.

It lowered the effort required to create complexity.

So we need friction in the right places.

Not bureaucracy.

Friction that forces ownership.

Practical patterns (non-hype, actually usable)

Here are a few patterns that make AI helpful without letting it rot your repo:

Use it for diffs, not features

Ask for the smallest change that moves you forward, then iterate.

Make it write tests and edge cases

Not because it’s perfect, but because it will often suggest failure modes you forgot to consider.

Make it explain the code to you like you are tired

If it can’t do that, it’s too complex or too hand-wavy to merge.

Keep a “kill switch” mindset

Prefer designs you can remove in one commit if it turns out to be wrong.

Treat generated code as untrusted input

Same posture as copy-pasting from Stack Overflow, but faster and more frequent.
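The "untrusted input" posture can be concrete: before merging a generated helper, pin down its contract with edge cases you chose yourself. A sketch, using a hypothetical AI-generated `slugify()` as the code under review (the function and its edge cases are my example, not from the article):

```python
# The generated helper under review (illustrative).
import re

def slugify(title: str) -> str:
    """Turn a title into a URL-safe slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

# Edge cases a human picks deliberately: punctuation, whitespace-only
# input, and separator runs. If any of these surprise you, the code
# does not go in main yet.
assert slugify("Hello, World!") == "hello-world"
assert slugify("   ") == "untitled"
assert slugify("--already--slugged--") == "already-slugged"
```

The point is not that these three asserts are sufficient; it's that the failure modes were chosen by the person who will carry the pager, not by the tool that wrote the code.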

The part people avoid: responsibility

This is the emotional part for me.

A lot of us got into software because it felt like a clean meritocracy: you ship, it works, you win.

AI blurs the line between “I built this” and “I assembled this.”

That can mess with your identity.

So some devs swing into denial: “I don’t use it, I’m pure.”

Other devs swing into cosplay: “AI built everything, I’m 10x.”

Both are insecurity.

The mature posture is boring:

Use it. Verify it. Own it.

Your future self will thank you.

A question I want to ask the Dev.to crowd

What is your “AI code ownership” rule right now?

Do you have a hard line like “no generated code without tests” or “no generated code without a design note”?

Or are you just vibing and hoping future you figures it out?
