
I Analyzed 500 AI Coding Mistakes and Built an ESLint Plugin to Catch Them

DEV Community · by Rob Simpson · April 4, 2026 · 5 min read


Here's a pattern you've probably seen:

```javascript
const results = items.map(async (item) => {
  return await fetchItem(item);
});
```


Looks fine, right? Your AI assistant wrote it. Tests pass. Code review approves it.

Then production hits, and results is an array of Promises — not the values you expected. The await on line 2 does nothing. You needed Promise.all(items.map(...)) or a for...of loop.
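Both fixes can be sketched side by side with a stubbed `fetchItem` (hypothetical, standing in for any async call):

```javascript
// Stub standing in for a real async fetch (hypothetical).
const fetchItem = async (item) => item.toUpperCase();

async function main() {
  const items = ["a", "b"];

  // ❌ The trap: each async callback returns a Promise, so this is Promise[]
  const broken = items.map(async (item) => await fetchItem(item));
  console.log(broken[0] instanceof Promise); // true

  // ✅ Fix 1: resolve everything in parallel
  const parallel = await Promise.all(items.map((item) => fetchItem(item)));

  // ✅ Fix 2: resolve one at a time with for...of
  const sequential = [];
  for (const item of items) {
    sequential.push(await fetchItem(item));
  }

  console.log(parallel, sequential); // ["A", "B"] ["A", "B"]
}

main();
```

Which fix you want depends on the call: `Promise.all` runs the requests concurrently and rejects fast on the first failure; `for...of` runs them sequentially, which matters when order or rate limits do.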

This isn't a TypeScript bug. It's a common LLM coding mistake — one of hundreds I found when I started researching AI-generated code quality.

The Problem: AI Writes Code That Works, Not Code That's Right

LLMs are excellent at writing code that passes tests. They're terrible at writing code that handles edge cases, maintains consistency, and follows best practices under the hood.

After reviewing several empirical studies on LLM-generated code bugs — including an analysis of 333 bugs and PromptHub's study of 558 incorrect snippets — I found clear patterns emerging:

Bug Type                     Frequency
Missing corner cases         15.3%
Misinterpretations           20.8%
Hallucinated objects/APIs    9.6%
Incorrect conditions         High
Missing code blocks          40%+

The most frustrating part? Many of these are preventable at lint time.

The Solution: ESLint Rules Designed for AI-Generated Code

I built eslint-plugin-llm-core — an ESLint plugin with 20 rules specifically designed to catch the mistakes AI coding assistants make most often.

Not just generic best practices, but patterns I've seen repeatedly in AI-generated codebases:

  • Async/await misuse

  • Inconsistent error handling

  • Missing null checks

  • Magic numbers instead of named constants

  • Deep nesting instead of early returns

  • Empty catch blocks that swallow errors

  • Generic variable names that obscure intent
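To make one of those bullets concrete, here is the magic-number pattern in a hypothetical staleness check (`isStale` and the constants are illustrative, not from the plugin's docs):

```javascript
// ❌ What do 3 and 86400000 mean? Future readers (and AIs) have to guess.
function isStale(timestamp) {
  return Date.now() - timestamp > 3 * 86400000;
}

// ✅ Named constants carry the intent
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const MAX_AGE_DAYS = 3;

function isStaleNamed(timestamp) {
  return Date.now() - timestamp > MAX_AGE_DAYS * MS_PER_DAY;
}
```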

Example: The Async Array Callback Trap

```javascript
// ❌ AI often writes this
const userIds = users.map(async (user) => {
  return await db.getUser(user.id);
});
// userIds is Promise[] — not User[]

// ✅ What you actually need
const userIds = await Promise.all(
  users.map((user) => db.getUser(user.id))
);
```


The plugin catches this with no-async-array-callbacks:

```text
57:27  error  Avoid passing async functions to array methods  llm-core/no-async-array-callbacks

  This pattern returns an array of Promises, not the resolved values.
  Consider using Promise.all() or a for...of loop instead.
```


Notice the error message? It's designed to teach, not just complain. The goal is to help developers (and their AI assistants) understand why it's wrong.

Example: The Empty Catch Anti-Pattern

```javascript
// ❌ AI often generates this
try {
  await processData(data);
} catch (e) {
  // TODO: handle error
}
```


The no-empty-catch rule catches this:

```text
63:11  error  Empty catch block silently swallows errors  llm-core/no-empty-catch

  Unhandled errors make debugging difficult and can hide critical failures.
  Either handle the error, rethrow it, or log it with context.
```


Example: Deep Nesting Instead of Early Returns

```typescript
// ❌ AI loves nesting
function processData(data: Data | null) {
  if (data) {
    if (data.items) {
      if (data.items.length > 0) {
        return data.items.map(processItem);
      }
    }
  }
  return [];
}

// ✅ Early returns are cleaner
function processData(data: Data | null) {
  if (!data?.items?.length) return [];
  return data.items.map(processItem);
}
```


The prefer-early-return rule encourages the flatter pattern.

The Research Behind the Rules

Each rule is backed by observed patterns in LLM-generated code:

Rule                       Bug Pattern Addressed
no-async-array-callbacks   Missing Promise.all, incorrect async flow
no-empty-catch             Silent error swallowing
no-magic-numbers           Unmaintainable constants
prefer-early-return        Deep nesting, unclear control flow
prefer-unknown-in-catch    any-typed catch parameters
throw-error-objects        Throwing strings instead of Error instances
structured-logging         Inconsistent log formats
consistent-exports         Mixed default/named exports
explicit-export-types      Missing return types on public functions
no-commented-out-code      Dead code accumulation

Full rule documentation: github.com/pertrai1/eslint-plugin-llm-core
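One row worth seeing concretely is throw-error-objects. A sketch with a hypothetical `loadConfig` helper (not plugin code):

```javascript
// ❌ Thrown strings carry no stack trace and fail `err instanceof Error`
function loadConfigBad(path) {
  throw "config not found: " + path;
}

// ✅ Error objects keep the stack and compose with error-handling code
function loadConfigGood(path) {
  throw new Error(`config not found: ${path}`);
}

try {
  loadConfigGood("./app.json");
} catch (err) {
  console.log(err instanceof Error); // true — the bad version would log false
}
```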

Why Not Just Use typescript-eslint?

Great question. typescript-eslint is excellent — this plugin is designed to complement it, not replace it.

The difference is focus:

                typescript-eslint                 eslint-plugin-llm-core
Focus           TypeScript language correctness   AI coding pattern prevention
Error messages  Technical, spec-focused           Educational, context-rich
Rule design     Language spec compliance          Observed LLM bug patterns

You should use both. typescript-eslint catches TypeScript-specific issues. llm-core catches patterns that LLMs repeatedly get wrong — regardless of whether they're technically valid TypeScript.
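A combined flat config might look like this. Sketch only: typescript-eslint's flat-config export shape varies by version, so check its docs for the exact form.

```javascript
// eslint.config.js — running both plugins side by side (assumed setup)
import tseslint from 'typescript-eslint';
import llmCore from 'eslint-plugin-llm-core';

export default [
  // TypeScript language correctness
  ...tseslint.configs.recommended,
  // LLM bug-pattern prevention on top
  {
    plugins: { 'llm-core': llmCore },
    rules: { ...llmCore.configs.recommended.rules },
  },
];
```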

Getting Started

```shell
npm install -D eslint-plugin-llm-core
```


```javascript
// eslint.config.js
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: {
      'llm-core': llmCore,
    },
    rules: {
      ...llmCore.configs.recommended.rules,
    },
  },
];
```


That's it. Zero config for the recommended ruleset.

The Bigger Picture: Teaching AI Better Habits

Here's the interesting part: these rules don't just catch mistakes. They teach.

When your AI assistant sees an error message like this one:

Avoid passing async functions to array methods. This pattern returns an array of Promises, not the resolved values. Consider using Promise.all() or a for...of loop instead.


It learns. Next time, it writes the correct pattern.

In looped agent workflows — where AI iteratively writes, tests, and fixes code — this feedback loop compounds. Each lint error becomes a teaching moment.
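That loop can be sketched end to end with stubs. Here `generateCode` and `lint` are hypothetical stand-ins for an LLM call and an ESLint run (in practice the lint step would use ESLint's Node API or CLI, not a regex):

```javascript
// Stub "linter": flags async array callbacks with a crude regex check.
const lint = async (code) =>
  /\.map\(\s*async/.test(code)
    ? ["Avoid passing async functions to array methods…"]
    : [];

// Stub "LLM": produces a buggy first draft, then a fix once it sees feedback.
const generateCode = async (task, feedback) =>
  feedback.length === 0
    ? "const r = items.map(async (i) => load(i));"      // first, buggy draft
    : "const r = await Promise.all(items.map(load));";  // corrected draft

async function agentLoop(task, { maxIterations = 3 } = {}) {
  let feedback = [];
  for (let i = 0; i < maxIterations; i++) {
    const code = await generateCode(task, feedback);
    feedback = await lint(code);
    if (feedback.length === 0) return code; // lint-clean: stop iterating
    // Otherwise the teaching-style messages feed the next prompt
  }
  throw new Error("lint errors remained after max iterations");
}
```

The point of the teaching-style messages is exactly this loop: the richer the explanation in `feedback`, the better the next draft.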

What's Next

The plugin is early but functional. Current focus areas:

  • Auto-fixes for fixable rules

  • More logging library detection (Pino, Winston, Bunyan)

  • Additional rules based on ongoing research

  • Evidence gathering on whether rules actually improve AI-generated code quality

If you're working with AI coding assistants — Cursor, Claude Code, Copilot, or others — I'd love your feedback on what patterns you've seen them get wrong.

Try It

```shell
npm install -D eslint-plugin-llm-core
```


GitHub: pertrai1/eslint-plugin-llm-core

npm: eslint-plugin-llm-core

Tried it? Hate it? Have ideas for rules I missed? Open an issue or reach out. I'm actively looking for contributors who've seen AI write weird code.
