# I Analyzed 500 AI Coding Mistakes and Built an ESLint Plugin to Catch Them
Here's a pattern you've probably seen:
```typescript
const results = items.map(async (item) => {
  return await fetchItem(item);
});
```
Looks fine, right? Your AI assistant wrote it. Tests pass. Code review approves it.
Then production hits, and `results` is an array of Promises — not the values you expected. The `await` on line 2 does nothing. You needed `Promise.all(items.map(...))` or a `for...of` loop.
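Both fixes are short. Here's a runnable sketch of the two options — `fetchItem` is a stand-in function invented for illustration, not an API from any real codebase:

```typescript
// fetchItem is a stand-in for whatever async call the original snippet made.
async function fetchItem(item: number): Promise<number> {
  return item * 2;
}

// Fix 1: Promise.all turns Promise<number>[] into number[] (runs concurrently).
async function fetchAll(items: number[]): Promise<number[]> {
  return Promise.all(items.map((item) => fetchItem(item)));
}

// Fix 2: a for...of loop awaits each call in turn (runs sequentially).
async function fetchSequentially(items: number[]): Promise<number[]> {
  const results: number[] = [];
  for (const item of items) {
    results.push(await fetchItem(item));
  }
  return results;
}
```

Use `Promise.all` when the calls are independent; use the `for...of` form when order matters or you want to limit concurrency.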
This isn't a TypeScript bug. It's a common LLM coding mistake — one of hundreds I found when I started researching AI-generated code quality.
## The Problem: AI Writes Code That Works, Not Code That's Right
LLMs are excellent at writing code that passes tests. They're terrible at writing code that handles edge cases, maintains consistency, and follows best practices under the hood.
After reviewing several empirical studies on LLM-generated code bugs — including an analysis of 333 bugs and PromptHub's study of 558 incorrect snippets — I found clear patterns emerging:
| Bug Type | Frequency |
| --- | --- |
| Missing corner cases | 15.3% |
| Misinterpretations | 20.8% |
| Hallucinated objects/APIs | 9.6% |
| Incorrect conditions | High |
| Missing code blocks | 40%+ |
The most frustrating part? Many of these are preventable at lint time.
## The Solution: ESLint Rules Designed for AI-Generated Code
I built eslint-plugin-llm-core — an ESLint plugin with 20 rules specifically designed to catch the mistakes AI coding assistants make most often.
Not just generic best practices, but patterns I've seen repeatedly in AI-generated codebases:
- Async/await misuse
- Inconsistent error handling
- Missing null checks
- Magic numbers instead of named constants
- Deep nesting instead of early returns
- Empty catch blocks that swallow errors
- Generic variable names that obscure intent
### Example: The Async Array Callback Trap
```typescript
// ❌ AI often writes this
const userIds = users.map(async (user) => {
  return await db.getUser(user.id);
});
// userIds is Promise<User>[] — not User[]

// ✅ What you actually need
const userIds = await Promise.all(
  users.map((user) => db.getUser(user.id))
);
```
The plugin catches this with `no-async-array-callbacks`:

```
57:27  error  Avoid passing async functions to array methods  llm-core/no-async-array-callbacks
       This pattern returns an array of Promises, not the resolved values. Consider using Promise.all() or a for...of loop instead.
```
Notice the error message? It's designed to teach, not just complain. The goal is to help developers (and their AI assistants) understand why it's wrong.
### Example: The Empty Catch Anti-Pattern
```typescript
// ❌ AI often generates this
try {
  await processData(data);
} catch (e) {
  // TODO: handle error
}
```
The `no-empty-catch` rule catches this:

```
63:11  error  Empty catch block silently swallows errors  llm-core/no-empty-catch
       Unhandled errors make debugging difficult and can hide critical failures. Either handle the error, rethrow it, or log it with context.
```
### Example: Deep Nesting Instead of Early Returns
```typescript
// ❌ AI loves nesting
function processData(data: Data | null) {
  if (data) {
    if (data.items) {
      if (data.items.length > 0) {
        return data.items.map(processItem);
      }
    }
  }
  return [];
}

// ✅ Early returns are cleaner
function processData(data: Data | null) {
  if (!data?.items?.length) return [];
  return data.items.map(processItem);
}
```
The `prefer-early-return` rule encourages the flatter pattern.
## The Research Behind the Rules
Each rule is backed by observed patterns in LLM-generated code:
| Rule | Bug Pattern Addressed |
| --- | --- |
| `no-async-array-callbacks` | Missing Promise.all, incorrect async flow |
| `no-empty-catch` | Silent error swallowing |
| `no-magic-numbers` | Unmaintainable constants |
| `prefer-early-return` | Deep nesting, unclear control flow |
| `prefer-unknown-in-catch` | `any`-typed catch params |
| `throw-error-objects` | Throwing strings instead of Error instances |
| `structured-logging` | Inconsistent log formats |
| `consistent-exports` | Mixed default/named exports |
| `explicit-export-types` | Missing return types on public functions |
| `no-commented-out-code` | Dead code accumulation |
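To make two of those concrete, here's a sketch of the kind of code `throw-error-objects` and `prefer-unknown-in-catch` are aimed at. The function and messages are invented for illustration, not taken from the plugin's docs:

```typescript
// ❌ throw-error-objects targets: throw "invalid port";
//    Throwing a string loses the stack trace and breaks `instanceof Error`.
function parsePort(input: string): number {
  const port = Number(input);
  if (Number.isNaN(port)) {
    throw new Error(`invalid port: ${input}`); // ✅ Error instance, keeps stack
  }
  return port;
}

try {
  parsePort("not-a-number");
} catch (err: unknown) {
  // ✅ prefer-unknown-in-catch: type the param `unknown`, narrow before use,
  //    instead of letting it default to `any`.
  const message = err instanceof Error ? err.message : String(err);
  console.log(message);
}
```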
Full rule documentation: github.com/pertrai1/eslint-plugin-llm-core
## Why Not Just Use typescript-eslint?
Great question. typescript-eslint is excellent — this plugin is designed to complement it, not replace it.
The difference is focus:
| | typescript-eslint | eslint-plugin-llm-core |
| --- | --- | --- |
| Focus | TypeScript language correctness | AI coding pattern prevention |
| Error messages | Technical, spec-focused | Educational, context-rich |
| Rule design | Language spec compliance | Observed LLM bug patterns |
You should use both. typescript-eslint catches TypeScript-specific issues. llm-core catches patterns that LLMs repeatedly get wrong — regardless of whether they're technically valid TypeScript.
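A combined flat config might look like this. This is a sketch, not an official recipe: the `typescript-eslint` import follows its documented flat-config entry point, and your project layout may call for different options:

```javascript
// eslint.config.js — running both plugins side by side (sketch).
import tseslint from 'typescript-eslint';
import llmCore from 'eslint-plugin-llm-core';

export default tseslint.config(
  // TypeScript language correctness
  ...tseslint.configs.recommended,
  // LLM bug-pattern rules layered on top
  {
    plugins: { 'llm-core': llmCore },
    rules: { ...llmCore.configs.recommended.rules },
  }
);
```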
## Getting Started
```shell
npm install -D eslint-plugin-llm-core
```
```javascript
// eslint.config.js
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: {
      'llm-core': llmCore,
    },
    rules: {
      ...llmCore.configs.recommended.rules,
    },
  },
];
```
That's it. Zero config for the recommended ruleset.
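If the full recommended set is too noisy at first, standard ESLint flat-config overrides let you dial individual rules up or down. A sketch, using rule names from the table above:

```javascript
// eslint.config.js — start from the recommended set, then adjust (sketch).
import llmCore from 'eslint-plugin-llm-core';

export default [
  {
    plugins: { 'llm-core': llmCore },
    rules: {
      ...llmCore.configs.recommended.rules,
      'llm-core/no-magic-numbers': 'warn',     // downgrade while migrating
      'llm-core/no-commented-out-code': 'off', // opt out of a single rule
    },
  },
];
```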
## The Bigger Picture: Teaching AI Better Habits
Here's the interesting part: these rules don't just catch mistakes. They teach.
When your AI assistant sees the error messages:
> Avoid passing async functions to array methods. This pattern returns an array of Promises, not the resolved values. Consider using Promise.all() or a for...of loop instead.
It learns. Next time, it writes the correct pattern.
In looped agent workflows — where AI iteratively writes, tests, and fixes code — this feedback loop compounds. Each lint error becomes a teaching moment.
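As a sketch of that feedback step, an agent harness could flatten the output of `eslint --format json` into strings for the next prompt. The parsing helper here is hypothetical, but the report shape follows ESLint's documented JSON formatter (an array of file results, each with a `messages` list):

```typescript
// Minimal shape of ESLint's `--format json` output.
interface LintMessage {
  ruleId: string | null;
  line: number;
  column: number;
  message: string;
}
interface FileResult {
  filePath: string;
  messages: LintMessage[];
}

// Flatten a report into one feedback line per lint message, so the
// educational rule text travels back into the agent's context.
function toFeedback(report: FileResult[]): string[] {
  return report.flatMap((file) =>
    file.messages.map(
      (m) => `${file.filePath}:${m.line}:${m.column} [${m.ruleId}] ${m.message}`
    )
  );
}
```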
## What's Next
The plugin is early but functional. Current focus areas:
- Auto-fixes for fixable rules
- More logging library detection (Pino, Winston, Bunyan)
- Additional rules based on ongoing research
- Evidence gathering on whether rules actually improve AI-generated code quality
If you're working with AI coding assistants — Cursor, Claude Code, Copilot, or others — I'd love your feedback on what patterns you've seen them get wrong.
## Try It
```shell
npm install -D eslint-plugin-llm-core
```
GitHub: pertrai1/eslint-plugin-llm-core
npm: eslint-plugin-llm-core
Tried it? Hate it? Have ideas for rules I missed? Open an issue or reach out. I'm actively looking for contributors who've seen AI write weird code.
DEV Community
https://dev.to/pertrai1/i-analyzed-500-ai-coding-mistakes-and-built-an-eslint-plugin-to-catch-them-jme