What distinguishes great engineers when AI writes the code?
Article URL: https://elicit.com/blog/engineering-interviews-in-the-era-of-agents Comments URL: https://news.ycombinator.com/item?id=47618548 Points: 1 # Comments: 0
Two years ago we published our stance on coding assistants in interviews. That post's core question—does the candidate understand the code?—still holds, but there has obviously been a seismic shift in the underlying landscape since then.
All of our engineers now spend most of their day steering teams of agents, not typing code out manually.
Our interviews were shifting gradually to match this change in habits, then transformed rapidly and discontinuously over the last few months. This post describes what candidates can expect in our interview process now and offers some takeaways that might be helpful to other hiring managers.
From comprehend to command
The framing I gave in the last post was about comprehension. Has the candidate shown that they have a solid grasp of what the code is doing—even if they didn't write every character?
It's no longer enough to be able to follow the models: candidates are distinguished by being able to lead them.
A great candidate will demonstrate their ability to steer AI effectively, make good product and technical decisions, and know when and how to intervene.
This is an additive requirement, not a replacement. It's still essential to understand the code, but that's no longer where we gather our most interesting insights into candidate aptitude.*
Some interviews are still analogue
There are some interviews where we explicitly ask candidates not to use AI. These include:
- Values and culture. This is an essentially human conversation: we are trying to get a sense of your motivations and how you communicate.
- Product sense. Again an interview which explores how you communicate, but focussed on technical topics in a more hands-on setting.
- System design. Our interview is based on past experience, rather than hypothetical challenges, so it's not really a fit for AI help.†
- Some basic coding tasks. We use toy coding problems early in the process for some of our roles—checking baseline language fluency.
Of these examples, I would expect the values & culture conversation to be AI-free for the longest.
Where AI fluency is the point
The interviews where we want you to use AI tend to be deeper technical sessions. You're on your own machine, using whatever tools you're comfortable with. None of these challenges demands a huge number of tokens, but if that changes in future we would reimburse candidates' inference costs.
What we're looking for
We're assessing a few core attributes:
- Prioritisation. There's more to do than time allows. What do you tackle first, and is that a conscious choice?
- Decision quality. We don't need you to make the same call we would. We need you to make a crisp decision and be able to explain why.
- Spotting risk. When is something safe to assume? When do you need to stop and check?
- Steering, not just prompting. Are you thinking about the how and why of the work, or just feeding the spec to an agent and hoping for the best?
- Productive workflow. Great candidates will take advantage of multiple agents running concurrently—and have a method to keep track of everything.
- Code quality. AI slop—generated code that hasn't been reviewed, shaped, or understood—is obviously a red flag.
What good looks like
One candidate spent the first ten minutes reading the spec, pairing with an agent to sketch out an approach in a markdown file—explicitly noting which requirements they'd tackle first and which they'd defer.
Before writing any code, they flagged a gap in the specification—something that wasn't stated but would meaningfully affect the implementation—and documented their assumption in a QUESTIONS.md file. While they were working on the plan, they had agents researching things like frameworks and libraries they might use, and had a critic agent poke holes in their work as they went.
By the time they kicked off Claude Code to start the implementation, they could provide a tight, focused plan rather than the raw spec. They didn't end up finishing every requirement, but their deliberate choices inspired confidence, they could explain each one, and they'd surfaced an ambiguity that most candidates sailed past without noticing.
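The artefacts from that workflow can be sketched concretely. This is a hypothetical illustration, not our actual exercise: the file names `PLAN.md` and `QUESTIONS.md` mirror the post, but their contents and the commented agent invocation are invented for the example.

```shell
# Illustrative plan-first artefacts (contents are hypothetical).

# 1. A tight, ordered plan, written before any implementation starts.
cat > PLAN.md <<'EOF'
# Plan
1. Core requirement A (first: highest product risk)
2. Requirement B
Deferred: nice-to-have C (explain why in the debrief)
EOF

# 2. Explicitly recorded assumptions about gaps in the spec.
cat > QUESTIONS.md <<'EOF'
- Spec doesn't say whether retries must be idempotent.
  Assuming yes; this affects the storage layer.
EOF

# 3. Only then hand the agent the plan, not the raw spec
#    (invocation illustrative):
# claude -p "Implement step 1 of PLAN.md; stop when its tests pass."
```

The point isn't the specific files: it's that the agent receives a deliberate, reviewable plan rather than an unfiltered spec, and the interviewer can see the candidate's priorities and assumptions in writing.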
What doesn't work
We've seen the opposite pattern too: candidates who yolo `claude --dangerously-skip-permissions` and blithely accept every suggestion without reviewing the diff. The result may kind-of work, but it's not the artefacts that speak volumes in these cases: the lack of care, indifference to detail, and hubris make these candidates easy to reject.
What we've learned
We've been running these interviews for a few months now. Some observations:
Plan-first thinking is rarer than we expected. A surprising number of candidates jump straight into having the model write code without spending time on an approach. The basics of effective agent-assisted coding—something like "plan, review, implement, review"—aren't as widespread as you'd assume.
Candidates often accept model output uncritically. Rapidly skimming a big diff and saying "looks good" when there's no realistic chance they've reviewed it is more common than we'd like. The ability to meaningfully review AI-generated code is an increasingly important engineering skill, and its absence is conspicuous.
AI is creeping into non-coding interviews too. We've encountered candidates using real-time AI to help answer screening questions, AI filters to alter their appearance on video calls, and seemingly parroting pre-generated questions from an LLM.
The old format is obsolete. Your favourite coding tool could solve most of the technical tasks in our two-year-old interview within a couple of minutes.
What's still unresolved
We'd be lying if we said we had this all figured out. Some open questions:
The treadmill problem. I created a new take-home challenge last year, only for Opus 4.5 to be released and basically one-shot it. Calibrating difficulty to "just beyond what models can comfortably do" is setting yourself up for constant rework. We think it's better to structure interviews so they inherently require nuanced skills—prioritisation, taste, communication—which are more resilient to model advances.
Should we assess code review directly? An increasing proportion of engineering work involves reviewing model-generated plans and code. Why not measure that skill directly? We're not doing this yet, but we'll probably start doing it in the next few months.
The role itself is shifting. Reviewing code, evaluating technical designs, applying product sense, pragmatic prioritisation, exercising judgement over agent output—these skills feel increasingly central to the job. Our interviews don't yet measure all of them explicitly, but we're actively thinking about how to close that gap.
If this sounds like you
If you're the kind of engineer who thinks hard about how to work with AI—not just how to prompt it—we'd like to talk!
* One day—perhaps soon—it won't be so important to understand the code. It seems likely that models will reach a capability level which renders our attempts to spot-check their Python about as useful as spot-checking a compiler's assembly. In almost all cases: a waste of time at best.
† It's increasingly questionable to conduct a conventional "whiteboard an architecture for this problem" interview. I care more about a candidate's ability to explore a solution space and synthesise ideas rather than possess rote knowledge of patterns. We are already co-authoring system designs with agents in reality—let's exercise those skills at interview-time.