Tesla's robotaxis are reportedly remotely driven by humans, sometimes while the cars are moving
In a letter shared with Senator Ed Markey (D-Mass.), Tesla admitted that its robotaxis are sometimes driven remotely by human operators, Wired reports. Competing self-driving car companies sometimes rely on human operators to tell robotaxi software how to get itself unstuck, but letting operators actually drive those cars remotely is more unusual.
"As a redundancy measure in rare cases … [remote assistance operators] are authorized to temporarily assume direct vehicle control as the final escalation maneuver after all other available intervention actions have been exhausted," wrote Karen Steakley, Tesla's director of public policy and business development, in the letter. In those situations, operators are reportedly able to take over Tesla's robotaxis when they're moving at speeds of around 2 mph or less, and then drive the car at up to 10 mph if the software permits it.
Engadget has contacted Tesla to confirm the details shared in Steakley's letter. We'll update the article if we hear back.
As Wired notes, that's a bit different from how other self-driving car companies handle human intervention. For example, Waymo's Driver software can call on human helpers — Waymo calls them "fleet response" — to offer context and answer questions that help it navigate complicated driving situations. The company claims these workers never drive the robotaxi themselves, but they can see the car's environment through its sensors to help it get unstuck. Self-driving car companies typically avoid remote operation, Wired writes, because technical limitations like latency and the limited perspective of a robotaxi's sensors make it hard to drive the cars safely from afar.
Tesla's approach to self-driving has always cut against the grain, though. Whereas competitors continue to rely on a mix of radar and other sensors to navigate, Tesla has focused exclusively on cameras for its Full Self-Driving (FSD) system. The company has also had to deal with a number of high-profile crashes related to FSD, which prompted a probe by the US National Highway Traffic Safety Administration in October 2025.
The company launched its robotaxi service in Austin, Texas, in June 2025 in a limited capacity, with human safety drivers sitting in the driver's seat in case of emergency. Tesla is also reportedly testing rides without safety drivers in the same area, which might be why it has contingencies for remote operators to step in.