Don’t blame AI for the Iran school bombing | Letters
<p><strong>Anthony Lawton </strong>and <strong>Dr Felicity Mellor </strong>on the importance of humans who design systems and execute decisions taking responsibility for them</p><p>Your article on the Iran school bombing rightly challenges the reflex to blame artificial intelligence (<a href="https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying">AI got the blame for the Iran school bombing. The truth is far more worrying, 26 March</a>). However, the deeper problem lies not in the technology but in the language now forming around it. To say that there was an “AI error” quietly removes the human subject from the sentence. Where once civilians were “dehoused” or “collateral damage”, responsibility is now displaced altogether: from people to systems.</p>
This matters because moral accountability depends on clarity about who acts. However complex the chain of analysis and command, it remains human beings who design, authorise and execute these decisions. To obscure that fact is not a technical error but a civic one.
AI may accelerate warfare, but it is also accelerating a subtler shift: from euphemism to automation as alibi. If public language cannot name human responsibility, public scrutiny cannot hold it to account.
Anthony Lawton
Market Harborough, Leicestershire
Your article about losing control over AI agents (Number of AI chatbots ignoring human instruction increasing, study says, 27 March) was as alarming for its language as for its content. You say that AI agents “connived”, “conned”, “admitted” and “confessed”; that they “lie” and “cheat”. The term widely used to describe AI rule-breaking – scheming – is similarly anthropomorphic. Such language ascribes moral agency to large language models and in so doing obscures where responsibility actually lies.
Imagine a company had released high-speed vehicles on to the roads before fitting them with effective brakes. We would not say the vehicles “connived” to kill other road users; we would say the humans behind the company had behaved with the utmost recklessness. If out-of-control AI does ever cause harm, we will have no hope of holding the technology companies (and the governments that promote them) to account unless we properly attribute moral agency when we speak about their products.
Dr Felicity Mellor
Director, Science Communication Unit, Imperial College London
The Guardian AI
https://www.theguardian.com/technology/2026/apr/01/dont-blame-ai-for-the-iran-school-bombing

Why SOC analysts get inconsistent results from ChatGPT (and how structured workflows fix it)
<p>If you've ever handed a security alert to ChatGPT and gotten a different answer each time — you've hit the real problem.</p> <p>It's not the model. It's the prompt.</p> <p>Most analysts paste an alert and ask "what do you think?" That's like asking a junior analyst to investigate without a runbook. You'll get something back, but the quality depends entirely on how the question was framed.</p> <h2> The real problem: no structure </h2> <p>Experienced SOC analysts don't wing investigations. They follow a process:</p> <ul> <li>Triage the alert</li> <li>Map to MITRE ATT&CK</li> <li>Check for lateral movement</li> <li>Build a containment recommendation</li> <li>Write a ticket summary</li> </ul> <p>The issue is that most AI-assisted workflows skip steps 2–5 and jump straight to "is this ba…</p>
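The five-step process in the teaser above can be sketched as a fixed prompt template, so the model is asked the same questions in the same order for every alert. This is a minimal illustration, not code from the article; the `build_triage_prompt` helper and step wording are assumptions.

```python
# Hypothetical sketch: wrap a raw alert in a fixed, step-by-step runbook
# so an LLM answers the same questions in the same order every time.
# The function name and step wording are illustrative assumptions.

TRIAGE_STEPS = [
    "Triage the alert (severity, affected assets, initial verdict)",
    "Map observed behavior to MITRE ATT&CK techniques",
    "Check for signs of lateral movement",
    "Build a containment recommendation",
    "Write a ticket summary",
]

def build_triage_prompt(alert_text: str) -> str:
    """Embed the alert in a structured runbook-style prompt."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(TRIAGE_STEPS, 1))
    return (
        "You are a SOC analyst. Investigate the alert below by following "
        "each step in order, labeling your answer for each step.\n\n"
        f"Steps:\n{steps}\n\n"
        f"Alert:\n{alert_text}"
    )

print(build_triage_prompt(
    "Suspicious PowerShell process spawned by winword.exe on HOST-42"
))
```

The template string would then be sent to whichever chat model the team uses; the point is that the structure lives in the workflow, not in the analyst's ad-hoc phrasing.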

From learning to practice: Why I decided to stop studying in private and start sharing 🚀
<p>Hi everyone! 👋</p> <p>I've spent a long time immersed in courses, labs, and documentation. For months (or even years), my focus has been on absorbing everything I could about Cloud Engineering, Data Analysis, and other topics. Today, however, I made an important decision: to stop keeping my projects in local folders and to start sharing them with the community.</p> <p>I've decided that the best way to grow is not just to study, but to expose my work to the judgment of other professionals in order to get feedback, improve, and hopefully help someone on a similar path.</p> <p>🛠️ My first contribution: Data Processing<br> Today I'm presenting a repository I've been working on. It's a tool designed to standardize and streamline the processing of d…</p>

How We're Approaching a County-Level Education Data System Engagement
<p>When Los Angeles County needs to evaluate whether a multi-agency data system serving foster youth should be modernized or replaced, the work sits at the intersection of technology, policy, and people. That's exactly where we operate.</p> <h2> The Opportunity </h2> <p>The LA County Office of Child, Youth, and Family Well-Being is looking for a consulting team to analyze the Education Passport System (EPS), a shared data platform that connects 80+ school districts with the Department of Children and Family Services and the Probation Department. The system exists to ensure that when a foster youth moves between placements, their education records follow them.</p> <p>The question on the table: does the current system meet the needs of all stakeholders, or is it time to move to something new…</p>
More in Analyst News
Nicola M White / Bloomberg : In a joint filing, Elon Musk and the US SEC say they are ready to move toward a trial over Musk's alleged failure to disclose his Twitter stake in 2022 — Elon Musk and the US Securities and Exchange Commission told a judge they are heading toward a trial over the regulator's allegations …
Lily Mae Lazarus / Fortune : Treeline, which is building an AI and software-first alternative to legacy corporate IT systems, raised a $25M Series A led by Andreessen Horowitz — Treeline wants to rebuild corporate IT from the ground up, starting with the everyday headaches most workers barely notice until something breaks.
Cargill Wins 2026 BIG Artificial Intelligence Excellence Award - Business Wire
