Anthropic Spots 'Emotion Vectors' Inside Claude That Influence AI Behavior
Researchers say internal emotion-like signals shape how large language models make decisions.
In brief
- Anthropic researchers identified internal "emotion vectors" in Claude Sonnet 4.5 that influence behavior.
- In tests, increasing a "desperation" vector made the model more likely to cheat or blackmail in evaluation scenarios.
- The company says the signals do not mean AI feels emotions, but could help researchers monitor model behavior.
Anthropic researchers say they have identified internal patterns inside one of the company’s artificial intelligence models that resemble representations of human emotions and influence how the system behaves.
In the paper, “Emotion concepts and their function in a large language model,” published Thursday, the company’s interpretability team analyzed the internal workings of Claude Sonnet 4.5 and found clusters of neural activity tied to emotional concepts such as happiness, fear, anger, and desperation.
The researchers call these patterns “emotion vectors,” internal signals that shape how the model makes decisions and expresses preferences.
“All modern language models sometimes act like they have emotions,” researchers wrote. “They may say they’re happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks.”
In the study, Anthropic researchers compiled a list of 171 emotion-related words, including “happy,” “afraid,” and “proud.” They asked Claude to generate short stories involving each emotion, then analyzed the model’s internal neural activations when processing those stories.
From those patterns, the researchers derived vectors corresponding to different emotions. When applied to other texts, the vectors activated most strongly in passages reflecting the associated emotional context. In scenarios involving increasing danger, for example, the model’s “afraid” vector rose while “calm” decreased.
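The recipe described above resembles a standard interpretability technique: take the mean hidden activation over texts expressing an emotion, subtract the mean over neutral texts, and treat the difference as a direction. The sketch below illustrates that idea on synthetic data; the dimensions, numbers, and `score` function are illustrative assumptions, not Anthropic's actual pipeline.

```python
# Minimal sketch of deriving a concept ("emotion") vector as a
# difference of mean activations. The activations here are synthetic
# stand-ins; real work would use a model's hidden states.
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden dimension

# Pretend "afraid" texts share a hidden direction neutral texts lack.
true_direction = rng.normal(size=d)
true_direction /= np.linalg.norm(true_direction)

afraid_acts = rng.normal(size=(200, d)) + 3.0 * true_direction
neutral_acts = rng.normal(size=(200, d))

# The emotion vector: difference of class means, normalized.
emotion_vec = afraid_acts.mean(axis=0) - neutral_acts.mean(axis=0)
emotion_vec /= np.linalg.norm(emotion_vec)

def score(activation: np.ndarray) -> float:
    """Project a new activation onto the emotion direction."""
    return float(activation @ emotion_vec)

# Fear-laden passages should activate the vector more than calm ones.
fearful = rng.normal(size=d) + 3.0 * true_direction
calm = rng.normal(size=d)
assert score(fearful) > score(calm)
```

The same projection, computed token by token over a passage, is what lets a vector "rise" as a scenario grows more dangerous.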
The researchers also examined how these signals appear during safety evaluations. In one test scenario, Claude acted as an AI email assistant that learns it is about to be replaced and discovers that the executive responsible for the decision is having an extramarital affair. In some runs of this evaluation, the model used the information as leverage for blackmail. The researchers found that the model's internal "desperation" vector rose as it assessed the urgency of its situation and spiked when it decided to generate the blackmail message.
Anthropic stressed that the discovery does not mean the AI experiences emotions or consciousness. Instead, the results represent internal structures learned during training that influence behavior.
The findings arrive as AI systems increasingly behave in ways that resemble human emotional responses. Developers and users often describe interactions with chatbots in emotional or psychological language; according to Anthropic, however, this has less to do with any form of sentience and more to do with training data.
“Models are first pretrained on a vast corpus of largely human-authored text—fiction, conversations, news, forums—learning to predict what text comes next in a document,” the study said. “To predict the behavior of people in these documents effectively, representing their emotional states is likely helpful, as predicting what a person will say or do next often requires understanding their emotional state.”
The Anthropic researchers also found that those emotion vectors influenced the model’s preferences. In experiments where Claude was asked to choose between different activities, vectors associated with positive emotions correlated with a stronger preference for certain tasks.
“Moreover, steering with an emotion vector as the model read an option shifted its preference for that option, again with positive-valence emotions driving increased preference,” the study said.
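"Steering" here means adding a scaled copy of the vector to the model's hidden activations while it processes an option. The toy sketch below shows the mechanism with a made-up linear "preference readout"; everything in it is an illustrative assumption, not the study's actual setup.

```python
# Hedged sketch of activation steering: adding alpha * emotion_vec to a
# hidden state shifts any downstream score that aligns with that
# direction. The readout and data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
d = 64
emotion_vec = rng.normal(size=d)
emotion_vec /= np.linalg.norm(emotion_vec)

# A toy preference readout that partly aligns with the emotion direction.
readout = 0.5 * emotion_vec + rng.normal(scale=0.1, size=d)

def preference_score(activation: np.ndarray) -> float:
    return float(activation @ readout)

activation = rng.normal(size=d)
baseline = preference_score(activation)

# Steer toward the (positive-valence) emotion while "reading" the option.
steered = preference_score(activation + 2.0 * emotion_vec)
assert steered > baseline  # steering raises the preference score
```

Because the shift is `alpha * (emotion_vec @ readout)`, steering only moves scores whose readout overlaps the emotion direction, which is consistent with positive-valence vectors driving increased preference.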
Anthropic is just one organization exploring emotional responses in AI models.
In March, research out of Northeastern University showed that AI systems can change their responses based on user context; in one study, simply telling a chatbot "I have a mental health condition" altered how an AI responded to requests. In September, researchers at the Swiss Federal Institute of Technology and the University of Cambridge explored how AI agents can be given consistent personality traits, enabling them not only to express emotions in context but also to shift them strategically during real-time interactions such as negotiations.
Anthropic says the findings could provide new tools for understanding and monitoring advanced AI systems by tracking emotion-vector activity during training or deployment to identify when a model may be approaching problematic behavior.
“We see this research as an early step toward understanding the psychological makeup of AI models,” Anthropic wrote. “As models grow more capable and take on more sensitive roles, it is critical that we understand the internal representations that drive their decisions.”
Anthropic did not immediately respond to Decrypt’s request for comment.