AI Giant Anthropic Files to Launch 'AnthroPAC' Amid Clash With Trump Administration
Claude developer Anthropic registered an employee-funded PAC amid a legal battle with the White House and rising election-year scrutiny of AI.
In brief
- Anthropic has filed with the FEC to create an employee-funded political action committee called AnthroPAC.
- The move follows a dispute with the Trump administration over military use of the Claude AI model.
- The filing shows how AI companies are preparing to engage more directly in U.S. politics.
Artificial intelligence giant Anthropic has filed paperwork with the Federal Election Commission to create a political action committee, signaling a deeper move into U.S. politics as the fight over AI policy and the company's ongoing battle with the White House intensify.
The San Francisco-based company registered the Anthropic PBC Political Action Committee, known as AnthroPAC, in a filing on Friday. The committee is structured as a separate segregated fund tied to the company and is authorized to make political donations funded by employee contributions. According to a report by Bloomberg, those contributions are capped at $5,000 per employee.
Employee-funded political action committees (PACs) allow companies to collect voluntary contributions from employees and distribute those funds to candidates and political committees.
Other tech companies that have established political PACs include Google, Microsoft, and Amazon. In 2024, those three PACs alone contributed more than $2.3 million to U.S. political candidates, according to campaign finance data from the nonprofit research group OpenSecrets. While contributions went to both Republicans and Democrats, donations skewed toward GOP candidates during the 2024 campaign season.
Anthropic’s move comes during an escalating conflict with President Donald Trump’s administration over the military use of its AI systems.
In February, Trump ordered federal agencies to stop using Anthropic's technology following a dispute between the company and the Pentagon over how the military could deploy its Claude AI model. Despite an ultimatum from the U.S. Department of Defense, Anthropic refused Pentagon demands to remove safeguards that prohibit the system from being used for mass domestic surveillance or fully autonomous lethal weapons.
In March, Anthropic filed a federal lawsuit challenging the government’s decision to label the company a national security “supply chain risk,” a designation that barred Pentagon contractors from doing business with the firm. The company argued the move was retaliation for its refusal to loosen restrictions on military uses of its AI.
Last week, U.S. District Judge Rita Lin issued a preliminary injunction blocking enforcement of the designation, finding the government’s actions likely violated Anthropic’s First Amendment and due process rights.
Anthropic has not publicly addressed the establishment of the PAC. Still, the filing comes as artificial intelligence legislation becomes a growing issue in Washington ahead of the U.S. midterm elections, and it underscores how AI developers hope to shape policy going into 2027. In February, a report by CNBC said that Anthropic gave $20 million in donations in 2026 to Public First Action, a group supporting efforts to develop AI safeguards.
Anthropic did not immediately respond to a request for comment by Decrypt.