Anthropic, the Pentagon, and the Question Nobody Wants to Answer

By Dr David Bell, Specialist Anaesthetist (Retired), Software Engineer, and Founder of Align AI Fitness, NSW, Australia


Three weeks ago, the company that builds the AI I use every day was blacklisted by the United States government. Not for fraud. Not for espionage. For refusing to remove two safety restrictions from its military AI contract.

Anthropic, the maker of Claude, told the Pentagon it would not allow its AI to operate autonomous weapons or conduct mass surveillance of American citizens. The Pentagon told Anthropic to drop those restrictions or lose all federal contracts. Anthropic said no. Within 24 hours, Defence Secretary Pete Hegseth designated it a "supply chain risk to national security" (the first American company to receive that label), Trump posted on Truth Social ordering every federal agency to stop using Anthropic products immediately, and OpenAI announced a replacement deal.

I have been watching this unfold from Sydney with something between admiration and exasperation. Admiration because Anthropic's position is genuinely principled. Exasperation because it was also genuinely naive.


What Claude Was Actually Doing

Before we get to the ethics, it helps to understand what we are actually arguing about. Let me walk you through it.

Since late 2024, Claude has been the language engine inside Palantir's Maven system, the US military's primary AI platform for intelligence analysis and operational planning. Consider a hypothetical military intelligence analyst sitting at a terminal in Central Command. In front of her is a chatbot interface, not unlike the one I use to write code. She types a query. Within seconds, Maven pulls classified feeds from satellites, drone reconnaissance, archived intelligence reports, and fuses them into a single operational picture.

She asks for strike options on a set of coordinates. Claude generates three distinct action plans, each with troop routes, electronic jamming positions, aerial asset coordination, and estimated casualties. It attaches GPS coordinates for each target. It recommends specific weapons. And then it generates what the military calls an "automated legal justification" for each strike, a prewritten argument for why the attack complies with the laws of armed conflict.

That last part is worth sitting with. The AI writes the legal case for killing people. A human still approves it, for now, but the justification itself is machine-generated.

The scale of what this replaces is staggering. Workflows that previously required around 2,000 intelligence officers can reportedly be handled by about 20. The system does not sleep. It does not get tired. It does not miss the pattern in the fourth hour of a twelve-hour shift.

In January this year, Claude was used via Maven in the operation to capture Venezuelan president Nicolás Maduro. According to one account, a senior Anthropic executive contacted Palantir to ask whether Claude had been involved (other reporting suggests Palantir raised Claude's role first during a routine call). Either way, Palantir reported the exchange to the Pentagon. That conversation, more than anything else, is what started the chain of events that followed.

Then came Operation Epic Fury. Beginning 28 February, the US and Israel launched more than 5,000 strikes on Iranian military infrastructure, more than a thousand in the first 24 hours alone. That is more than double the air power deployed in the opening phase of the 2003 Iraq invasion. Maven, with Claude still running inside it (the six-month phase-out had not expired), helped generate and prioritise those targets.

On that first day, between 10:23 and 10:45 a.m. local time, a Tomahawk cruise missile hit the Shajareh Tayyebeh girls' elementary school in Minab, southern Iran. Classes were underway. Saturday is a working day in Iran. The roof collapsed on the students. At least 110 children, aged seven to twelve, died, along with 26 teachers and four parents. It was the deadliest single strike for civilian casualties in the entire campaign.

The Washington Post reported the school was on a US target list, likely misidentified as a military site due to outdated intelligence data. It sits adjacent to a naval base. The Pentagon has refused to say whether AI was involved in selecting it as a target. Semafor reported that "humans, not AI, are to blame," pointing to analysts who built the target lists using outdated satellite imagery. Anthropic said Claude "does not directly offer targeting recommendations" and is a "decision support system."

That distinction sounds reasonable until you think about what it actually means. The AI did not pick the school. But the AI processed the flawed data, prioritised it, accelerated it, and compressed what used to take days of human review into minutes. The error was human. The speed that made it uncatchable was not.

A group of 121 House Democrats has since sent a formal letter to Hegseth demanding answers on AI's role in target selection. The deadline for his response is tomorrow. Amnesty International has called for accountability. Human Rights Watch wants the strike investigated as a war crime. Iran is converting the school into a museum.

This is not theoretical. This is not a policy paper. Claude is in the kill chain, right now, today. And 110 children are dead.


Anthropic's Two Red Lines

Anthropic's position has been consistent throughout: no fully autonomous weapons (meaning no lethal targeting without a human authorising the strike), and no mass domestic surveillance of US citizens.

Dario Amodei, Anthropic's CEO, framed this as a reliability argument rather than a moral one. AI, he argued, is not yet reliable enough to operate weapons systems autonomously. And no laws or regulations yet exist to govern how AI could be used for mass surveillance, so there is no framework to ensure it would be done responsibly.

"We cannot in good conscience accede to their request," Amodei said on 26 February.

The Pentagon's response, issued through its CTO, was that it is "not democratic" for a private company to limit how the military uses its tools. Hegseth's January memo demanded that all Department of War AI contracts include "any lawful use" language within 180 days, which would strip both of Anthropic's restrictions.

Anthropic's position is admirable. It was also always going to end this way.


The Problem with Principled Stands

Here is the thing that bothers me.

Anthropic had a seat at the table. It had Claude inside the most important military AI system on the planet. It had influence over how that system was used. Its two red lines (no autonomous weapons, no mass surveillance) were not unreasonable. They were the minimum ethical floor for any AI deployment in a military context.

And now it has none of that. It has a lawsuit (filed 9 March in a California court), nearly 150 retired judges filing amicus briefs in its support, and a product that the world's largest military has been ordered to stop using.

Could Anthropic have achieved more by staying inside the tent? I think so: not by dropping its restrictions, but by finding a way to maintain them without triggering the kind of confrontation that was always going to end with this administration reaching for the biggest stick it could find. The restrictions themselves were not the problem. The phone call to Palantir was.

Anthropic should not have looked the other way. But the moment you make a powerful institution feel like you are trying to audit it, it will find a way to remove you. And once you are removed, you have no influence at all. Claude is still running inside Maven right now because the phase-out has not expired. By the time it does, OpenAI will have filled the gap. The red lines will be gone. And Anthropic's principled stand will have achieved exactly what principled stands usually achieve: a clear conscience and no practical effect.


OpenAI Steps In (With Weasel Words)

Hours after Anthropic's blacklisting, OpenAI announced its own Pentagon deal. The blog post was titled "Our agreement with the Department of War." The timing was, to put it charitably, convenient.

OpenAI published its own set of red lines. On paper, they look similar to Anthropic's. No mass domestic surveillance. No autonomous weapons. No high-stakes automated decisions.

The detail tells a different story.

The surveillance clause states the AI "shall not be intentionally used for domestic surveillance" and requires compliance with Executive Order 12333. The word "intentionally" is doing a lot of heavy lifting there. The autonomous weapons clause states the AI "will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control." Read that again. It only restricts what is already restricted by existing law. It adds nothing.

The Electronic Frontier Foundation called these protections "weasel words." MIT Technology Review ran the headline: "OpenAI's 'compromise' with the Pentagon is what Anthropic feared." The Intercept went with: "OpenAI on Surveillance and Autonomous Killings: You're Going to Have to Trust Us."

CNN reported that OpenAI staff were "fuming" internally and "really respect" Anthropic's position. Amodei's reaction, in a leaked internal memo, was less diplomatic: he called OpenAI's deal "safety theatre" and its public messaging "straight up lies." He later apologised for the tone. Sam Altman, to his credit, admitted the deal "looked opportunistic and sloppy" and amended the contract.

Neither company deserves to be cast as hero or villain here. Both are trying to appear righteous, and both have an underlying commercial motivation. Anthropic is a company. It has investors. It has an obligation to those investors, same as OpenAI. The difference is that Anthropic's commercial interests happened to align with a principled position, and OpenAI's did not. If the commercial incentives were reversed, the positions would be too. That is not cynicism. That is how companies work.


The Temper Tantrum

Designating Anthropic a "supply chain risk to national security" for refusing to remove two safety restrictions is not a measured policy decision. It is a temper tantrum.

This is consistent with how this administration operates. The Associated Press had its White House access restricted for refusing to use "Gulf of America" instead of "Gulf of Mexico." The administration threatened "every tool at its disposal" against the EU for regulating American tech firms. Disney and Meta paid multimillion-dollar settlements after Trump lawsuits. CBS self-censored while seeking merger approval. A company says no, so the government brands it a national security threat. This is not statecraft. This is what it looks like when the most powerful country on earth responds to disagreement the way a teenager responds to being told no.

And then there is the rename. Trump signed an executive order in September 2025 turning the Department of Defense into the Department of War (only Congress can formally rename it, so it functions as a "secondary title," which is to say, branding). Hegseth said the US "hasn't won a war" since World War II, and that the country will "go on offense, not just on defense" and "raise up warriors, not just defenders." The website is now war.gov. The rebranding is estimated to cost between US$10 million and US$125 million.

The rename is worth pausing on. Language shapes thinking. A department that calls itself the Department of War will approach decisions differently from one that calls itself the Department of Defense. When you frame AI ethics questions inside a department whose explicit posture is offensive rather than defensive, the answers you get will be different. Anthropic's red lines might have been tolerable to a Department of Defense. They were never going to survive a Department of War.


Why This Is Not Just an American Problem

It would be comfortable to watch all of this from Australia and treat it as someone else's argument. We cannot.

Australia is a founding member of AUKUS. Under Pillar II, AI is one of the core technology areas alongside quantum computing, cyber capabilities, and hypersonics. AUKUS partners have already conducted joint AI trials, including live retraining of AI models in flight and interchanging AI models on uncrewed aerial vehicles across nations. AI has been incorporated into anti-submarine sonobuoy processing on P-8A aircraft used by all three AUKUS nations.

We are building the Ghost Bat, an autonomous combat aircraft program with billions already committed and acquisition running through to 2040. We are a core member of Five Eyes. We are embedded in the same intelligence infrastructure that runs Palantir and Maven.

And our official position on lethal autonomous weapons? We do not support a ban. Australia's stated policy is "responsible use of AI in the military domain" with "human involvement throughout the lifecycle." Human Rights Watch has criticised us for failing to act on what it calls "killer robots." Our policy guidance on autonomous systems dates to 2020 and needs updating.

So when Anthropic draws a line on autonomous weapons and the Pentagon says that line is unacceptable, Australia is not a spectator. We are a customer of the same systems, a partner in the same alliance, and a country that has deliberately avoided taking a firm position on the exact question that just blew up the biggest AI contract in US history. How long do we plan to avoid that question? Because the Americans just answered it for us, and we might not like their answer.


The Uncomfortable History

The other thing that nags at me is the history of military technology.

GPS started as a US Department of Defense project in the 1970s for precision navigation and targeting. Reagan authorised civilian access in 1983. Now it runs Google Maps, ride-sharing, fitness trackers, and precision agriculture. The internet began as ARPANET in 1969; its distributed architecture drew on ideas originally designed to survive a nuclear attack. The EpiPen was adapted from a military autoinjector designed to deliver nerve agent antidote. Aviation was adopted by the US Army shortly after the Wright brothers' first flight, and military contracts drove its development for decades.

The EpiPen example hits differently for me as a doctor. The autoinjector that saves lives in allergic emergencies exists because the military needed to inject atropine into soldiers exposed to nerve gas. I have used that technology. I have seen it save people. The lineage is uncomfortable, but it is real.

Almost every transformative civilian technology of the last century has military funding somewhere in its family tree. Refusing to engage with military applications of AI does not stop those applications from happening. It just means someone else builds them, with fewer guardrails, and the civilian benefits that might have flowed from a more careful approach never materialise.

It is genuinely naive to boycott a military for using the latest technology when the history of technology is, in large part, the history of military investment. That does not mean we should hand over the keys without conditions. It means the conditions matter more than the refusal.


Where Does This Leave Us?

The case is ongoing. Anthropic's lawsuit is in a California court. Nearly 150 retired judges and Microsoft have filed in support. The Trump administration defended the blacklisting in court yesterday. Claude is still running inside Maven because the phase-out window has not closed. OpenAI is being onboarded as the replacement.

Anthropic will most likely lose. Not because it is wrong, but because governments do not respond well to private companies telling them what they can and cannot do with tools they have already paid for. And this particular government has shown, repeatedly, that it views any form of resistance as an invitation to escalate.

The question after that will not be whether AI is used in warfare. That ship sailed the moment Palantir put Claude on a classified network. The question will be whether anyone is in a position to insist on the kind of restrictions that Anthropic tried to maintain.

Right now, the answer is no.

Consider the analyst at her terminal again. She types a query. Maven pulls the feeds, generates the targets, writes the legal justification, recommends the weapon. A human reviews it, clicks approve. The whole process, from question to detonation, compressed from days into minutes. And if one of those targets was built on outdated data, if it was actually a school, the system moved too fast for anyone to catch it.

Anthropic built this. OpenAI will build the next version. And the restrictions that might have shaped how it was used are gone, because the one company that tried to impose them got thrown out of the building for asking whether its own product was being used to kill people.

The technology is extraordinary. The governance is not even close. And in Minab, 110 families know exactly what that gap looks like.


This is the second in a series on the AI acceleration and what it means for the industry, the economy, and the people caught in the middle.
