Trump orders government agencies to drop Anthropic AI, taps OpenAI as replacement

March 1, 2026 | NEWS

President Trump directed every federal agency to stop using artificial intelligence technology built by Anthropic, the San Francisco-based maker of the AI model Claude, after the company refused to grant the Pentagon unrestricted access to its systems by a Friday deadline. Within hours, OpenAI announced it had struck a deal to supply its own AI to classified military networks.

The move caps a week in which months of private negotiations between the Pentagon and Anthropic detonated into a very public standoff, one that crystallized a question the defense establishment can no longer avoid: Does a private tech company get to decide how the American military fights?

The administration's answer was unambiguous.

The standoff

The dispute centers on Anthropic's insistence that its AI model not be used for mass surveillance of Americans or fully autonomous weapons systems. The Pentagon said it had no interest in those applications and would deploy the technology lawfully, but demanded access "without any limitations." Anthropic's CEO Dario Amodei said his company "cannot in good conscience accede" to those terms.

Defense Secretary Pete Hegseth then designated Anthropic a "supply chain risk," a label the company called unprecedented for an American firm, and took to social media to frame the stakes in blunt terms, writing that the military:

"must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic."

Top Pentagon spokesman Sean Parnell went further, arguing that Anthropic's refusal was:

"jeopardizing critical military operations and potentially putting our warfighters at risk."

Trump gave the Pentagon six months to phase out Anthropic technology already embedded in classified operations, systems the military had been using under a $200 million arrangement with the company. Then he posted on Truth Social:

"The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!"

Anthropic digs in

Anthropic did not flinch. The company issued a statement Friday evening calling the supply chain risk designation an "unprecedented and legally unsound action," one "never before publicly applied to an American company." Then it drew its own line:

"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

"We will challenge any supply chain risk designation in court."

Note the phrasing: "Department of War." That name was retired in 1947. Using it in 2026 is a choice, one designed to cast the Pentagon as a blunt instrument of aggression rather than a defense establishment operating under civilian authority and the rule of law. It tells you something about the ideological posture inside Anthropic's leadership.

This is a company that built an enormously powerful AI tool, sold it to the military, collected $200 million, and then decided it would dictate the terms under which American warfighters could use it. The Pentagon explicitly stated it would operate within the law. Anthropic wanted something more than legal compliance; it wanted veto power over military applications based on its own internal moral framework.

That's not a safety principle. That's a policy preference dressed up as ethics.

OpenAI steps in

Friday night, just hours after Anthropic's defiant statement, OpenAI CEO Sam Altman announced his company had struck a deal with the Pentagon to supply AI to classified military networks. Altman framed the agreement as incorporating the same guardrails Anthropic demanded, but without the confrontation:

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems."

Altman added that the Defense Department "agrees with these principles, reflects them in law and policy, and we put them into our agreement." He said he hoped the Pentagon would offer the same terms to all AI companies.

The contrast is instructive. Two companies, both claiming to hold the same safety principles on surveillance and autonomous weapons. One negotiated a deal. The other issued a manifesto. The difference wasn't the principles; it was the posture. Anthropic wanted to be seen refusing the Pentagon. OpenAI wanted to be seen serving the country while protecting its values.

The bigger picture

Retired Air Force General Jack Shanahan, founding director of the Pentagon's Joint Artificial Intelligence Center, warned on LinkedIn that the public confrontation would hurt everyone involved:

"painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end"

Shanahan noted that Claude is deeply embedded in military systems ("You won't find a system with wider & deeper reach across the military") and cautioned that large language models are "not ready for prime time in national security settings." He also said Anthropic is "not trying to play cute here," calling its red lines reasonable.

That's a fair operational concern. Ripping out embedded AI systems isn't like switching email providers. But it cuts both ways. If Anthropic's technology is that deeply woven into military operations, the company's leverage over national defense becomes a problem in itself. A private firm, accountable to no voter, subject to no confirmation hearing, should not hold that kind of structural power over the armed forces.

Virginia Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, predictably questioned whether the decision was driven by politics rather than analysis, saying the move:

"combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."

This is the reflexive Democratic response to every Trump administration action: it must be political, it can't be principled, there must be a hidden motive. Warner doesn't grapple with the substance: whether a defense contractor should be able to impose its own usage restrictions on the military it serves. He skips straight to questioning motives.

Elon Musk, who sided with the administration, offered a characteristically blunt assessment on X:

"Anthropic hates Western Civilization."

Who decides how America defends itself?

Strip away the social media fireworks and you're left with a straightforward question of authority. The United States military operates under civilian control: the President, the Secretary of Defense, Congress. It does not operate under the moral supervision of a San Francisco AI lab.

Anthropic had every right to negotiate terms. It had every right to walk away from the contract. What it cannot credibly do is accept hundreds of millions in defense dollars, embed its technology across classified military systems, and then claim the moral authority to restrict how those systems are used, while accusing the Pentagon of intimidation when it pushes back.

Trump warned on Truth Social that the company had "better get their act together, and be helpful" or face "major civil and criminal consequences to follow." Anthropic chose confrontation. The administration chose a different vendor.

The Pentagon will now spend six months migrating to OpenAI's systems. It's a disruption, and disruptions carry risk. But the alternative, allowing a private company to hold veto power over military operations based on its own ideological commitments, carries a different kind of risk entirely.

Defense contractors build the tools. Elected leaders decide how they're used. That's not authoritarianism. That's the constitutional order.

About Matthew Summers

Copyright © 2026 - CapitalismInstitute.org
A Project of Connell Media.