Categories: Cyber Security News

US Military Reportedly Used Claude in Iran Strikes Despite Trump’s Ban

The U.S. Department of Defense deployed Anthropic’s Claude AI during Operation Epic Fury, a joint offensive with Israel against Iran on February 28, just hours after President Trump designated Anthropic as a national security “supply chain risk” and ordered all federal agencies to cease use of its AI systems.

On February 28, 2026, American and Israeli forces launched a coordinated strike campaign codenamed Operation Epic Fury/Roaring Lion against key Iranian government installations, including nuclear facilities and strategic military infrastructure.


According to reports from The Wall Street Journal, Axios, and Reuters, the U.S. Central Command leveraged Anthropic’s Claude AI model during the operation for intelligence assessments, target identification, and battlefield simulations.

The disclosure is significant given that the strikes were executed mere hours after the Trump administration formally declared Anthropic a supply-chain risk, a national security-level designation that was intended to immediately curtail the company’s access to defense contracts.

Defense officials reportedly acknowledged that a full technical withdrawal from Claude was operationally infeasible on such short notice, as Claude remains the only AI system currently embedded within certain classified U.S. government networks.

US Military Reportedly Used Claude

The clash between Anthropic and the Pentagon escalated after the January 2026 operation that led to the capture of Venezuelan President Nicolás Maduro, a mission in which Claude was also deployed, according to Axios. That revelation set off the showdown between the AI developer and U.S. defense leadership.

The core dispute centers on Anthropic’s acceptable use policy, which includes hard prohibitions against using Claude for:

  • Autonomous weapons systems
  • Mass surveillance of U.S. citizens

Pentagon officials pushed for unrestricted use of the model, arguing that operational realities could not be constrained by a contractor’s commercial ethics policies.

On Thursday, February 26, Anthropic CEO Dario Amodei publicly reiterated the company’s refusal to lift these restrictions, triggering escalating pressure from the Department of Defense.

On Friday, February 27, Secretary of Defense Peter Hegseth announced that Anthropic would be designated a supply-chain risk, but added that the company would be given up to six months to allow for a “seamless transition,” effectively acknowledging the Pentagon’s deep integration of Claude into classified infrastructure.

Roughly four and a half hours after the Pentagon’s announcement, OpenAI CEO Sam Altman posted on X that his company had reached an agreement with the Department of War to deploy its AI models within classified military networks.


Altman noted that the Pentagon demonstrated “deep respect for safety” during negotiations, agreeing to terms that bar the use of OpenAI models for domestic mass surveillance and autonomous weapons systems, restrictions nearly identical to those Anthropic had sought.

https://twitter.com/sama/status/2027578652477821175?ref_src=twsrc%5Etfw

Altman further announced that OpenAI would station dedicated safety engineers inside the Pentagon to monitor model behavior, and called on the Department of War to extend equivalent terms to all AI vendors operating in defense environments.

Anthropic has publicly vowed to challenge the “supply-chain risk” designation in court, arguing that its ethical safeguards are not discretionary restrictions but fundamental safety commitments required before deploying frontier AI in high-stakes military contexts.

The company’s position reflects a broader tension across the AI industry, in which developers seek to maintain usage guardrails while government clients demand unrestricted operational latitude.

Defense One reported that replacing Anthropic’s Claude within Pentagon infrastructure could take three to six months, given its deep integration into classified systems, a timeline that effectively guarantees continued use into mid-2026.

Operation Epic Fury may mark the first publicly confirmed instance where AI-assisted military targeting occurred in direct defiance of an active executive prohibition, setting a critical precedent for how AI ethics, defense procurement, and national security policy will intersect in an era of AI-enabled warfare.

