
Written by: Lexi Phillips

On March 5, 2026, the Trump Administration designated Anthropic, the AI company behind the widely used Claude family of AI assistants, as a national security “supply chain risk,” barring it from federal contracting. Four days later, on March 9, Anthropic filed suit in federal court in California, arguing that the designation was unlawful and violated its First and Fifth Amendment rights. The case is poised to become a landmark test of executive authority over domestic AI companies.
Background
Until recently, Anthropic was a key partner in the federal AI landscape. In July 2025, the Department of Defense (DoD) awarded the company a $200 million contract through its Chief Digital and Artificial Intelligence Office, making Anthropic’s Claude models the first frontier AI systems cleared for classified military use. Claude was subsequently deployed by U.S. military and intelligence personnel for analytical and operational support, including in “Operation Absolute Resolve” earlier this year to capture Nicolás Maduro, Venezuela’s former president.
The relationship soured in the fall of 2025 during negotiations over the Pentagon’s GenAI.mil platform, a government-certified AI hub designed to provide frontier AI tools to military personnel, civilians, and contractors across the DoD. During those talks, the DoD asked Anthropic to allow the agency to use the technology for “any lawful use,” rather than under the company’s standard terms. Anthropic largely agreed but drew firm lines: Claude would not be used for autonomous lethal weapons or mass domestic surveillance. Those concerns intensified after Operation Absolute Resolve, when U.S. officials pushed to expand Claude’s military uses into precisely those areas. Anthropic CEO Dario Amodei, who has long called such uses “illegitimate” and “prone to abuse,” reportedly told Defense Secretary Pete Hegseth that the company would not comply, while Hegseth insisted on access for all lawful purposes. The standoff proved irreconcilable.
The Designation
In March 2026, following the disagreement, the DoD formally designated Anthropic as a “supply chain risk” under 10 U.S.C. § 3252, effectively blacklisting it from government work. The statute allows the Defense Secretary to exclude companies from certain contracts to prevent adversaries from “sabotag[ing], maliciously introduc[ing] unwanted function,” or otherwise “subvert[ing]” military information systems. Although rarely invoked, the provision has historically been applied only to foreign companies with alleged ties to hostile governments. Applying it to a U.S.-incorporated company, and one that had until recently been a trusted defense partner, is unprecedented. The Administration also directed all federal agencies to cease using Anthropic systems and warned that continued use could trigger contractual penalties.
The effects of the designation are significant. According to Anthropic, it stands to lose billions in 2026 revenue and faces substantial reputational harm, as federal agencies and contractors may avoid its tools outside the defense context.
The Lawsuit
Anthropic’s complaint raises three central claims.
First, the company argues that the designation violates the First Amendment. Anthropic contends that the DoD did not blacklist it because of a genuine security risk, but because of its expressed views on the ethical limits of AI in warfare. In posts cited by the complaint, President Trump referred to Anthropic as a “RADICAL LEFT WOKE COMPANY,” language the company says reveals that the decision was driven by hostility toward its viewpoint rather than by legitimate national security concerns.
Second, Anthropic argues that the designation violated its Fifth Amendment right to due process. The DoD imposed severe economic consequences, effectively barring the company from participation in government contracts, without providing notice, factual findings, or any meaningful opportunity to contest the decision. The complaint describes the action as imposing “draconian punishments” absent any procedural safeguards.
Third, Anthropic alleges that the designation violates the Administrative Procedure Act, which authorizes courts to set aside agency actions that are “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” The company points to two inconsistencies. First, Section 3252 was designed to guard against infiltration or sabotage by foreign adversaries, yet Anthropic is a U.S. company with no alleged ties to any hostile government. Second, even as the Pentagon labeled Claude a national security threat, it continued deploying Anthropic systems in active military operations, including strikes on Iran as recently as last month. According to the complaint, Secretary Hegseth himself praised Claude as “exquisite” technology the DoD would “love” to work with.
Judicial Outlook
The government’s defense is expected to rely on judicial deference to the executive branch on matters of national security. Courts have repeatedly upheld broad executive discretion in defense procurement, and the Pentagon may argue that Anthropic’s usage restrictions amount to a vendor inserting itself into the military chain of command, a concern courts have historically taken seriously. Officials have framed the dispute not as retaliation for speech, but as a question of operational control: whether a private company can contractually limit how the military deploys technology in active combat.
National security law experts have suggested that Anthropic may have a strong case. Section 3252 contemplates exclusion as a last-resort measure to guard against genuine risks of sabotage or subversion, not as a mechanism for pressuring U.S. companies to abandon internal policy commitments. The government’s simultaneous reliance on, and campaign against, Anthropic’s technology creates a contradiction that courts may find difficult to reconcile.
The outcome will likely depend on how broadly courts are willing to read executive authority when national security is invoked. At its core, the lawsuit asks how far the executive branch can go in compelling or penalizing AI developers whose safety policies conflict with military objectives, a question with implications that extend well beyond Anthropic.
Sources:
10 U.S.C.A. § 3252 (West 2024)
5 U.S.C.A. § 706 (West 2013)
Amrith Ramkumar, Keach Hagey & Vera Bergengruen, Pentagon used Anthropic’s Claude in Maduro Venezuela Raid, Wall St. J. (Feb. 15, 2026, 4:32 PM).
Anthropic and the Department of Defense to advance responsible AI in defense operations, Anthropic (July 14, 2025).
Ashley Capoot & Jonathan Vanian, Anthropic was the Pentagon’s choice for AI. Now it’s banned and experts are worried, CNBC (Mar. 9, 2026, 6:01 PM).
Bobby Allyn, Hegseth threatens to blacklist Anthropic over ‘woke AI’ concerns, NPR (Feb. 24, 2026, 8:24 PM).
Complaint, Anthropic PBC v. U.S. Dep’t of War, No. 3:26-cv-01996 (N.D. Cal. Mar. 9, 2026).
Donald J. Trump (@realDonaldTrump), Truth Social (Feb. 27, 2026, 3:47 PM).
How the Anthropic-Pentagon dispute over AI safeguards escalated, Reuters (Mar. 11, 2026, 4:35 PM).
Jack Queen, Anthropic has a strong case against Pentagon blacklisting, legal experts say, Reuters (Mar. 11, 2026, 6:04 AM).
Jack Queen & Deepa Seetharaman, Anthropic sues to block Pentagon blacklisting over AI use restrictions, Reuters (Mar. 9, 2026, 11:04 AM).
Matt O’Brien & Konstantin Toropin, Trump orders federal agencies to stop using Anthropic tech over AI safety disputes, PBS (Feb. 27, 2026, 7:10 PM).
Matthew Olay, Trump Announces Military Capture of Maduro, U.S. Dep’t of War: Pentagon News (Jan. 3, 2026).
Shannon Bond & Geoff Brumfiel, OpenAI announces Pentagon deal after Trump bans Anthropic, NPR (Feb. 28, 2026, 12:03 AM).
The War Department Unleashes AI on New GenAI.mil Platform, U.S. Dep’t of War (Dec. 9, 2025).