On March 13, 2026, Abolitionist Law Center, Access Now, the Center for Constitutional Rights, and Tech Justice Law filed an amicus brief in Anthropic v. DOW highlighting the U.S. Department of War's ("DOW," or the Defense Department) use of Anthropic's Artificial Intelligence ("AI") product "Claude" in committing war crimes in "Operation Epic Fury," the U.S. military campaign in Iran initiated February 28, 2026.
While Anthropic v. DOW seeks redress for the Trump administration's retaliation against Anthropic over the company's "red line" against using Claude for fully autonomous lethal warfare and domestic mass surveillance, our brief asserts that this claim overlooks how even semi-autonomous AI, deployed in armed conflict to further DOW Secretary Pete Hegseth's "speed wins" ethos, forecloses any meaningful assessment of civilian harm. AI allows the military to analyze vast amounts of surveillance and intelligence data and near-instantaneously recommend targets to strike. This capability compresses the "kill chain" – the process of identifying someone as a combatant and therefore a legitimate target for attack, tracking their movements, and ultimately killing them – from weeks or days to mere seconds. Even with human beings making the final decision to kill, when AI-enabled bombing proceeds "quicker than the speed of thought," those decisions become little more than rubber stamps on an AI tool that Anthropic itself has admitted should not determine who lives and who dies. The result has been devastating loss of civilian life in the AI-enabled assaults on Gaza and Iran; in Iran, where DOW is using Anthropic's Claude for its bombing campaign, over 1,200 civilians are already estimated to have been killed.
We argue that any decision in the case should underscore the fundamental humanitarian and human rights law principles that constrain both Anthropic and DOW from using AI technology to perpetrate mass casualties.