Human Rights & Tech Justice Organizations Urge Court to Consider Big Tech’s Impact on Civilians in War in Amicus Brief in Anthropic v. DOW

March 16, 2026

San Francisco, CA—Anthropic’s “red line” against the Trump Administration’s use of its artificial intelligence product “Claude” to enable fully autonomous lethal warfare overlooks the unlawful danger that the U.S. Department of War’s (“DOW”) use of Anthropic’s semi-autonomous AI poses to civilians, argues an amicus brief in support of neither party filed Friday. Anthropic v. DOW focuses narrowly on whether DOW can punish Anthropic for refusing to allow its AI to be used for lethal autonomous weapons in warfare and for mass surveillance of Americans. Four organizations filed an amicus brief highlighting how, even in deployments short of fully autonomous weapons, Anthropic’s Claude is already assisting DOW in committing war crimes and other violations in Iran. The brief argues that any decision in the case should underscore the fundamental principles of humanitarian and human rights law that constrain both Anthropic and DOW.

“The human rights impact of DOW’s AI use is far greater than Anthropic admits,” said Maddy Batt, Legal Fellow at Tech Justice Law. “Anthropic’s proposed limits permit both the mass civilian death we have seen in Gaza and Iran, and mass surveillance of non-citizens in the U.S. When it comes to AI safety—and international law—we cannot allow Anthropic’s meager restrictions to set the bar.”

Amici—Abolitionist Law Center, Access Now, the Center for Constitutional Rights, and Tech Justice Law—explain how AI is being used in armed conflict to further DOW Secretary Pete Hegseth’s “speed wins” ethos, which fails to meaningfully assess civilian harm. AI allows the military to analyze vast amounts of surveillance and intelligence data and then near-instantaneously recommend targets to strike. This capability compresses the “kill chain”—the process of identifying someone as a combatant and therefore a legitimate target for attack, tracking their movements, and ultimately killing them—from weeks or days to mere seconds. Even with human beings making the final decision to kill, when AI-enabled bombing proceeds “quicker than the speed of thought,” those decisions become little more than rubber stamps on the recommendations of an AI tool that Anthropic itself has admitted should not determine who lives and who dies. The result has been devastating loss of civilian life in the AI-enabled assaults on Gaza and Iran; in Iran, where DOW is using Anthropic’s Claude for its bombing campaign, over 1,200 civilians are already estimated to have been killed.

“This devastating and ruthless acceleration of the ‘kill chain’ we are witnessing unfold—at a speed and scale beyond human capabilities—is powered by AI, including Anthropic’s Claude. It exemplifies the grave threat to human life, and the breaches of human rights and international humanitarian law, posed by militarized AI, even without full autonomy,” said Sadaf Doost, Staff Attorney and Human Rights Program Manager at Abolitionist Law Center.

The brief explains that deploying AI in this way facilitates war crimes, if not crimes against humanity. Even without full autonomy, Amici argue, the near-immediate kill chains enabled on a mass scale by AI violate several cardinal principles of international humanitarian law: the principle of distinction, which requires militaries to distinguish between civilians and combatants in armed conflict; the principle of proportionality, which prohibits military operations that cause civilian deaths or harm disproportionate to the military objective; and the principle of necessity, which allows militaries to use only the means and methods necessary to achieve the legitimate purpose of an attack.

“What we are witnessing now in Iran, in Lebanon, and over the last two and a half years in Israel’s ongoing genocidal campaign against Palestinians in Gaza is a fundamental and devastating disregard for human life enabled by Big Tech,” said Katherine Gallagher, Senior Staff Attorney at the Center for Constitutional Rights. “We urge the court to rein in the lawlessness of both Big Tech and the Trump administration before even more lives are lost and communities destroyed.”

As the brief notes, DOW’s deployment of Claude for war crimes puts Anthropic, its corporate officers, and U.S. military and civilian officials at risk of domestic, foreign, and potentially international prosecution.

“Neither a tech company like Anthropic nor the Department of War gets to define for themselves the safe or lawful use of AI-assisted targeting in war. Human lives cannot be reduced to algorithmic recommendations on a battlefield, and artificial intelligence cannot be used to obscure responsibility for unlawful violence. Both the U.S. government and U.S. tech companies remain bound by the law—including the fundamental prohibitions on war crimes,” said Michael De Dora, U.S. Advocacy Manager at Access Now.

The brief was filed on March 13, 2026, in the Northern District of California. Anthropic v. DOW is case number 3:26-cv-01996.