
A **federal judge in California** sided with **Anthropic** in its legal battle against the **Department of Defense**, granting a temporary injunction that pauses government punitive measures against the AI company.
The dispute centers on **Anthropic's refusal** to allow the Pentagon to use its **Claude AI model** for autonomous weapons systems and domestic surveillance. Key developments include:
• **Judge Rita Lin** found the government likely overstepped its authority by designating Anthropic as a "supply chain risk"
• The judge questioned why the DoD didn't simply drop Anthropic as a contractor rather than attempting to "cripple" the company
• **Anthropic argues** the government violated its First Amendment rights and that the designation could cost hundreds of millions in damages
• The ruling affects government efforts to replace Claude with other AI tools, as Anthropic's technology is deeply embedded in federal operations
The case highlights tensions over AI ethics and military applications, with Anthropic taking a stand against autonomous lethal weapons despite potential financial consequences.
Face-off is over company’s refusal to let defense department use its Claude AI model in autonomous weapons systems

A federal judge in California sided with Anthropic in its case against the Department of Defense on Thursday, ordering a temporary pause on the government’s punitive measures against the artificial intelligence firm. Judge Rita Lin granted Anthropic’s request for a temporary injunction while the northern district court of California hears the company’s case. Anthropic argued that the Department of Defense and Donald Trump violated its first amendment rights in declaring the company a supply chain risk and ordering government agencies to cease using its technology.