Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute
Source: Axios
The Pentagon is considering severing its relationship with Anthropic over the AI firm's insistence on maintaining some limitations on how the military uses its models, a senior administration official told Axios.
Why it matters: The Pentagon is pushing four leading AI labs to let the military use their tools for "all lawful purposes," even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations.
Anthropic insists that two areas remain off limits: the mass surveillance of Americans and fully autonomous weaponry.
The big picture: The senior administration official argued there is considerable gray area around what would and wouldn't fall into those categories, and that it's unworkable for the Pentagon to have to negotiate individual use-cases with Anthropic or have Claude unexpectedly block certain applications.
-snip-
Read more: https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro
And if THAT doesn't tell you why the Trump regime is so cozy with AI companies - and how foolish it is to use genAI voluntarily (a key part of that surveillance will be every single thing you tell AI) - I don't know what will.
James48
(5,139 posts)
Maybe three, if we are lucky, before there is a major incident where automatic robots begin killing humans in large numbers.
It's no longer science fiction. It's about to become self-aware. Once you have robots building and programming robots, it's going to be over for humans.
Polybius
(21,656 posts)
Only then will they be truly self-aware.
Miguelito Loveless
(5,565 posts)
Also, "lawful" is very subjective.
Renew Deal
(84,787 posts)

groundloop
(13,666 posts)
Gee, what could possibly go wrong when AI is allowed to start firing missiles?
Renew Deal
(84,787 posts)
And pretty close to that for law enforcement.
