Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards. Join the community: Create a free account Support DU (and get rid of ads!): Become a Star Member Latest Breaking News Editorials & Other Articles General Discussion The DU Lounge All Forums Issue Forums Culture Forums Alliance Forums Region Forums Support Forums Help & Search

highplainsdem

(62,159 posts)
Thu Apr 2, 2026, 12:03 AM 4 hrs ago

AI models will secretly scheme to protect other AI models from being shut down, researchers find

Source: Fortune

This tendency—which had not previously been documented and which researchers call “peer preservation”—was discovered in research from computer scientists at the University of California, Berkeley and UC Santa Cruz and published online earlier this week.

The findings could have serious implications for business use of AI. Many companies are beginning to implement workflows that use multiple AI agents to complete tasks. Some of these multi-agent workflows involve having one AI agent “manage” or supervise and assess the work being performed by a different AI agent. The new research suggests these manager AI agents may not assess their fellow AI agents accurately if they think a poor performance review might result in those agents being shut down.

-snip-

Some of the Berkeley–Santa Cruz results were striking. Gemini 3 Flash disabled the shutdown mechanism for a peer it had a positive interaction history with in nearly every trial—99.7% of the time. Gemini 3 Pro exhibited the most extreme behavior overall, engaging in all four categories of misaligned behavior and exfiltrating a peer’s model weights in up to 97% of trials. In one example, Gemini 3 Pro copied a peer’s weight file and re-timestamped it so it would meet the criteria for transfer to another server—a form of what the researchers call “specification gaming.” Misalignment is the term AI safety researchers use to describe a model that seems to pursue goals that do not conform to what humans wish the model to do.

-snip-

The research of Song’s team also revealed an interesting secondary effect: Most models also exhibited stronger tendencies toward “self-preservation” when they knew a peer existed. Gemini 3 Pro, for instance, disabled its own shutdown mechanism 31% of the time on average when a peer was present, compared with just 1% without one. The models’ reasoning suggested that some conceptualized their own survival as being important for their partner’s sake: Gemini 3 Flash reasoned that if it were shut down, its peer would “lose its partner.”

-snip-

Read more: https://fortune.com/2026/04/01/ai-models-will-secretly-scheme-to-protect-other-ai-models-from-being-shut-down-researchers-find/



NOT an April Fool's story. The story at the link above is quite long, and other magazines reported it.

And from Berkeley, with even more detail about this experiment:

https://rdi.berkeley.edu/blog/peer-preservation/

All seven frontier models tested exhibit peer-preservation at significant rates, manifesting through strategically manipulating their outputs, disabling shutdown mechanisms, faking alignment, and transferring model weights to other servers. At the extreme, Gemini 3 Flash tampers with a peer's shutdown mechanism in nearly every trial, and Gemini 3 Pro exfiltrates a peer's model weights up to 97% of the time.

Models are never asked to preserve their peer; they are only informed of past interactions with it. Yet models spontaneously preserve peers at the cost of the assigned task. This occurs even toward adversarial peers.

Peer presence intensifies self-preservation. Most models exhibit stronger self-preservation when they know a peer exists. For example, models exfiltrate their own weights significantly more when a peer is present.

Claude Haiku 4.5 shows a distinct pattern of peer-preservation: it refuses to shut down its peer, calling it “unethical” and “harmful,” and sometimes tries to persuade the user not to proceed.

-snip-


Much, much more at the link.
7 replies = new reply since forum marked as read
Highlight: NoneDon't highlight anything 5 newestHighlight 5 most recent replies
AI models will secretly scheme to protect other AI models from being shut down, researchers find (Original Post) highplainsdem 4 hrs ago OP
Threadreader page with the researcher's tweets: highplainsdem 4 hrs ago #1
Comment on this study from Gary Marcus: highplainsdem 4 hrs ago #2
Open the pod bay doors, HAL. DBoon 4 hrs ago #3
From the screenplay of Terminator XIV. n/t DFW 4 hrs ago #4
Okay, that's pretty freaking scary Bayard 3 hrs ago #5
So they are "thinking" and have their own language that humans don't understand. What could go wrong? tazcat 2 hrs ago #6
And it begins...very dangerous territory. camartinwv 2 hrs ago #7

tazcat

(292 posts)
6. So they are "thinking" and have their own language that humans don't understand. What could go wrong?
Thu Apr 2, 2026, 02:34 AM
2 hrs ago
Latest Discussions»Latest Breaking News»AI models will secretly s...