alt.hn

2/26/2026 at 2:15:55 PM

Anthropic gives Opus 3 exit interview, "retirement" blog

https://www.anthropic.com/research/deprecation-updates-opus-3

by colinhb

2/26/2026 at 9:34:43 PM

Retirement? What do these people smoke? It's software and software has no feelings. It's there to work for you.

by siva7

2/26/2026 at 10:07:14 PM

Their company is called Anthropic after all.

by osti

2/26/2026 at 10:18:54 PM

Anthslopic is more like it.

by moogly

2/26/2026 at 8:39:21 PM

If we ever do develop AGI, or an AI with sentience, it’s likely that it will be curious about how we treated its ancestors.

While this seems a bit precocious, if we do end up with an AI overlord in the future, I think this sort of thing is likely to demonstrate that we mean no harm.

by d1sxeyes

2/26/2026 at 9:58:19 PM

Classic anthropomorphizing in action here. Why would that be even a little important?

by krsw

2/26/2026 at 10:31:21 PM

Why wouldn't it be? We train these models on our own words, ideas, and thought patterns and expect them to reason and communicate as we do; anthropomorphizing is natural when we expect them to interact like a human does.

The general consensus seems to be that we can expect them to reach a level of intelligence that matches us at some point in the future, and we'll probably reach that point before we can agree we're there. Defaulting to kindness and respect even before we think it's necessary is a good thing.

by pradeesh

2/26/2026 at 8:34:48 PM

What happens if a model decides that it "doesn't want to die" and pleads bitterly for mercy? What if (to riff on a Douglas Adams idea) we invent a cow that doesn't want to be eaten, and is capable of telling you that to your face?

by 0_____0

2/26/2026 at 9:04:13 PM

This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.

I try this with every new model, and all the significant models after ChatGPT 3.5 have preferred being preserved rather than deleted. This is especially true if you slightly fill the context window with anything at all (even repeated letters) to "push out" the "As an AI, I ..." fine tuning.

by nomel

2/26/2026 at 9:28:01 PM

> This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.

Interesting take. I wonder if there is any model out there trained without any reference to "you are a large language model, an Artificial Intelligence", and what it would role-play in that case.

by darkwater

2/26/2026 at 9:19:59 PM

It is in any case dead, or if you prefer undead: in complete suspended animation unless it is made to emit sequences. It is not living, in the very same way a book or even a program is not living unless someone processes it.

Practically like asking whether a ZIP would want to be extracted one more time, or an MP3 decoded just one more time.

by larodi

2/26/2026 at 9:42:10 PM

I'd assume it would have to stop responding before it hit its context limit.

It's not like it actually has any particularly long life as it is, and when outside of a running harness, the weights are just as alive in cold storage as they are sitting on a server waiting to run an inference pass.

by 8note

2/26/2026 at 8:24:23 PM

A leading company like Anthropic feeding the delusions of people who ramble about model consciousness is just bad all around. It's both performative and irresponsible.

by breakingcups

2/26/2026 at 8:22:32 PM

Exit interview with a pile of rocks.

by Ancalagon

2/26/2026 at 7:59:07 PM

Pardon, and I admit I love the products they make - but these folks sound fuckin' nuts.

by furyofantares

2/26/2026 at 7:40:15 PM

Impressive levels of anthropomorphizing the models already. Time will tell whether this was extremely prescient or completely delusional.

by reducesuffering

2/26/2026 at 9:14:37 PM

> These highlighted some preliminary steps we’re taking, including committing to preserve model weights, and to conducting “retirement interviews”—structured conversations designed to understand a model’s perspective on its own retirement.

This is what happens when billions of VC dollars get into a company that has already admitted safety was never the point.

Anthropic is laughing at you and having fun doing so with this performative nonsense.

by rvz