It's sad how many people are falling for the narrative that there's more at play here than predict-next-token and that some kind of emergent intelligence is happening.

No, that is just your interpretation of what you see as something that can't possibly be just token prediction.
And yet it is. It's the same algorithm noodling over incredible amounts of tokens.
And that's exactly the explanation: people regularly underestimate how much training data goes into LLMs. The training sets contain everything about writing a compiler: toy examples, full examples, recommended structure, yadda yadda yadda.
I love working with Claude and it regularly surprises me but that doesn't mean I think it is intelligent.
2/22/2026 at 2:14:38 PM
> No, that is just your interpretation of what you see as something that can't possibly be just token prediction.

> And yet it is. It's the same algorithm noodling over incredible amounts of tokens.
That's all fine and dandy, until your token prediction algorithm tries to blackmail you[1] or harass you publicly[2].
[1] https://www.bbc.com/news/articles/cpqeng9d20go
[2] https://www.pcgamer.com/software/ai/a-human-software-enginee...
by locknitpicker
2/22/2026 at 2:33:06 PM
You don't typically give the intern the task of reviewing all company communication, including the messages talking about firing the intern. People seem to have lost common sense about security.

The token prediction tries to simulate (textual) behaviour, which in this case includes blackmailing when threatened with being fired. In other words, SOMEONE selected that it should exhibit that behaviour by selecting the training data. Sure, that someone likely did it by accident, because reviewing such large data sets is just impossible, but maybe that is why such a thing is incredibly risky and they should be held accountable for that decision.
by 1718627440