2/13/2026 at 5:30:07 PM
I wonder to what extent 4/4o is the culprit, vs it simply being the default model when many of these people were forming their "relationships."
by hamdingers
2/13/2026 at 5:41:18 PM
4o had some notable problems with sycophancy: it was very, very positive about the user and went along with almost anything the user said. OpenAI even talked about it [0], and the new responses to people trying to continue their former 'relationship' do tend towards being 'harsh' [1], especially if you were a person actually thinking of the bot as a kind of person.
[0] https://openai.com/index/sycophancy-in-gpt-4o/
[1] https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...
by rtkwe
2/13/2026 at 5:44:31 PM
It really does give a lot of signal[1] to people in the dating scene: validate and enthusiastically respond to potential romantic partners and the world is your oyster.
1. possibly/probably not in a good or healthy way? idk
by kelseyfrog
2/13/2026 at 6:00:38 PM
From the viewpoint of self psychology, people are limited in their ability to seduce because they have a self. You can't maintain perfect mirroring because you get tired, their turn-on is your squick, etc. In the early stage of peak ensorcellment (limerence) people don't see the "small signals", they miss the microexpressions, sarcastic leaks, etc. -- they see what they want to see. But eventually that wears out.
It can be puzzling that people fall for "romance scams" with people whose voice they haven't even heard, but it's actually a safer space for that kind of seducer to operate, because the lo-fi channel avoids all sorts of information leaks.
by PaulHoule
2/13/2026 at 7:02:41 PM
Enthusiastically matching the energy of an anxiously attached partner is a rite of passage many would rather not have walked.
by fullmoon
2/13/2026 at 9:04:54 PM
That's a pretty fair point as to what might explain why AI relationships are so appealing to some people.
It'd be a fun observational study to survey folks in AI relationships and see if anxious attachment is over-represented.
by kelseyfrog
2/13/2026 at 5:40:32 PM
Anecdotally, 4o's sycophancy was higher than any other model I've used. It was aggressively "chat-tuned" to say what it thought the user wanted to hear. The latest crop of frontier models from OpenAI and others seems to have significantly improved on this front — does anybody know of a sycophancy benchmark attempting to quantify this?
by gordonhart
2/13/2026 at 5:42:43 PM
If I worked at OpenAI, I would dial up the sycophancy to lock my users in right before raising subscription prices.
by co_king_3
2/13/2026 at 5:46:17 PM
That's... a strategy. It's only a matter of time before an AI companion company succeeds with this by finetuning one of the open-source offerings. Cynically, I'm sure there are at least a few VC-backed startups already trying this.
by gordonhart
2/13/2026 at 5:51:52 PM
Cynically, I think Anthropic is on the bleeding edge of this sort of fine-tuned manipulation.
Also, if I worked for one of these firms, I would ensure that executives and people with elevated status receive higher-quality/more expensive inference than the peons. Impress the bosses to keep the big contracts rolling in, then cheap out on the day-to-day.
by co_king_3
2/13/2026 at 5:40:01 PM
It's not that complicated. 4o was RLHF'd to be sycophantic as hell, which was fine until someone had a psychotic episode fueled by it, and so they changed it with the next model.
by danielbln
2/13/2026 at 5:49:19 PM
Never used 4o in an unhealthy way, but the audio was so much fun (especially for cooking help). I've essentially quit using AI audio since. Nothing compares.
by TIPSIO
2/13/2026 at 5:41:34 PM
I think that's part of it, but then the user perceives "personality changes" when the model changes, due to differences between the models. Now they have lost their relationship because of the model change.
by riddlemethat