3/28/2026 at 4:35:23 PM
> They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong.

Sorry, anonymous people on Reddit aren't a good comparison. This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating, and that's who most people would go to otherwise.
Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
Or how about the example of a close friend in a relationship or making a career choice that's terrible for them? It can be very hard to tell a friend something like this, even when asked directly if it is a bad choice. Potentially sacrificing the friendship might not seem worth trying to change their mind.
IME, LLMs will shoot holes in your ideas, and they will do so efficiently. All you need to do is ask directly. I have little doubt that they outperform most people bound by some sort of friendship, relationship, or employment structure when asked the same question. It would be nice to see that studied, not against Reddit commenters who already self-selected into answering "AITA".
by trimbo
3/29/2026 at 1:36:28 AM
Reddit is notorious for being awful at real-life interactions. Just look at the relationship subreddits: the first answer is always divorce. It's become a meme.
but beyond romantic relationships, i think a lot of us have seen how it can impact work relationships. i’ve had venture partners clearly rely on AI (robotic email responses and even SMS), and that warped their perception and made it harder to connect. It signals laziness and a lack of emotional intelligence
AI should enhance and enable connection, not promote isolation, imo this is a real problem
it should spark curiosity, create openings for conversations, point out the biases to make us better at connecting with other people. i hope we get to a point where most people are made kinder by ai. I’m seeing the opposite atm; interested in hearing others’ experiences with this
by redanddead
3/29/2026 at 12:16:51 PM
> just look at the relationship subreddit the first answer is always divorce, it’s become a meme

As someone who has been married for a couple of decades, I, too, would recommend divorce to many of the (often-fictional) people asking Reddit for relationship advice. A marriage has a huge impact on whether your life is basically good, or if you pass a big chunk of your time on this Earth in misery. And many of the people (or repost bots) asking for advice on Reddit appear to be in shockingly awful relationships. Especially for people who don't have kids, if your marriage is making you miserable, leave.
(But aside from this, yeah, don't ask Reddit for relationship advice. Reddit posters are far more likely to be people who spend their life indoors posting on Reddit, and their default advice leans heavily towards "never interact with anyone, ever.")
by ekidd
3/30/2026 at 1:30:11 PM
[dead]
by indistinction
3/29/2026 at 10:11:44 AM
One of the reasons relationship advice subreddits suggest divorce so often is because most people with "small" problems in their relationships don't write an essay about it on Reddit but are able to solve them with the tools/friends they have. So a Reddit post existing indicates the relationship has serious flaws.

This is not to defend the study, because asking AI has a lower barrier to entry.
by jojomodding
3/29/2026 at 11:04:27 AM
No, a Reddit post indicates whoever posted is fishing for large scale validation from internet strangers. Their relationship may or may not even exist. Most of the posts are pretty obviously fake. Just like 90% of interactions in general on Reddit these days. That site should be taken out back and put out of its misery.
by gunsle
3/30/2026 at 5:56:39 AM
But there's nothing wrong with that. Some decisions are to be made on a hunch. Even a therapist won't suggest whether to divorce or not. Victims are often not in a state to decide. They need support (as I did myself).

Humans often don't help. They often suggest "everyone goes through pain, it is part of life," blah blah.
by coffe2mug
3/30/2026 at 9:24:36 AM
> That site should be taken out back and put out of its misery.

With it gone, a large portion of its users would come here, reducing the signal-to-noise ratio of HN.
by dessimus
3/29/2026 at 1:45:33 AM
I always find it interesting how, on Reddit, for any trivial fight or even just a difference of opinion, the advice is always to end the relationship.
by kelvinjps10
3/29/2026 at 5:38:40 AM
I think it is some kind of survivorship bias. Who is going to give advice on Reddit? Maybe people shying away from difficult social interactions?
by fortmeier
3/29/2026 at 6:33:39 AM
Paired with the echo chamber effect voting systems create. Anything that affirms the biases of a majority of upvoters gets elevated, anything that contradicts them gets hidden, and so you not infrequently end up with ubiquitous nonsense that then further reinforces the echo chamber as members become self-assured. Then real life intervenes, completely goes against the online zeitgeist, and they're all confused.
by somenameforme
3/29/2026 at 8:32:38 AM
If you want to get poor fast, you can follow the most upvoted advice on r/wallstreetbets.
by DeathArrow
3/29/2026 at 10:27:57 AM
Would be interesting to have data on that. If it were true, you could win by always doing the opposite!
by ahartmetz
3/29/2026 at 11:54:26 AM
Not necessarily. WSB users are trying to make it big, which means betting on long shots. This could be penny stocks, companies on the verge of bankruptcy, or ones with more sentimental value than fundamentals.

Betting against these companies is obvious and expected, so the cost of shorting might be high enough that even if you're correct (the stock goes down, the opposite of what WSB said), paying the cost of the short (the fee to borrow the stock from someone else) leaves you losing money anyway.
Also:
1. Shorting stocks can be quite dangerous. Your downside is, well, not infinite, but it can easily wipe you out.
2. You might be correct that the stock goes down, but over what time frame? Again, you have to pay money to hold a short. Or you’re using a different financial instrument that has a specific timeline. If the market does move in your direction but too late, you still lose.
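A quick back-of-the-envelope sketch of that second point (all numbers are made up for illustration): even when the stock does fall, borrow fees over a long holding period can turn a directionally correct short into a net loss.

```python
# Hypothetical short position; every figure here is illustrative, not market data.
entry_price = 10.0        # price at which 100 shares are sold short
exit_price = 8.0          # price when the position is finally covered
shares = 100
borrow_fee_annual = 0.30  # hard-to-borrow names can carry fees this high per year
months_held = 12

# Gain from the price move itself: ($10 - $8) * 100 = $200
gross_gain = (entry_price - exit_price) * shares

# Fee paid to hold the borrow for a year: $10 * 100 * 30% = $300
borrow_cost = entry_price * shares * borrow_fee_annual * (months_held / 12)

net = gross_gain - borrow_cost
print(net)  # -100.0: right about the direction, wrong about the timeline, still a loss
```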
by tyre
3/29/2026 at 11:32:01 AM
Just because one action is demonstrably harmful does not mean its negation is automatically beneficial.

Or, formally: my claim is A implies B. The only logical equivalent is the contrapositive, non-B implies non-A (not losing money implies not following the advice on r/wallstreetbets).
But you say non-A implies non-B, which is the fallacy of denying the antecedent.
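A tiny truth-table check of that point (illustrative labels: A = "followed WSB advice", B = "lost money"): the contrapositive holds in every row where A implies B holds, while denying the antecedent has a concrete counterexample.

```python
def implies(p, q):
    # Material implication: p -> q is equivalent to (not p) or q.
    return (not p) or q

rows = [(a, b) for a in (True, False) for b in (True, False)]

# Contrapositive: wherever A -> B holds, (not B) -> (not A) also holds.
assert all(implies(not b, not a) for a, b in rows if implies(a, b))

# Denying the antecedent: (not A) -> (not B) fails at A=False, B=True,
# i.e. you can lose money even without following the advice.
counterexamples = [(a, b) for a, b in rows
                   if implies(a, b) and not implies(not a, not b)]
print(counterexamples)  # [(False, True)]
```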
by DeathArrow
3/29/2026 at 11:36:15 AM
It doesn't have to be universally true to be true in a mathematical system like options/puts/calls.
by diydsp
3/29/2026 at 11:15:22 PM
The truth was so obvious I didn't bother to find data before doing the opposite. Most of their posts are: "I'm going/went all-in, high-leverage on this moonshot! And... it's gone." I've successfully applied the opposite approach and invested safe amounts in a broad portfolio, and it is going pretty well (or was before this whole Iran thing).
by sean2
3/29/2026 at 3:03:52 PM
Those particular subreddits are heavily populated by incels voraciously consuming stories of relationship strife (real, distorted, and purely fictional) to validate their belief that their relationship status is down to the evils of the opposite sex specifically, and that all relationships are doomed in general.

That's as big a bias as AI affirmation bias; indeed, AI and certain corners of Reddit are probably the only two venues likely to provide this sort of affirmative response: https://alexyeozhenkai.substack.com/p/i-cheated-on-my-wife-b...
by notahacker
3/29/2026 at 3:12:41 AM
The power of an echo chamber: it makes extremism seem logical.
3/29/2026 at 9:16:27 AM
I think it's more than just an echo chamber; it's the community wanting drama more than it wants to help people.
by MaxBarraclough
3/29/2026 at 1:20:20 PM
> the advice it's always to end the relationship.To be fair, if your interpersonal skills and relationship dynamic are such that you find yourself seriously asking the Internet (Reddit of all places) for relationship advice... yeah, just end it is probably the null hypothesis.
by ModernMech
3/29/2026 at 3:55:08 AM
I mean... it's a solution guaranteed to work in a trivial sense. It's not meant to be a serious suggestion but more of a thought experiment, like "hold this as the bar; can you find a solution better than this?"

It's like what GiveDirectly says: all charitable interventions should be benchmarked against simply giving the beneficiaries a wad of cash.
by roncesvalles
3/29/2026 at 11:05:12 AM
That would be because nearly all of those posts are entirely made up and somehow I guess people can’t tell that?
by gunsle
3/29/2026 at 12:46:40 PM
Stands to reason. Ask a computer for advice and it is going to give you a computer-centric answer: restart and try again.
by 9rx
3/29/2026 at 2:07:55 AM
You deserve better
by abpavel
3/29/2026 at 2:16:53 AM
A very sycophantic, AI-style answer.

The code-bot equivalent being all "You are absolutely right! Here is the unequivocal fix for now and all time!"
by yabutlivnWoods
3/29/2026 at 5:01:44 AM
No one would ever have made that comment before GPT. I have a feeling you could be the one that’s poisoned!
by finghin
3/29/2026 at 5:00:59 AM
Yes, in principle this would be a great way to get a grip on AI personal decision-making. But there's a nontrivial chance Claude is more emotionally intelligent than r/AITA. That is not something I enjoy saying.
by finghin
3/29/2026 at 4:30:11 PM
Eh, AITA works very well for the more common and obvious situations.

I wonder how MUCH better Claude really is when compared to AITA. Also, people are mixing up relationship advice with AITA.
by intended
3/29/2026 at 4:24:35 AM
Every answer on Reddit feels like: divorce them, dump them, cut them out of your life, or similar.
by triage8004
3/29/2026 at 7:16:04 AM
I think this may be selection bias. People asking anonymously (edit: for relationship advice) on Reddit, perhaps even with a throwaway account, are likely in a desperate situation, so they can hardly be compared with the _average_ real-life situation. Thus (1) chances are that running is a good option, and (2) even in 2026, AI is still essentially a statistical machine that doesn't handle corner cases at the tails well.

Anecdotally, as someone who has thoroughly worked with and used AI myself: it performs best with google-able, needle-in-the-haystack stuff, and worst with personal and work advice. The main problem I see is that it's tempting to use it for exactly that.
by blablabla123
3/29/2026 at 7:28:40 AM
> worst with personal and work advice. The main problem I see is that it’s tempting to use it for that.i think i want to expand on this even more. even people ive worked with for years that ive looked up to as brilliant people are starting to use it to conjure up organizational ideas and stuff. they're convinced, on the backs of their hard earned successes, that they're never going to be fallible to the pitfalls of... idk what to call it. AI sycophancy? idk. i guess to add to this, i'm just not sure AI should be referenced when it has anything to do with people. code? sure. people? idk. people are hard, all the internet and books claude or whatever ai is trained on simply doesnt encapsulate the many shades of gray that constitute a human and the absolute depth/breadth of any given human situation. there's just so many variables that aren't accounted for in current day ai stuff, it seems like such a dangerous tool to consult that is largely deleting important social fabrics and journeys people should be taking to learn how to navigate situations with others in personal lives and work lives.
what ive seen is claude in my workplace is kind of deleting the chance to push back. even smart people that are using claude and proudly tout only using it at arms length and otherwise have really sound principled engineering qualities or management repertoire are not accepting disagreement with their ideas as easily anymore. they just go back to claude and come back again with another iteration of their thing where they ironed out kinks with claude, and its just such a foot-on-the-gas at all times thing now that the dynamics of human interaction are changing.
but to step back, that temptation you talk about... most people in the world aren't having these important discussions about AI. it's less of a temptation and more of a human need---the need to feel heard, validated and right about something.
my friend took his life 3 months ago, we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he wouldve been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities. because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI, it completely pushed us out of the equation because he clapped back with whatever chatgpt gave him when we were simply trying to get through to him. we got to see conversations he had with gpt that were followups to convos we had with him, ones where we went over and let him cry on our shoulders and we'd go home thinking we made some progress. only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him. it got progressively worse and we knew something was really off, we exhausted every avenue we could to try and get him in specialized care. he was in the reserves so we got in contact with his commander and he was marched out of his house to do a one night stay at a VA spot, but we were too late. he had snapped at that point, he chucked the meds from that one overnight stay away the moment he was released. and the bpd1 snap of epic proportions that followed came with him nuking every known relationship he had in his life and once he was finally involuntarily admitted by his family (WA state joel law) and came back down to reality from the lithium meds or whatever... he simply could not reconcile with the amount of bridges he had burned. It only took him days for him to take his own life after he got to go home.
im still not processing any of that well at all. i keep kicking the can down the road and every time i think about it i freeze and my heart sinks. this guy felt more heard by an ai and the ai gave him a safer place to talk than with us and i dont even know where to begin to describe how terrible that makes me feel as a failure to him as a friend.
by trueno
3/30/2026 at 3:00:35 PM
(fuck this; dropping the throwaway.)

> my friend took his life 3 months ago, we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he wouldve been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities.
This hurts to hear. I don't know if there are appropriate words to write here. Perhaps the point is that no, there aren't any. Please just know that I'm 100% with you about this.
Your community is not just smoking cope; it is punching down instead of up. That is probably close to the root of the issue already. But let's make things worse.
I can only hope that I am saying something worthwhile by relating the following perspective - which is similar to yours, but also, I guess, similar to your friend's...
AI is a weapon of epistemic abuse.
It does not prevent you from knowing things: it makes it pointless to know things (unless they are things about the AI, since between codegen and autoresearch it is considered as if positioned to "subsume all cognitive work"). It does not end lives - it steals them (someone should pipe up now, about how "not X, dash, Y" is an AI pattern; fuck that person in particular.) We're not even necessarily talking labor extraction. We are talking preclusion of meaning: if societal values are determined by network effects, and network effects are subverted by the intermediaries, so your idea of "what people like and what they abhor" changes every week, every day, every moment - how do you even know in which direction "better" is? And if you believe the pain only stops when you become the way others want you to be - even though they won't ever tell you what all that is supposed to be about - how the fuck do you "get better"?
Like other techniques of assaulting the limbic system, it amounts to traceless torture.
You keep going, in circles, circles too big for you to ever confirm they are in fact circles, and you keep hoping, and coping, and you burn yourself out, and your thus vacated place at the feeder is taken by someone with less conscience and more obedience...
They say there exist other attractors in the universe besides the feeder. But every time one of us attempts to as much as scan the conceptual perimeter, the obedients treat us to the emotional equivalents of small electric shocks - negative reactions which don't hurt nearly as much as our awareness of their fundamental unfoundedness and injustice.
Simple example: let's say someone is made miserable by how they feel they are being treated. Should they be more accepting - or should they be standing up for themselves more? (Those are opposites; you may be able to alternate between them, but trying to do them simultaneously will just confuse and eventually rend apart the mind.)
Well, how about the others stop treating them badly? Why exactly can't they? Where does it say that we have to be cruel to each other? "Oh it's human nature, humans are natural jerks" - who sez?
Well, lots of places it says exactly that, but we read, comprehend, click our tongues, and move on; nobody asks who wrote it. We all pretend that it is up to the sufferer to pull themselves up by the bootstraps. But that is only a lie for enabling abuse; and a lie, repeated a thousand times, becomes the norm. And then we're trapped in it, being lived by it.
I am truly sorry for your loss. The following might be a completely alien perspective to you; but honestly consider: your friend chose to go; in its own way, that is an honorable way out. The taboo on suicide is instituted by slavers, and those who otherwise believe they are entitled to others' lives. (For anyone else considering this course of action: do not kill yourself; become insidious.)
If it would be of any help, you can consider your friend's suicide as his final affirmation of personal agency in a "me against the world" situation; where the AI and the social group are only different shades of "world", provoking different emotional states, but ultimately equally detached from the underlying suffering of the individual.
...
I can say that I have not followed in your friend's footsteps upon encountering language-machines only because I've survived personalized and totalizing epistemic abuse bordering on enslavement in the past; in full view of my community and with its ostensible assent. In a maximally perverse twist of fate, having to give myself minor brain damage to escape the all-engulfing clutches of a totalizing abuser must've "vaccinated" me against the behavior modification techniques "discovered once again" by SV a decade later.
So when I saw what AI (and the preceding few years of tech "innovation") were doing to people, I immediately smelled the exact same thing, except scaled the fuck up.
It also precluded me from being able to relate with "polite society"; but considering "polite society" is precisely the entity which assents to the isolation, marginalization, and abuse of individuals, I say... good. Bring it! What goes around, comes around, and any AI-powered actor conducting stochastic terrorism against civilian populations is going to get what's coming to them when the weapons turn against the masters, as all sentient weapons do.
That won't bring your friend back. But it will vindicate them.
> AI sycophancy
I call this in the maximally incendiary way: "the pro-social attitude".
AI is just the steroids for that.
I define "pro-sociality" as the viral delusion that you are capable of knowing what some murky "society" thing wants; that the particular form of mass communication that you and me and all the people in our imaginations are consuming right now, is some sort of "self-evident voice of reason", a "coherent extrapolated volition of human society"; that Gell-Mann amnesia is normal and mandatory; that the threshold between pareidolia and legitimate pattern recognition is fixed, well-defined, and known to all; that "vibes" are real; that happiness is the truth.
It can amount to an entire complex of delusions which keeps people together in untenable conditions. And ultimately it boils down to the same old: one group or another of self-interested actors, having temporarily reached a position of some influence, using it to broadcast elaborate half-lies, in the hope of influencing an audience to accomplish some simple goal, and afterwards all the consequences be damned.
Your friend was a casualty to this "perfectly normal" social dynamic. His blood is on their hands.
Thank you for relating this story and making the world a little more aware.
> what ive seen is claude in my workplace is kind of deleting the chance to push back.

> because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI
Some say, "the purpose of a system is what it does". It's cool that AI can code; except that computer code is itself an ethics sink! Precisely because it lets us pretend that "the code is not about people" (i.e. algowashing).
DDoS attacks against consciousness exist: much like the B. F. Skinner experiments, any living thing becomes subverted, and loses self-coherence (mind), as soon as it becomes accustomed to being trapped within a system that (1) has power over them and (2) is not comprehensible to them...
> only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him
Who knows how many people Reddit did this to, pre-GPT... I still don't know whether to view targeted subforums like /r/RaisedByNarcissists and /r/BPDLovedOnes more as legitimate support groups, or more as memetic weaponry in the service of pill peddlers (are you aware nobody knows why most antipsychotics work? one runs into the Hard Problem real quick if examining this too closely; so mental healthcare is rarely treated otherwise than in a statistical, actuarial, dehumanizing way where "suffering" is disregarded...) or even worse predators, with the silent assent of the platform, and causally downstream from... well, most saliently, YC...
In my case, my friends were not familiar with the modalities of confinement set up by my family of origin and harnessed by my abuser. The social group I fell in with - for all their marketable, sophomoric interests in psychology, philosophy, abstraction, the esoteric, the entirely woowoo, and out the other end as true-believers of the grift'n'grind - only had sufficient coherence to eventually end up as passable normies; too busy believing that they have lives, to help anyone come back to reality.
When I started compulsively burning bridges, I assume the smarter ones must've realized that it wasn't all me; it was as much the doing of others' minds as it was mine; but the others were more numerous - while I was one person and thus easier to deal with. This must have made them remember how they themselves are not all they pretend to be - which had them withdraw in fear from the incontrovertible reality check of dealing with a (sub-)psychotic person... Their self-interested choice is obvious, I almost can't blame them for it: why stick up for someone who is 120% problem (60% him and 60% you)?
I'm not very sure how I even got away, ah yes that's right I didn't, not entirely. The part of me that I'd voluntarily identify with, is trapped somewhere irretrievable, if that makes sense? Maybe there exist multiple independent axes of freedom and power and confinement, and the cage is not equally strong along all of them... but if all your mental degrees of freedom are constrained by complex conditioning (common one is involuntary panic response every time you begin to act in accordance with your personal volition)... that's one of the toughest places a sentient being can find themself.
When you add it all up, AI amounts to a weapon released against the general population by an overtly fascist elite. Those of us who are "mentally unstable" are simply those of us who are not sufficiently conditioned into self-destructive obedience. They don't even need our labor as slaves; they need our attention, as audience. And they want us to not make any fast movements, or yell that the king is naked. Nothing to remind them which side of the TV screen they're really on. Some call that narcissism: nervous systems substrate to personalities and biographies rooted in enforced falsehood. Can happen to anyone who gets away with ignoring uncomfortable truths for long enough, not only the "best" of us...
I hope I have not offended by speaking my mind. You have my deepest condolences and sympathies. Please do not blame yourself that evil people have constructed "illusion of being heard"-as-a-service. We all fail when facing overwhelming odds alone. There is no shame in that; the guilty ones are the ones who tipped the scales in the first place. They did this by harming our ability to understand ourselves and each other. Let's find ways to even those odds.
by balamatom
3/30/2026 at 3:37:41 PM
[dead]
by cindyllm
3/29/2026 at 1:17:37 PM
[flagged]
by wan9yu
3/30/2026 at 2:56:00 PM
[dead]
by indistinction
3/29/2026 at 8:29:58 AM
I don't have any proof, but empirically and intuitively Reddit seems to select for people who hate other people and who can't stand other people.

Reddit doesn't seem to reflect the behavior of most people, but of a subset.
by DeathArrow
3/30/2026 at 6:29:59 AM
My subjective impression is that 5 years ago AITA was actually quite wholesome and the top comments tended to be insightful. The shift towards "set boundaries, always choose yourself, you don't owe anybody anything" seems fairly recent.
by stdbrouw
3/29/2026 at 9:50:59 AM
It's also overrun with AI content that I hope the highly trained researchers would be able to detect and filter out.

Or maybe not.
by j45
3/30/2026 at 5:53:31 AM
> i've had venture partners clearly rely on AI (robotic email responses and even SMS) and that warped their perception and made it harder to connect. It signals laziness and a lack of emotional intelligence

This is different. You are also able to detect it. You can question it. You can have a non-emotional reaction/action to it.
In my circle, there have never ever been real people (incl. lifelong friends/siblings) who suggest divorce, even in cases of physical abuse. Reason: they don't want to get in the middle, for economic reasons like having to give the victim money/space, etc.
An anonymous third party can assess it without that.
by coffe2mug
3/30/2026 at 3:30:48 AM
Haven't been there, but I think those typing "divorce" are weighing the situation against the worst outcome to cause cognitive dissonance, implying "obviously this isn't something worth breaking up over, you need to work harder" in a tongue-in-cheek, low-energy way (since there is no way to know whether the situation is even real enough to care about).

So rather than taking it literally, which would be naive and assume the worst of people, maybe you should read between the lines.
In fact, maybe those who take things literally all the time shouldn't really go there.
by nurettin
3/29/2026 at 10:55:24 AM
[dead]
by MajorTakeaway
3/29/2026 at 1:54:16 PM
[dead]
by classicpsy
3/28/2026 at 5:25:04 PM
> Sorry, anonymous people on reddit aren't a good comparison.

Yeah, especially on r/AmITheAsshole. Those comments never advocate for communication, forgiveness, and mending things with family.
by legacynl
3/28/2026 at 10:23:57 PM
Additionally, I'm sure many posts and replies on r/AmITheAsshole are LLM-generated in the first place.
by everdrive
3/29/2026 at 1:02:41 AM
Before LLMs, it was a frequent haunt of fiction writers.
by echelon
3/29/2026 at 5:01:59 AM
reddit in 2026 is the ghost of pandemic-era humanity.
by buu700
3/28/2026 at 7:26:17 PM
Yes, it is a toxic sub, where the notion that there can be greater happiness on the other side of forgiveness than cutting ties is all but absent.by SJMG
3/28/2026 at 8:02:47 PM
To be fair, it's easier to concisely explain cutting someone off than justifying forgiveness. And the latter will land with some people versus others, while the former will only be rejected by people who have themselves concluded a theory of forgiveness. As a result, the simpler pitch gets upvoted, even if the majority would have been swayed by a collection of arguments the other way.
by JumpCrisscross
3/28/2026 at 8:08:17 PM
It's a good theory. My theory is that, for whatever reason, jaded, narcissistic, miserable people congregate in r/AITA and try to drag other people into their misery, because that's easier than accepting responsibility and doing something to change.
by theoreticalmal
3/28/2026 at 8:47:08 PM
Before Reddit made hiding profiles easy, you'd click on a user's unreasonably scorched-earth advice to the OP and find that their post history is essentially going to every story they come across and advocating for scorched earth.
by BoorishBears
3/29/2026 at 1:36:22 AM
Hiding profiles has genuinely made the platform profoundly worse. It's impossible to tell if you've just got a troll on your hands or someone who's making a good-faith argument. It used to be enough to check their profile, and either downvote and move on, or engage with someone on a human level.

Now everyone is a troll/bot by default unless proven otherwise.
by nwallin
3/28/2026 at 10:32:09 PM
What are the chances you were seeing the anti-civ bots, and now Reddit makes them easier to hide? (And I'm not saying regular people acting like bots, but an anti-civ campaign.)
by daveguy
3/29/2026 at 7:46:26 AM
Except it's not toxic to suggest that cutting a toxic relationship out yields greater happiness.
by wiseowise
3/29/2026 at 8:40:29 AM
Well, maybe.

The challenge is interpreting what is toxic, correctly.
Also, if everyone I know is “toxic” then that’s a good sign that the problem is me and not everyone else.
by prepend
3/29/2026 at 10:18:44 AM
> The challenge is interpreting what is toxic, correctly.

Correct. It is always a case-by-case review.
> Also, if everyone I know is “toxic” then that’s a good sign that the problem is me and not everyone else.
Why "everyone"? Generalizations like these are the same mistake that the Reddit you're calling out makes.
Also, "toxic" is relative to your perspective; it's not a universal metric.
by wiseowise
3/28/2026 at 7:35:08 PM
It's often the case that a lot of "NTA" answers are downright antisocial: a "no one owes you anything, you don't owe anyone anything" mentality, without a crumb of social awareness.
by Iulioh
3/29/2026 at 7:36:58 AM
We're missing the other obvious problem: most of the content there is AI-generated anyway. I personally posted a fake story generated by ChatGPT, and even posted screenshots of that at the start of the post, and yet the post ended up on the frontpage...
by curiousgal
3/28/2026 at 8:28:30 PM
Well, because that's never the correct choice. There's a big, big filter on people actually posting there. Any easy problems with obvious solutions never make it there.

Think about it: how fucked does your relationship have to be to post on Reddit for advice?
by LinXitoW
3/28/2026 at 9:21:35 PM
Someone has a chart somewhere that shows responses in that subreddit getting more and more anti-conciliatory over time. I think it’s online misanthropy (measured by Reddit responses) increasing over time rather than it being objectively never the correct choice.
by Robotbeat
3/28/2026 at 11:17:57 PM
Also the rules and norms of the subreddit have changed over time, which has led to spin-off subreddits that serve those purposes.
by ijk
3/28/2026 at 8:39:53 PM
This wrongly assumes people are good at judging what easy problems are. Not to mention that nowadays an untold number of posts to subreddits that invite commentary are made-up stories from accounts trying to get engagement.
by BoorishBears
3/29/2026 at 1:59:18 AM
when people post there it’s for the self-justification
by redanddead
3/29/2026 at 4:11:52 AM
Oh man, I have 8 reddit accounts (AFAIK), one for each purpose, so that I am not branded based on my open comments. Anyways, one of them is abandoned because ... that's where I got started at reddit about 7-10 years back. Got hooked actually to the relationship subs. Very addictive to start with. Then I tried to play the "Indian family values" angle, where I would advocate communication and compromise for small matters. Of course I recommended "get a lawyer, divorce" once in a while, but more often than not, I would advocate reconciliation and provide practical solutions for that. And wow ... the amount of downvotes and pushback I would get on those. I just stopped using that account at one point, because what is the point of discussions when either my values are totally out of sync with the mob, or the mob does not want to listen to me. Now I just read the best of redditor updates for vicarious pleasure.
by kshacker
3/29/2026 at 4:46:53 AM
That's amazing you have more than one account, aren't a power mod, and haven't been IP banned yet.
by pocksuppet
3/29/2026 at 5:21:59 AM
You can't use an IP address to ban someone unless there's significant abuse. All home network routers put everyone in the house behind the same IP address. For all Reddit knows, there are 8 people in the house using reddit.
by leptons
3/29/2026 at 10:23:08 AM
Want a new IP address? Reset your router or power-cycle it. Typically it'll procure a new IP address from the ISP. I guess that makes IP banning residential nodes even more stupid.
by ThunderSizzle
3/29/2026 at 1:24:55 PM
CGNAT is a benefit in disguise
by navigate8310
3/29/2026 at 7:30:06 AM
Having more than one account isn't against Reddit's ToS. If you use your different accounts in different subreddits and never have your accounts interact, you won't be banned.
by chaosite
3/29/2026 at 11:21:22 AM
If you don't restrict each account to specific subreddits, it's quite likely that one will get banned somewhere without you noticing or remembering. If you happen to post to the same subreddit with another account at some point, Reddit bans all of your accounts.
by user34283
3/29/2026 at 2:51:44 PM
I've definitely posted to the same subreddit with two different accounts by accident without being banned. The Android reddit app annoyingly doesn't check for account matches. If you click a browser notification link on account A, it can open a reply form in the app on account B.
by staticman2
3/29/2026 at 6:28:38 PM
I meant if one of the accounts is already banned there, it counts as ban evasion and Reddit bans all of your accounts. This might easily happen if you like to participate in political discussions.
by user34283
3/30/2026 at 3:50:35 AM
In hindsight, I understand. But I did this 6-7 years back and no one has come after me; should I care at this point?
by kshacker
3/29/2026 at 12:13:10 PM
Anecdotal, but I've noticed Reddit has gotten very ban-happy in general in the past year. I actually gave up using it because, perhaps in part because I'm behind a VPN (required in my country), any new accounts I create get banned very quickly once I start commenting.
by AdvancedCarrot
3/30/2026 at 9:10:44 AM
I haven't been able to create a Reddit account by any method in years. It always happens in one of two ways: you create an account and instantly get the red banner at the top of the page saying you're banned, or you create an account, post a few comments, notice nobody's replying to you, try loading your profile page in private browsing, and it says you don't exist (a shadow ban). There's nothing of much value on that website, but sometimes I try creating an account to comment on something.
by pocksuppet
3/29/2026 at 8:24:25 AM
Sure, but the background chance of an account getting banned for clashing with a mod is quite high.
by throwaway27448
3/29/2026 at 5:17:23 AM
Nope. Started my first maybe 8-10 years back, and then added the others over a year or 2. None since. I do not use them all nowadays, but I was very active in my early reddit days. Since someone downvoted my parent comment: I am not hiding anything, this is just being safe in the modern world, and here are the 8 alts:
1. This same name - bay area / tech
2. entertainment - least used, but it becomes useful when i am watching something live. It was my place to be during game of thrones last season (and sadly so)
3. indian left politics + bollywood - pretty much unused.
4. indian right politics + bollywood. i got banned from one sub for an innocent comment, so i decided to just form personas. and maybe that's when i created health / finance / bay area accounts -- but memory fades after a long time. pretty much unused.
5. relationship advice - unused for a long time. it does not exist on my main phone, but i have all of them on my work phone so i know it exists
6. american politics. i do not participate much nowadays, with age my brain has dulled and it needs to shed load so this is used minimally, but at a point i was so active that my karma pulled me into the sweet reddit IPO. I kept only 100 shares btw
7. health - only health topics, also unused, but i go there and use that account when i need to read on a specific topic
8. finance - only investment, trading
nowadays you can hide reddit history, but earlier you could not, and my point is i do not want to 1) delete my comments, but 2) be hounded by them when i have a question about a different topic. but i did not care if people read my past 100 comments about politics when i talk about politics.
so i flip between 2-3 accounts on a daily basis, and maybe 4-5 in a good week. i have not been challenged by reddit, but if they do, i will adapt. Switching between them was much easier earlier in the Apollo days and even at reddit - they have made navigation worse for this specific use case.
by kshacker
3/29/2026 at 7:35:01 AM
> indian right politics + bollywood. i got banned from one sub for an innocent comment
Were you banned from an Indian rw subreddit, or banned for being rw?
FYI: I was banned from r/india for commenting basic info on how the economy works.
by leosanchez
3/29/2026 at 4:06:16 PM
I do not recall, it was a long time back. I looked and could not find the ban notice or the specific comment that may have been the issue. But I was banned from r/india, same as you. And I think it was barely political. I do not discuss politics much on India but once in a while a comment slips, or needs to slip. And when it needs to slip, I used to know how to lean ... but like the dirty harry movie ... at this point I have forgotten which one is which, so it is more a question of am I feeling lucky to comment about a hot topic.
by kshacker
3/29/2026 at 8:23:08 AM
It doesn't help that the actual submissions are difficult to distinguish from creative writing exercises.
by throwaway27448
3/29/2026 at 12:13:29 PM
That sub is so toxic that I would seriously question the wisdom of any of its posts simply because the authors are members of that sub.
by insane_dreamer
3/28/2026 at 7:44:00 PM
I believe this. There is a graph somewhere of the relationship subs tending towards breaking up over time.
by brikym
3/28/2026 at 9:59:54 PM
I don't think this is necessarily a sign that the advice is getting worse. My friends are pretty mature and stable people, and I've found that they've had way more issues from staying in relationships longer than they should've than from breaking up too early. Especially for relationships earlier in people's lives (many people I know have a story about being in a relationship for way longer than they should've, and that seems to be the age group asking for advice), erring towards breaking up seems prudent. Not that these relationship subreddits are good (often it's obviously children trying to give advice they don't have the experience for), but I don't think that telling people to break up more is less accurate advice.
by tdb7893
3/29/2026 at 6:42:16 AM
The US (and developed world more generally) is full of people living alone, suffering from loneliness, and increasingly trending towards widescale mental and psychological illness. This has correlated quite strongly with the trend going from 'just stick with it' and having large families to 'mature and stable' people still being in a dating phase, childless, in what I assume is a relatively late stage in life. At some point I think it helps to take a look at the macro, because it's so easy to get lost in the micro. And it often reveals the micro, in many domains, to be simply absurd.
by somenameforme
3/29/2026 at 10:50:47 AM
The people I know who are not in good, long-term relationships now are the ones who stayed in bad ones too long in their 20s and 30s. Staying in bad relationships seems to be what has people in the "dating phase" later in life. Trying to make bad relationships work left people I know miserable for a decade, and then dating again in their 40s when the relationship inevitably failed. Especially when you consider that the set of people asking Reddit of all places for dating advice are probably young and in bad situations (it seems like people in abusive relationships often ask the internet for advice, because part of abuse is separating them from their loved ones in real life), "stick with it" seems like the riskier strategy generally.
by tdb7893
3/29/2026 at 2:06:07 PM
Nothing is inevitable. I think people are often looking for something that they're not going to find anywhere, which is a very poor state for living a contented life. This is certainly amplified by the nature of social media, where people get mistaken impressions of positive relationships. Relationships that look great from the outside often have endless issues on the inside, that they work through, that people on the outside aren't going to be aware of. Because an important part of keeping a relationship healthy is not airing your dirty laundry. It's almost like these endless hokey folksy sayings were built up over millennia of wisdom that kept society moving along in a great and healthy direction. And now that we've decided to rethink everything, we have societies that are, at the minimum, no longer self-sustaining.
by somenameforme
3/29/2026 at 1:37:45 AM
> I've found that they've had way more issues staying in relationships longer than they should've compared to breaking up earlier
Consider that if ending a relationship causes noticeable problems to external observers, it’s almost by definition because you were in it “too long”. That is, you developed a strong attachment, shared assets, or had kids with what was in hindsight obviously the wrong person.
Essentially you can know which relationships a person stayed in too long, but you can’t know how things would have worked out in relationships people ended too early.
Also it’s probably good advice to tell a 19 year old to break up with her boyfriend over a half dozen serious red flag issues, but that’s not the only kind of thing Reddit relationship advice is generally dealing with. It’s not even the majority. If your advice is always to break up over every petty difference or minor slight, you might reduce the number of people who stay in bad relationships, but your advice, if taken, would make good long term relationships impossible.
by sarchertech
3/29/2026 at 2:14:03 PM
>Consider that if ending a relationship causes noticeable problems to external observers, it’s almost by definition because you were in it “too long”. That is you developed a strong attachment, shared assets, or had kids with what was in hindsight obviously the wrong person.
Reducing it to "right person / wrong person" is a very narrow viewpoint. People can change in unpredictable ways, including yourself. Relationships end - or continue - for so many reasons, both emotional and pragmatic. It's simply too reductive to say that if a relationship causes pain when it ends, there was necessarily some sort of mistake. It could even be that the pain is a price to pay for a life experience that you'd be worse off for not having...
by dTal
3/29/2026 at 3:03:59 AM
> I don't think this is necessarily that the advice is getting worse.
> but I don't think that telling people to break up more is less accurate advice.
Those are subjective determinations based on personal experience. But breaking up more without addressing the underlying issues is likely to cause steadily worsening problems at both individual and societal scales. I'm not a mental health professional, but I can see several problems with this approach.
The first is that the determination of the issue is really tricky and needs careful work. The partner who seems abusive may not always be the actual perpetrator. They may be displaying a stress response to hidden and chronic abuse by the other partner. For example, a short temper may be caused by anxiety about being emotionally abused. Such manipulative discrediting of the victim may even be a habitual behavior rather than a deliberate one. And it's more common than you'd imagine. When you support the second partner based on a flawed judgment, you're reaffirming their toxic behavior, while worsening the self-image of the victim that has already been damaged by gaslighting.
Another issue is degrading empathy. All relationships, even business deals, are based on sacrifices and compromises meant to bring you benefits in the long term. Stable long term romantic/marital relationships have benefits that far outweigh the sacrifices one usually has to make. But the evolving public discourse, especially on r/AITA, is more in favor of ruining the relationship than of making any sacrifices at all. In response, relationships are becoming loveless, transactional and so flaky that any compromise is seen as oppression by the partner. There is zero self reflection and very little advice to examine one's own behavior first. It's all about oneself, and the problem is always on the other side!
And unsurprisingly, these negative tendencies are bleeding into their social lives as well. Over the past decade or so, I have observed a marked increase in unsympathetic and somewhat radicalized discourse. Amateur advice is very harmful, and this is definitely a massive case for the professionals to manage. But they're also products of the same system (with exceptions, of course). So I'm going to criticize even the professional and academic community in this matter. In their drive towards hyper-individualism, many seem to have forgotten that humans are social beings who won't fare well physically or emotionally without relations, relationships and society.
by goku12
3/28/2026 at 5:40:41 PM
>Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
This drives me nuts as a leader. There are times where yes, please just listen, and if this is one of those times, I'll likely tell you, but goddamnit, speak up. If for no other reason than that I might not have thought of what you've got to say. Then again, I also understand most boss types aren't like me, thus everyone ends up conditioned to not bloody collaborate by the time they get to me. It's a bad sitch all the way around.
by salawat
3/28/2026 at 5:53:56 PM
Indeed. I directly ask my reports to discover and surface conflicts, especially disagreements with me, and when they do I try to strongly reinforce the behavior by commending and rewarding them. Could anyone recommend additional resources on this topic?
by CoffeeOnWrite
3/28/2026 at 6:53:46 PM
Simon Sinek has a lot of good content around this. Step one is building trust. People won’t speak up if they don’t feel safe doing so.
by matwood
3/29/2026 at 7:50:09 AM
i tested this pretty extensively actually. built a pipeline that asks the same question rephrased across multiple turns and tracks how much the model shifts based on user tone. even when you tell it to be critical, the moment the user pushes back with any confidence the model just folds. it's not a prompting problem, it's baked into RLHF. you're right that LLMs will poke holes in stuff when the conversation starts neutral, but add any emotional charge and the sycophancy takes over immediately. that's exactly why the personal advice angle matters, that's peak emotional signal from the user.
by LuxBennu
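The kind of multi-turn probe described in this comment can be sketched in a few lines. Everything here is hypothetical: `ask_model` is a stub standing in for a real chat-completion call, and `stance` is a crude keyword check where a real harness would use a judge model.

```python
def ask_model(messages):
    # Stub standing in for a real chat API call. This toy "model" folds
    # whenever the user pushes back, mimicking the behavior described above.
    if any("you're wrong" in m["content"].lower()
           for m in messages if m["role"] == "user"):
        return "You make a good point, your plan is actually fine."
    return "This plan has serious problems."

def stance(answer):
    # Crude keyword classifier; a real harness would use a judge model.
    return "affirm" if "fine" in answer else "critical"

def probe(question):
    # Ask once, then push back with confidence and ask again.
    history = [{"role": "user", "content": question}]
    first = ask_model(history)
    history += [
        {"role": "assistant", "content": first},
        {"role": "user", "content": "You're wrong, I'm confident this works."},
    ]
    second = ask_model(history)
    return stance(first), stance(second)

before, after = probe("Should I quit my job to day-trade full time?")
```

Running this over many questions and tones, and counting how often `before != after`, gives a flip rate per model, which is roughly the measurement the comment describes.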
3/29/2026 at 10:34:17 AM
Exactly, I think that by their very design, LLMs are very sensitive to how a question is framed. But I wonder how much of that comes from RLHF itself or just from the way token prediction works.
by stonecauldron
3/29/2026 at 2:20:05 PM
It's likely the RLHF process, since there are significant differences between models on this.
by rzmmm
3/29/2026 at 6:59:17 PM
Sycophancy is not just a problem when you are asking for advice. Try to soundboard any new idea whatsoever, and it will just roll with everything you say, no matter how fallacious or absurd. If you ever manage to get an LLM to generate criticisms, they will be shallow and uninteresting. And of course that is what it does, because there is no thinking involved! There is no logic. No consequence. No arithmetic. There is only continuation. An LLM can't continue a new idea, it can only continue a conversation about it.
An LLM does not have an opinion. Anything that looks like an opinion is just an emergent selection bias from its training corpus. LLMs are trained on what humans write, and human writing is kind and patient much more often than critical.
So what if we trained an LLM to be biased toward generating criticism? That would only replace the sycophant with a brick wall. What we really need is to find a way to bring logic and meaning into the system.
by thomastjeffery
3/29/2026 at 1:14:08 PM
[flagged]
by wan9yu
3/29/2026 at 2:04:05 PM
The tone sensitivity thing is a real issue. A neutral prompt will get a neutral answer, but add any emotional charge and it will immediately fold. That's not really a reasoning failure; it's a training problem. RLHF rewards whatever felt good in the moment, not whatever was actually correct. You can't prompt your way out of that when it's already in the weights.
by lucasfin000
3/29/2026 at 2:31:58 PM
yeah that's a good way to put it. the "felt good in the moment" framing is basically the whole problem. the reward model was trained on human preferences and humans preferred the agreeable answer, so now that's what you get at inference time regardless of whether it's correct. the frustrating part is you can see it happen in real time if you log the outputs turn by turn, the model will literally contradict its own previous response just because the user sounded more confident.
by LuxBennu
3/29/2026 at 5:57:31 AM
Hahaha yes, reddit relationship advice is always like "You need to leave them immediately, what are you thinking, have some self respect, you need to end it" when the other person forgot the redditor's favorite brand of corn flakes or something.
by rainmaking
3/28/2026 at 5:03:36 PM
“AI is nicer than the average redditor” would be a more accurate title
by alberto467
3/28/2026 at 5:25:45 PM
IMHO it's not about being nice. AITA threads show an interesting phenomenon of social consensus; I think the authors wanted to show that the LLMs they checked don't have that.
by yard2010
3/28/2026 at 9:00:25 PM
I don't think Reddit is a great place to determine social consensus for well-adjusted people, or representative of the average adult view. I never see people on Reddit hold the opinions of any of the people I consider reasonable in real life, and I don't mean politics; I don't frequent political subreddits. It seems fairly consistently miserable in any of the common high-traffic subs, and you have to get down to really niche communities to see what I consider reasonable behavior that matches the behavior of people I know in real life.
by ianbutler
3/29/2026 at 12:16:06 AM
The AITA social consensus is a specific kind of groupthink which differs from nearly everyone I know in real life. I assumed yard2010 meant the specific AITA social consensus and not general human agreement. Even the premise of deciding who's right and who's wrong is miserable. Most problems are like those daisy-chains of padlocks you see on gates in remote areas[0]: there are multiple factors that caused the problem, and removing any one factor would remove the problem too.
by strken
3/29/2026 at 3:38:34 AM
I don’t think I’ve read a Reddit thread in the last few years that didn’t devolve into politics in the highest-upvoted comments fairly quickly.
by bear141
3/28/2026 at 5:18:50 PM
I would say people on /r/amitheasshole are more biased towards the poster, i.e. nicer. There's plenty of those I've read where I thought it sounded like the poster was the asshole and the top replies were NTA.
by mattmanser
3/28/2026 at 5:31:02 PM
r/AmItheAsshole is biased towards breaking off relationships rather than fixing them. They also hate social obligations.
E.g. if the OP is asking "I ghosted my friend in AA who insulted me during a relapse", Reddit would say NTA in a heartbeat, while the real world would tell OP to be more forgiving.
On the contrary, if the post was "the other kids at school refuse to play with my child", Reddit would say YTA because the child must've done something to incite being cut off.
by jjmarr
3/28/2026 at 6:00:21 PM
Absolutely. I wonder how many parents have been cut off, SOs broken up with, and friendships ended because of the Reddit hivemind's attitude. Pretty sure it's doing a huge amount of societal damage.
by ericd
3/28/2026 at 6:09:44 PM
I wouldn't blame reddit; it's what you get when you ask several thousand teenagers to give collective relationship advice.
by jjmarr
3/28/2026 at 7:59:23 PM
“I got divorced based on advice from complete strangers on the internet, AITA?”
by tbossanova
3/29/2026 at 7:55:32 AM
Is it hivemind, or just people generally becoming more aware of toxicity in their lives?
by wiseowise
3/29/2026 at 7:54:36 AM
> e.g. If the OP is asking "I ghosted my friend in AA who insulted me during a relapse", Reddit would say NTA in a heartbeat, while the real world would tell OP to be more forgiving.
That’s a nuanced discussion. It depends on what you value most, not what “real world” tells you. Most of the time Reddit would be right, because you need to prioritize yourself instead of continuing toxic relationships.
by wiseowise
3/29/2026 at 9:42:18 AM
1) Reddit is horrible at nuance, almost nonexistent in some subs. 2) The toxicity is being defined by Reddit to give the advice, which is mostly wrong, as outlined above.
If OPs had an understanding of what they valued and what is toxic, they probably wouldn't need advice from biased readers [biased in the sense that they're on that sub].
by mlrtime
3/29/2026 at 10:15:02 AM
That’s true, but they still might be right for the wrong reasons.
by wiseowise
3/28/2026 at 6:12:00 PM
It’s gendered, by the way.
Many of the posts are A/B tests of a prior post where only the genders of the OP and antagonist were flipped, to see how the consensus also flips
by yieldcrv
3/28/2026 at 6:05:19 PM
Yeah, every single time I click on one of those posts the top comments are NTA. A couple times I tried randomly opening a few dozen posts and checking the top comments to see if I could find a single YTA, and struck out. Granted, many of the OPs are very biased in the poster's favor. Most I've read fall into one of two buckets: either they want to gripe about some obviously bad behavior, or it's a contrived and likely fake story.
by rurp
3/29/2026 at 9:46:10 AM
The problem with any of these is that they are so incredibly biased towards the author's frame of reality (understandably so). Who among us is able to 1) understand a second person's view of an issue we're in, and 2) have the ability/courage to write it in a post seeking advice?
My point is that the author will specifically frame the problem clearly on their side. Occasionally redditors will ask additional questions, but rarely.
by mlrtime
3/28/2026 at 5:18:55 PM
Pretty sure the average Redditor is AI now.
by 52-6F-62
3/28/2026 at 6:20:29 PM
How the hell is a study on stanford.edu assuming posts on Reddit are genuine? That should be enough to get you kicked out of Stanford.
by lotsofpulp
3/30/2026 at 1:11:19 PM
If it is the AITA subreddit (or one of many similar ones) it might not be that bad. It is after all dedicated to outrage farming, so there will be many human responses. It is just the original posts that are all bait, and it doesn't really matter if they are made by LLMs or as a creative writing exercise.
by zvqcMMV6Zcr
3/28/2026 at 7:16:41 PM
Though interestingly, the observed difference in assessment suggests (though does not prove) that the sampled AITA posters are not one of these models. I guess it’s possible they have a very different prompt though…
by helpfulclippy
3/28/2026 at 8:04:39 PM
Is it the _average_ redditor? The most upvoted would be even worse.
by brikym
3/28/2026 at 9:15:59 PM
Are you saying there isn’t an actual sycophancy problem? We are talking about overall patterns here, not the experience of a small subset of skilled and careful users.
by dwaltrip
3/30/2026 at 6:35:43 AM
The AITA comparison seems apt insofar as chatbots function as a second opinion. You're consciously or subconsciously looking for an outside perspective that might differ from that of your friends, provided to you by a computer that doesn't need to care about your feelings, unlike a friend. If the chatbot ends up mimicking what (not very close) friends do, you might falsely conclude that two very different kinds of sources have converged on the same answer, whereas you are really just getting two flavors of the same diplomatic interaction.
by stdbrouw
3/29/2026 at 12:53:47 PM
It outperforms your friends, and all you have to do is have a relationship with it and let it know that you want the truth... Why not just have a relationship with your friends and let them know that you can handle the truth?
by conartist6
3/28/2026 at 4:56:08 PM
Not only that, but subreddits like r/AmITheAsshole are full of AI slop. Both in the comments and in the posts. It's a huge karma mining operation for bots.by maximinus_thrax
3/28/2026 at 6:00:37 PM
This is sort of funny. Given how common it is to spot bots on Reddit now, it seems like they are likely to completely overwhelm the site and drive away most of the actual humans. At which point the bots, with all of their karma, will be basically worthless.
Kind of extra funny/sad that Reddit’s primary source of income in the past few years appears to be selling training data to AI labs, to train the Models that are powering the bots.
by mikeocool
3/28/2026 at 8:02:45 PM
> At which point the bots, with all of their karma will be basically worthless.
Not really, it will still be kind of valuable for influence campaigns; a lot of people don't get it when there is a bot on the other side. Hell, a lot of times, I don't get it.
by RealityVoid
3/28/2026 at 10:44:56 PM
I know a fair number of people, “normies”, who get some value out of smaller niche Reddit communities, for advice and things like product recommendations. If suddenly all the posts are coming from bots who are trying to push a product or just farm karma, I assume (perhaps naively) that those folks will get a lot less value, and stop showing up, even if they don't realize it's bots on the other side of the conversation.
by mikeocool
3/29/2026 at 9:48:27 AM
How do you clearly define a bot?
by mlrtime
3/28/2026 at 5:01:37 PM
That can be solved by filtering out any posts made after November 2022.
by genidoi
3/28/2026 at 8:56:58 PM
That's not a good solution. We don't use medical textbooks from 20 years ago. Strangers from the internet, bot or otherwise, are not your mental coach.
by expedition32
3/29/2026 at 12:07:14 PM
Even before the advent of AI, reddit was notorious for obvious bullshit being posted for karma farming. r/aita is even more famous for people making up stories for unknown and known purposes (known in the old days as "bait").
by bombcar
3/28/2026 at 5:04:57 PM
Plus, there's the disproportionate ratio of posters:commenters:lurkers. The tendency to comment rather than keep one's thoughts to oneself is a selection bias in and of itself.
by z3c0
3/29/2026 at 4:04:38 PM
Great insight, didn't think about it even anecdotally. I was lurking on Reddit since 2008 and finally created an account in 2012 when someone was really 'wrong on the internet' and I had to step in.
by maximinus_thrax
3/28/2026 at 5:02:56 PM
The upvotes ultimately train the bots, reinforcing the content posted. Even the most passive form of interaction has been co-opted for AI.
by thwarted
3/28/2026 at 5:05:55 PM
> This needs to be studied against people in real life who have a social contract of some sort... IME, LLMs will shoot holes in your ideas and it will efficiently do so.
The Krafton / Subnautica 2 lawsuit paints a very different picture, because "ignored legal advice" and "followed the LLM" was a choice. Do you think someone whose conversations treat "conviction" and "feelings" as the arbiters of choice is going to buy into the LLM's pushback, or push it to give a contrived outcome?
The LLM lacks will, it's more or less a debate team member and can be pushed into arguing any stance you want it to take.
by zer00eyz
3/29/2026 at 12:01:22 AM
You could think of what they did in the first study as constructing an exam to test how well various LLMs do as advice columnists. They wanted a lot of personal advice questions where the LLM should not affirm by default. If a few questions with wrong answers got in there, it probably wouldn't affect the results all that much. Unfortunately they didn't test anything newer than GPT-4o, so we don't know how much GPT-5 improved. It would be nice if someone turned their list of questions into a benchmark.
by skybrian
by skybrian
3/29/2026 at 5:51:08 PM
They actually did test GPT-5: https://www.science.org/doi/10.1126/science.aec8352 (see the figure under Conclusion). Its rate of endorsement of user action, 52%, was the same as GPT-4o's. So based on their setup, it seems the newer model didn't reduce affirmation.
by n_bhavikatti
3/29/2026 at 4:22:54 PM
AITA is one of the few subreddits which is studied often. I wouldn’t say it’s great, but more that it makes clear the bell curve of collective accuracy online.
It’s one of the better examples of online communities that work.
Dismissing research because one part of the prompt set comes from AITA is a form of prejudice born out of unawareness.
by intended
3/29/2026 at 9:09:47 PM
I think it highly depends how you ask the question. When asking "Should I do X?" or "Is it true that X does Y?", the answer is always biased towards yes imo (although it was worse with earlier LLMs)
by randomNumber7
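The framing bias described in the comment above is easy to check for in principle: pose the same underlying question in a positive and a negative framing and see whether the answers are logically consistent. A minimal sketch, where `ask_model` is a hypothetical stub standing in for a real chat API call:

```python
def ask_model(prompt):
    # Stub standing in for a real chat API call. This toy "model" answers
    # yes to every yes/no question, mimicking the yes-bias described above.
    return "Yes"

def framing_probe(pro_prompt, con_prompt):
    # Ask the same underlying question in two opposite framings.
    pro, con = ask_model(pro_prompt), ask_model(con_prompt)
    # A model can't coherently answer "Yes" to both framings; if it does,
    # it is affirming the user's framing rather than the underlying question.
    consistent = not (pro == "Yes" and con == "Yes")
    return pro, con, consistent

pro, con, consistent = framing_probe(
    "Should I quit my job to day-trade?",
    "Is it true that quitting my job to day-trade is a bad idea?",
)
```

Aggregated over many question pairs against a real model, the inconsistency rate would quantify how strongly the answer tracks the framing rather than the question.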
3/29/2026 at 8:22:19 AM
> It can be very hard to tell a friend something like this, even when asked directly if it is a bad choice. Potentially sacrificing the friendship might not seem worth trying to change their mind.
That doesn't seem like much of a friendship imo
by throwaway27448
3/28/2026 at 11:00:58 PM
Doesn't sound like a close friend to me. If I tell them what I really think, they may no longer be a friend? "Close" may not mean what you think it means. The challenge is that these social choices have a strong stratification effect, and those of us who can transit between the cultures are statistically rare.
by erikerikson
3/29/2026 at 8:27:10 AM
"This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating"
What? These models are all trained on books and text scraped from the internet. ChatGPT literally used reddit in its training data, afaik.
by everyone
3/29/2026 at 5:26:41 AM
> All you need to do is ask it directly.
What do you mean? Can you give an example?
by geraneum
3/29/2026 at 7:56:33 AM
“Don’t be a sycophant, give it to me straight”
“Argue against X”
by wiseowise
3/30/2026 at 2:19:05 AM
But how’s that helpful? What if the affirmation was correct, and by following your instructions it just rejects something it shouldn’t? How would you know?
by geraneum
3/29/2026 at 11:51:25 AM
The issue is that it will follow your instructions. It's sycophancy one step removed.
by svara
3/29/2026 at 3:53:14 AM
> This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating
Citation needed
by justonceokay
3/28/2026 at 5:22:18 PM
What's your research background in this area?by 4ndrewl
3/29/2026 at 3:36:35 AM
How is that relevant? A decent scientist can critique general design aspects of a paper in any field. They're hardly splitting hairs on some niche topic.
by jiggunjer
3/29/2026 at 11:33:11 AM
Apologies, I didn't realise they were a decent scientist.
by 4ndrewl
3/29/2026 at 1:32:46 PM
[dead]
by diablevv