2/26/2026 at 1:25:57 PM
I find OP's communication style abrasive and off-putting, which tracks with them saying they've been coached on this and found that advice lacking. Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.
From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers, though that doesn't make it more attractive).
That's going to cause friction because a team is a _social_ construct.
by james_marks
2/26/2026 at 1:58:00 PM
That's because it was generated by an LLM.
by mmsc
2/26/2026 at 2:01:35 PM
I simply cannot believe people in this post are discussing this as anything other than a complete bot job. Pure clanker vomit.
by Rapzid
2/26/2026 at 2:15:38 PM
I realize it's been "written" by an LLM, but the content could have been written by someone I know. It's eerie how this person thinks exactly the same way. It's never their fault, always the others', and they are always obviously right and no amount of arguing can change their mind.
by sjamaan
2/26/2026 at 2:44:37 PM
"Write an essay about struggling to change a software org that doesn't want to change. Make me the hero. Post it at 1am so it looks like I was up late suffering with the burden of what I know."
This is unfortunately the world we are in now.
by Rapzid
2/26/2026 at 2:41:03 PM
This is not a politically correct thing to say, but there is a class of neurodiverse software developers who display these characteristics, and I suspect the author belongs to this group.
Frankly, this reminds me of Michael O'Church.
by JCDenton2052
2/26/2026 at 1:55:44 PM
Yeah, a lot of the examples made me think "wait, there's something else going on there, right?", which would make sense if the author has difficulty communicating or negotiating their proposals.
In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
by MrJohz
2/26/2026 at 2:16:37 PM
> In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
I do not find anything missing here. This is how things often play out in reality, both in your retelling of it and in what was actually written in the article.
Your retelling: some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real Jira" tasks, fixes the warnings. The team moves on.
Actual article: the warnings get fixed when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.
by watwut
2/26/2026 at 6:27:25 PM
I'm referencing the footnote where the author says that the discussion caused one team member to go and fix the issue. The warnings causing a production issue is, I think, a complete hypothetical.
What this story is missing is an explanation for why people were disagreeing. Like, why is someone not looking at warnings? Is it that the warnings are less important than the author understands? Is it that the warnings come from something the team has little control over? And the solution the author suggests: would it really have changed anything if they already weren't looking at warnings? The author writes as if their proposal would have fixed things, but that's not really clear to me, because it's basically just a view into whether the problem is getting worse, which can be ignored just as easily as the problem itself.
by MrJohz
2/26/2026 at 7:44:36 PM
Someone hacked his site or something, so I can't get back to it. But I thought you meant the situation in one of the first paragraphs, where the team started taking an issue seriously only after an actual problem.
And honestly, I have seen people disagree with and fight literal standard changes like "let's have a pipeline that runs tests before merge" or "database changes must go through the test environment before being sent over".
It is perfectly possible and normal for people to fight change and be wrong, without there being some grave, clever missing reason. I have no problem trusting the author that he was simply right in hindsight.
If you have ever tried to improve processes or a project with persistent issues, the problems the author described are entirely believable. The author does not know what to do in that situation, but he described the usual dynamic pretty accurately.
by watwut
2/26/2026 at 1:33:10 PM
The first two sentences:
> Organizations don't optimize for correctness. They optimize for comfort
...do I need to say it?
by armchairhacker
2/26/2026 at 1:47:19 PM
> One number, never measured before. It doesn't change rules or add warnings, just makes the existing count visible.
Stopped here. That pattern.
I recognize this pattern from this AI "companion" my mate showed me over Christmas. It told a bunch of crazy stories using this "seize the day" vibe.
It had an animated, anthropomorphized animal avatar. And that animal was an f'ing RACCOON.
by Rapzid
2/26/2026 at 2:26:20 PM
LLMs originally learned these patterns from LinkedIn and the "$1000 for my newsletter" SEO pillar pages. Both accomplish a goal. Now it's become a loop.
There is a delayed but direct association between the RLHF results we see in LLM responses and the volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.
// from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md
by Terretta
2/26/2026 at 1:52:37 PM
You are absolutely right!
- It is not X. It is Y.
- X [negate action] Y. X [action] Z.
The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.
If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?
by bossyTeacher
2/26/2026 at 2:07:07 PM
As someone who thinks very much like the author of TFA, I often write like that. I swear I'm not a bot.
by sgarland
2/26/2026 at 2:08:42 PM
Maybe fix your writing then. This is not good writing.
by grey-area
2/26/2026 at 2:31:28 PM
Neither is Vonnegut's (which your short, choppy sentences reminded me of), but he was a very successful and beloved author. I'm in no way comparing myself to Vonnegut; my point is that just because it doesn't appeal to you doesn't mean it isn't good.
Writing is art. Does it get the intended point across? Does it resonate with the reader? Does it make them feel something? Then it is good.
by sgarland
2/26/2026 at 3:31:05 PM
I disagree on Vonnegut. Most human authors at least have a voice; even if you don't like it, it's recognisable and theirs, and I would rarely think to criticise that. It makes the writing come alive. If you truly write like an LLM (there is little evidence here of that), it would not be the same.
LLMs serve up a sort of bland pap with sugary highs of excitement, which resembles a cross between manic advertising copy and a breathless teenager who's just discovered whatever subject they're talking about. They also sometimes confabulate and generate text which is at best tangential and at worst completely misleading.
It's exhausting and if you haven't carefully read what they generate (which most people clearly have not), you should not expect another human to read it.
Just as an interesting taste, here is my copy above rewritten to sound even more EXCITING and ENGAGING.
"They deliver a horrifying concoction – a sickly sweet, manufactured echo of thought, a grotesque blend of relentless advertising whispers and the manic, unearned enthusiasm of a teenager just discovering a world they don't understand! But the truly chilling thing is this: they fabricate. They weave elaborate lies, constructing text that’s not just tangential, but actively, dangerously misleading!
It’s a psychic assault, a draining vortex of intellectual despair! And if you haven’t wrestled with every single word, dissected it, exposed its flaws – and frankly, I suspect most haven’t – then don’t dare expect anyone else to salvage this wreckage! This is not a passive observation; it’s a desperate plea against a future where genuine thought is suffocated by the cold, sterile logic of a machine! We must guard against this, or we risk losing everything!” -- gemma3:4b
by grey-area
2/26/2026 at 5:22:25 PM
I don't disagree with your take on how LLM copy is awful; I just disagree that this was written by an LLM. For example, this paragraph at the end:
> If you're in this position (relied upon, validated, powerless), you're not imagining it. And it's not a communication problem. "Just communicate better" is the advice equivalent of "have you tried not being depressed?"
I've seen "you're not imagining it" countless times from LLMs, but always as the leading sentence in the paragraph; for something like the above, they tend to use em-dashes, not parentheses.
FWIW, Grammarly's AI Detector thinks that 17% of it resembles LLM output, and ZeroGPT thinks that 4.5% of it resembles LLM output.
by sgarland
2/26/2026 at 9:30:41 PM
Your comments don't read like LLM-slop to me.
An occasional "it's not X, it's Y", rule of three, or em-dash isn't atypical, nor is it intrinsically bad writing. LLM-slop stands out because of the frequency of those and other subliminal cues. And LLM-slop is bad writing, at least to me, because:
- It's not unique (like how generic art is bad compared to distinct artstyles)
- It's faux-authentic ("how do you do, fellow kids?")
- It's extremely shallow in information. Phrases like "here's the kicker" and "let that sink in" are wasted words
- The meaning is "fuzzy". It's hard to describe, but connotations and figurative language are "off" (inconsistent to the larger idea? Like they were picked randomly from a subset of acceptable candidates...); so I can't get information from them, and it's hard to form in my mind what the LLM is trying to convey (perhaps because the words didn't come from a human mind)
- It doesn't always have good organization: some parts seem to go on and on, high-level ideas drift, and occasionally previous points are contradicted. But I suspect a plan+write process would significantly reduce these issues
by armchairhacker
2/26/2026 at 2:23:52 PM
It used to be. That's why LLMs adopted it. How do you think they got their preferences? A Magic 8 Ball?
by quotemstr
2/26/2026 at 3:18:23 PM
It was okay writing in the context of marketing. A normal person never wrote like that.
by dw_arthur
2/26/2026 at 2:59:19 PM
Why is it bad writing?
by watwut
2/26/2026 at 2:15:04 PM
> I find OP's communication style abrasive and off-putting
Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.
Competent dictatorship is effective. Look at how much progress Python made under GVR. People who rail against hierarchy and authority, even when deployed correctly, are exactly the sort of people who should be nowhere near anything that requires progress.
Imagine running a military campaign by seeking consensus among the soldiers.
by quotemstr
2/26/2026 at 3:31:05 PM
Consensus works in a democracy because the best thing the government can do to help people is usually nothing.
by mapontosevenths
2/26/2026 at 2:42:15 PM
> Look at how much progress Python made under GVR.
Or, you know, Linus Fucking Torvalds. If you were carrying the success or failure of most of the world's digital infrastructure on your shoulders, you also might be grating to some.
by sgarland