3/1/2026 at 2:25:35 PM
It's worth mentioning that this essay shows some signs of being either partially AI-generated or heavily edited through an LLM. Some of the telltale patterns are there ("It's not X, it's Y"), and the blog going from nearly zero activity between 2015 and 2025 to an explosion of posts and text output since then also raises an eyebrow.
by Spide_r
3/1/2026 at 2:28:27 PM
It's now almost certain that every submission about LLMs will be written (or assisted) by LLMs. That this kind of writing puts a great number of us off is not important to many who seek their fortune in this industry.
I hear the cry: "it's my own words, the LLM just assisted me". Yes, we do have to write prompts.
by thinkingemote
3/1/2026 at 3:57:52 PM
My current policy on this is that if text expresses opinions or has "I" pronouns attached to it, then it's written by me. I don't let LLMs speak for me in this way.
I'll let an LLM update code documentation or even write a README for my project, but I'll edit that to ensure it doesn't express opinions or say things like "This is designed to help make code easier to maintain", because that's an expression of a rationale that the LLM just made up.
I use LLMs to proofread text I publish on my blog. I just shared my current prompt for that here: https://simonwillison.net/guides/agentic-engineering-pattern...
by simonw
3/1/2026 at 7:44:33 PM
My policy as well.
by satisfice
3/1/2026 at 3:14:17 PM
I think it is very fair to say that, in the same way that LLMs have given English majors access to programming, LLMs have also given engineers access to clear communication.
I'm not shy to admit that LLMs, even from 2 years ago, could communicate ideas much better than me, especially for a general audience.
by WarmWash
3/1/2026 at 3:54:19 PM
It's not "clear communication" though. The prose that comes out of LLMs is awful: long, vapid paragraphs with distracting tropes. You can ask them to be concise, but then they file down all the wrong bits of the sentence and lose meaning. There's a reason people bother clocking it and complaining about it: it's *bad*.
It's like everything else that AI can do: looks fine at a glance, or to the inexperienced, but collapses under scrutiny. (By your own admission you're not a great communicator… how can you tell, then?)
by wibbily
3/1/2026 at 4:31:52 PM
> By your own admission you're not a great communicator… how can you tell then?
Thankfully, we don't have to know how to write well to enjoy a well-written book.
by WarmWash
3/1/2026 at 3:57:44 PM
> LLMs have also given engineers access to clear communication.
A lot of the time, the inability to express an idea clearly hints at some problem with the underlying idea, or in one's conceptualisation of that idea. Writing is a fantastic way to grapple with those issues and iron out better, clearer iterations of ideas (or of one's understanding thereof).
An LLM, on the other hand, will happily spit out a coherent piece of writing defending any nonsense idea you throw at it. Nothing is learnt, nothing is gained from such "writing" (for either the author or the audience).
by troad
3/1/2026 at 5:41:53 PM
I recently read a tweet suggesting you ask an LLM to defend a position you know to be false. It's quite eye-opening. I mean, it shouldn't be, if you did debate club etc., or know how lawyers and politicians work. But it's quite revealing how it can piece together a good defense, selectively quoting real facts, imbuing them with undue weight, etc., to make the thesis stand quite well.
by bonoboTP
3/1/2026 at 5:35:45 PM
It's often warping the message or "snapping it to grid", taking off the edge, the unique insight. A lack of clear communication is much more a symptom of unclarity about the intended message, audience, prioritization, etc. I don't doubt that you internally have a clear idea, but sharing it requires thinking about the intended audience and the diff between their current state of knowledge and doubt and where you want to move their thinking. This is a much bigger part than knowing eloquent vocab and grammar tricks.
It doesn't come naturally to the more introverted type of person who cares about the object-level problem and not whatever anyone else may know or doubt, I'll admit this. But slapping LLMs on it is not a great solution.
by bonoboTP
3/1/2026 at 2:53:01 PM
As someone who has written a few deeply personal articles with LLM assistance, I see the signs, and I'm almost certain this was generated off a few bullet points. The repetition and cadence strongly resemble LLM output. It's the kind of fluff that I remove from a piece, because it lacks humanity and offers little substance.
by rcvassallo83
3/1/2026 at 5:26:26 PM
The comments as well. I won't give away the tells, but HN is less and less pleasant to read. Now is the time to cherish your pockets of small-scale, high-quality forums that aren't flooded by this stuff yet.
by bonoboTP
3/1/2026 at 5:38:32 PM
How do you find those pockets?
by alex_suzuki
3/1/2026 at 5:53:38 PM
I guess talking to people and making friends helps. Online, maybe seek out Discords and befriend people, and they may tell you. Not unlike how you find cool underground clubs.
by bonoboTP
3/1/2026 at 6:49:59 PM
I do this, but it mainly leads to a lot of 1-to-1 conversations, which is fine, but a wider yet still "curated" audience would be interesting.
by alex_suzuki
3/1/2026 at 2:31:07 PM
Even the title has that unmistakable smell of punchy LinkedIn profundity.
by marginalia_nu
3/1/2026 at 2:56:24 PM
Even the LinkedIn profile has a Studio-Ghibli-style avatar. People are going to assume that he is just an "analog interface" to an LLM. Which is sad, because he might be a good programmer. In fact, I tend to see a lot of English-as-a-second-language people embrace LLMs as a kind of "equalizer", not realizing that in 2026 it is the opposite (not saying that it's right either way, just pointing out that it is becoming a kind of anti-marketing, like showing up to a conference without any clothing and getting banned from the conference permanently).
We should probably normalize publishing things in our native languages and expecting the audience to run it through a translator. (I have been toying with the idea of writing everything in Esperanto (not my native language, but a favorite) and just posting links to auto-translated English versions where the translation is good enough.)
EDIT: as someone with friends and family from Eastern Europe, I can tell you that the prevailing attitude is: "everything is bullshit anyway" (which, to be fair, has a lot of truth to it), and so it is no surprise that people would enthusiastically embrace a pocket-sized bullshit factory, hook it up to a fire hose, and start spraying. We saw it with spam, and we see it now with slop. It won't stop unless the system stops rewarding it.
by nz
3/1/2026 at 2:59:50 PM
This was my thought after getting through a few paragraphs as well. At first I was thinking, this is interesting, maybe worth sharing with colleagues. But then it became too obvious it was AI-written or "assisted". Can't take that seriously.
by jmcdl
3/1/2026 at 3:01:35 PM
AI made writing words easier. It made communicating well harder.
by neogodless
3/1/2026 at 2:59:12 PM
AI made writing blog posts easier. It made critical thinking harder.
by brobdingnagians
3/1/2026 at 4:28:02 PM
LLMs write this way because people write this way. Maybe not everyone, but enough of them to train the models to do it. Much of my writing reads like an LLM wrote it, but that doesn't make me an LLM.
by RevEng
3/1/2026 at 6:37:13 PM
Yes and no. LLMs take all the writing on the Internet (good and bad) and average it out. It's similar to the way generative AI images always have an identifiable, artificial "look". They've averaged out the personality and thereby erased the individuality that the original artists put into creating them.
by timmytokyo
3/1/2026 at 7:14:16 PM
> Much of my writing reads like an LLM wrote it
I doubt it; share something you wrote prior to, say... 2024.
by lelanthran
3/1/2026 at 5:25:37 PM
Why is this sentiment expressed so often ("It was written/edited by AI")?
It seems to bother people, perhaps because it may have been low-effort. Doesn't it not matter as long as the content is good? Otherwise, it seems no different from a standard low-quality post.
by apt-apt-apt-apt
3/1/2026 at 6:06:36 PM
The formulaic style/cadence/structure/tone is annoying, for one due to its LLM-induced prevalence, but also because it is padded and stretched without adding substance while being dyed in superficialities, and it has a weird tendency to meander through its thematic territory, like the author was slightly distracted, or is writing the same thing for the 20th time, or is missing a good editor. Pre-LLM, it might have been an okay-ish, but not great, article. Now it's just grating and makes you feel like you're wasting your time reading it.
by layer8
3/1/2026 at 7:16:31 PM
> Doesn't it not matter as long as the content is good?
"Why is everyone railing against my spam? Doesn't it not matter as long as the deal I am offering is good?"
When people don't want the spam, it is irrelevant whether the spammer is offering a good deal or not.
by lelanthran
3/1/2026 at 5:29:53 PM
When I want to read AI writing (which is not never), I chat with it myself, and I prompt it better and get more interesting stuff than this generic insight blogspam.
by bonoboTP
3/1/2026 at 6:18:34 PM
LLM prose is typically _painful_ to read, overly long, and bullshit-heavy.
by rsynnott
3/1/2026 at 2:36:50 PM
I couldn't even finish it. I picked up on it after reading the other one that made it to the front page the other day.
I don't think there will be a point in coming to this site if it's just going to be slop on the front page all the time.
Maybe mods should consider a tag or flag for AI generated content submissions?
by agentultra
3/1/2026 at 7:16:06 PM
AI writing should be labeled.
by lsc4719
3/1/2026 at 2:29:58 PM
Almost 90% of it is AI-generated text. So many paragraphs to say basically nothing at all. Like, look at this paragraph:
> Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.
The first sentence was enough to convey everything you needed to know, but it kept on adding words in that AI cadence. The entire post is filled with this style of writing, which, even if it is not AI, is extremely annoying to read.
by altmanaltman
3/1/2026 at 2:37:20 PM
What would he have written instead?
by m00dy
3/1/2026 at 2:46:18 PM
My point is that there's nothing to be written there "instead"; it's just unneeded text, added to make the piece longer, typical of AI writing that parrots the same points over and over to pad out the word count.
Here's another example from the blog:
> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
> Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
can just be:
> Most software engineers became engineers because they love writing code. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Clarity is something that is taught in every writing class, but AI-generated text always seems to have this weird cadence, as follows: The sound is loud. Not a whimper, not a roar, a simple sound that is very loud. And that's why... blah blah blah.
You have to care about your readers if you're writing something seriously. Throwing a bunch of text that all means the same thing into your writing is one of the bigger sins you can commit, and that's why most people hate reading AI writing.
by altmanaltman
3/1/2026 at 3:15:13 PM
I don't know... The part you'd like to remove ("Not managing code...") may not be required to convey the objective meaning of the sentence, but humans have emotions, too. I could have written stuff like that, to build up a bigger emotional picture.
> The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended.
This sentence may not be relevant for whatever you experience to be the relevant message of the text. But it still says something the remaining paragraph does not. And also something I can relate to.
Also, as LLMs are statistical models, one has to assume that they write like this because their training data tells them to. Because humans write like this. Not when they do professional writing maybe, but when they just ramble. Not all blogs are written by professionals. I'd say most aren't. LLM training data consists mostly of humans rambling.
I also sometimes write long comments on the internet. And while I have no example to check, I feel like I do write such sentences, expanding on details to express more emotional context. Because I'm not a robot and I like writing a lot. I think it's a perfectly human thing to do. I find it sad that "writing more than absolutely needed" is now regarded as a sign of AI writing.
by wolletd
3/1/2026 at 7:19:45 PM
> Because humans write like this. Not when they do professional writing maybe, but when they just ramble.
I keep seeing this assertion, and I keep responding "Please, point to the volume of writing with this specific cadence that has a date prior to 2024", and I keep getting... crickets!
You're asserting that this is a common way for humans to write, correct? Should be pretty easy, then, to find a large volume of examples.
by lelanthran
3/1/2026 at 9:50:53 PM
Like I said: I think I write like this on some occasions.
I wouldn't know how to search for examples. I guess you'd have to search old Reddit comment threads or something. But yeah, I have no motivation to do that, tbh. It could be that it's hard to find examples because they are scattered across countless comment threads and single posts on countless platforms; things I rarely keep links to, things nobody indexed on a large scale before LLMs.
It may be that it wasn't a very popular style of writing, because most people don't like writing a lot and keep their texts on the internet short. LLMs exaggerate this style because they generate exaggerated amounts of text in general. The style wasn't particularly annoying in the past because it wasn't that common. It's annoying now because LLMs flood the internet with it.
The quoted example in particular didn't appear uncanny to me. And it still doesn't. I can see myself writing like that. I'm sorry I have no example for you. But I'm genuinely unsure whether I'm oblivious to the patterns others see, or whether others see patterns because they want to see them.
by wolletd
3/1/2026 at 3:02:30 PM
One of the good books about writing I've read is William Zinsser's "On Writing Well". Striving for simplicity and avoiding clutter were the first two principles described in the book. AI writing feels more like rambling than communication.
by skydhash
3/1/2026 at 3:33:01 PM
Out of curiosity, how do you feel about florid and elaborate writing (e.g. Faulkner, Lispector, Mieville, Mossman, Joyce, Austen, etc.)?
by nz
3/1/2026 at 3:42:34 PM
I do not think Faulkner would write very good C++ library documentation.
I would read the hell out of Joyce's Perl 5 documentation, but only after six or seven beers.
by addaon
3/1/2026 at 6:20:18 PM
There's an art to it. Most human attempts, and every LLM attempt I've ever seen, are awful, sometimes bordering on unreadable, but, as you say, there are a relatively small number of authors who do it well. That doesn't mean that most people should do it.
by rsynnott
3/1/2026 at 4:28:23 PM
I'm a French speaker, and florid and elaborate writing is something I've grown up with. It can be difficult if you don't know the words or aren't used to the style, but it's not boring. AI writing is just repetitive.
by skydhash
3/1/2026 at 3:50:47 PM
When I've used AI for proofreading, the suggestions it makes are to cut a lot and shorten it. It also gives me examples, though never in my voice or style.
by tayo42
3/1/2026 at 4:11:16 PM
Classic LLM construction: a 5-sentence paragraph. The first sentence is a parataxis claim, followed by 3 examples in sentence fragments, missing verbs, that familiar cadence. Then the final sentence, in this case also missing a verb.
Pure AI slop.
by polynomial
3/1/2026 at 2:30:17 PM
I feel like it's such a lack of self-respect, and of respect for others, when people write using AI on personal blogs.
Reading AI code is very pleasant. It's well annotated and consistent, which is how I like to read code (although not how I write code, LOL). Reading language/opinions is not meant to be this way. It becomes repetitive, boring, and feels super derivative. Why would you turn the main way we communicate with each other into a soulless, tedious chore?
I think with coding it's because I *care* about what the robot is doing. But with communication, I care about what the person is thinking in their mind, not through the interpretation of the robot. Even if the person's mind isn't as strong. At least then I can size the person up, which is the other reason understanding each other is important, and it is ruined when you put a robot in between.
by SecretDreams
3/1/2026 at 4:18:25 PM
It's also because we (generally) consider a blog to be human communication, and we consider math and programs to be something else.
If you're talking to someone on the phone and halfway through they identify themselves as a bot, surprising you, there's a profound sense of something like betrayal. A moment ago you were having a human connection, and suddenly that vaporized. You were misled and were just talking to an unfeeling robot.
And heartfelt writing is similar. We imagine the human at the other side of the screen and we relate. And when we discover it was a bot, no matter how accurate the sentiment, that relationship vanishes.
But with math and software, it's already sterile from a human connection perspective. It's there for a different purpose. Yes, it can be beautiful, but when we read it we don't tend to build a human connection with the coder.
An interesting exception is comments. When we read the fast inverse square root code and see the "what the fuck..." comment, we instantly relate to the person writing the software. If we later learned that comment was generated by an LLM, we'd lose that connection, again.
IMHO. :)
by beej71
3/1/2026 at 5:22:54 PM
Totally agree. I'll extend this to emails and Slacks, too. I cannot stand getting AI-written slop from fellow co-workers because they couldn't write the message themselves. Do not even bother to engage with me if you need to put your thoughts through an AI first; it won't go well. People gotta work on themselves a lot more, and I think they're using AI to do the opposite.
by SecretDreams
3/1/2026 at 7:22:46 PM
> I feel like it's such a lack of self respect and respect for others when people write using AI on personal blogs.
Not so sure about the respect aspect: I have lots of self-respect, but I don't generally broadcast respect for random other people when I write my blogs - the most recent one even called readers stupid, IIRC!
I feel it's more a matter of expression of contempt: if you can't be bothered to write it, WTF are you expecting people to read it?
by lelanthran
3/1/2026 at 5:55:41 PM
Yeah, the article is 100% AI-generated according to Pangram.
by 383toast
3/1/2026 at 3:26:52 PM
It's funny how seemingly easy it is to tell that articles like this have that AI-generated whiff to them. The first bit that raised my suspicion was the "The Identity Crisis Nobody Talks About" headline. This "The X nobody talks about" construction feels like such a GenAI thing.
I hate it. I couldn't read much more after that.
by dom96
3/1/2026 at 3:45:32 PM
[dead]
by lezojeda
3/1/2026 at 3:48:47 PM
[dead]
by jordanekay