alt.hn

3/30/2026 at 12:39:56 PM

Do your own writing

https://alexhwoods.com/dont-let-ai-write-for-you/

by karimf

3/30/2026 at 7:16:59 PM

I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times when writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.

For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.

Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.

I think it's going to be awhile before the full impact of AI really works it's way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).

by roadside_picnic

3/31/2026 at 5:22:39 AM

> I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing and I realized there were major contradictions I needed to resolve.

Perhaps because writing is a third-order exercise.

The first order is thinking in one's mind: one talks with oneself. The second order is talking to someone else: we direct our thoughts toward that person. But in writing, we have to imagine the reader and then write.

https://alandix.com/academic/papers/writing-third-order-2006...

by the-mitr

3/31/2026 at 5:39:39 AM

I think writing is writing to an audience which includes yourself.

When you're thinking, you are speaking in your mind, which means you can't really listen to yourself at that same time. You don't hear yourself from yourself. You are so busy talking (in your head, to yourself) that you can't really think about what you just said to yourself. You are producing language, not consuming it.

But when you read what you have written, you can pause reading and do some thinking about what you just read. That makes it easier to understand what you are saying, and more easily see logical errors or omissions in it.

by galaxyLogic

3/31/2026 at 2:26:40 PM

I think this is correct. I told a coworker that when I edit my email drafts they get shorter. He was surprised and said that his get longer. I trim and refine. Sure, I add details that I missed at first. But I also create better structure and remove ambiguity or unnecessary words.

Yesterday, I was working on an email for someone who I was trying very hard not to overwhelm with technical details. I cut it roughly in half in terms of words, but I also turned paragraphs into single lines of sequenced steps or concise statements, without decorating the text with unneeded aphorisms / commentary.

I was pretty pleased with the end result. This is only possible because of careful rereading and reflection (including knowing my intended audience). I imagine an LLM can approximate this, but I don't trust one to craft with the same level of care. Then again, we all think we're better than the robots at the things we care about most.

I understand the urge to throw mechanical writing at the bots. But a human will grasp the need to add a detail explaining the why of something when (the current) bots gloss over it. There's still nuance worth preserving.

by alsetmusic

3/31/2026 at 8:05:58 AM

> That makes it easier to understand what you are saying, and more easily see logical errors or omissions in it.

Rubber ducking with a pencil, kinda.

by zimpenfish

3/31/2026 at 4:44:25 AM

So often have I started writing a commit message about why I’d done something this way, and realised a problem or thought of another approach, and ended up throwing away the entire change and starting from scratch.

(Aside: you should probably write longer commit messages.)

by chrismorgan

3/31/2026 at 5:07:42 AM

Corollary: write the commit message first, implement things later! Not a joke; this is almost how TDD works. (TDD writes formal tests, which is much more involved.)

by nine_k

3/31/2026 at 9:09:06 AM

Documentation-driven development.

by teddyh

3/31/2026 at 1:24:58 PM

Second this. After having a chat with some coworkers, I've been attempting to write more thoughtful and longer commit messages. I had this exact experience yesterday after I had staged my commit and was writing the message. I realized that there was a better way to do this change and redid the whole thing.

by sudo_rm

3/31/2026 at 2:29:51 AM

I'm in slight disagreement with our CTO about the value of writing acceptance criteria yourself. When I write my own acceptance criteria, it's a useful tool forcing me to think through how the system ought to work. Definitely in agreement that writing is an important tool for clarifying thinking, not just generating context.

by gburgett

3/31/2026 at 1:14:52 AM

>I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times when writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

I read somewhere that thinking, writing, and speaking engage different parts of your brain. Whatever the mechanism, I often resolve issues midway through writing a report on them.

by protocolture

3/31/2026 at 1:27:18 AM

I noticed that when speaking on a subject I tend to explain it in simple terms, but when writing, I tend to get bogged down in details, pedantry and technical language.

I started publishing my writing recently and I too often fall back into "debugging my mental model" mode, which while extremely valuable for me, doesn't make for very good reading.

I guess the optimal sequence would be to spend a few sessions writing privately on a subject, to build a solid mental model, then record a few talks to learn to communicate it well.

Similarly, journaling on paper and with voice memos seems to give me a different perspective on the same problem.

by andai

3/31/2026 at 5:23:23 AM

I do this too, and I find it a great use for my LLMs. I write the full detail as part of integrating. Claude or Copilot helps me craft the communication versions.

by ohthehugemanate

3/31/2026 at 6:00:15 PM

> I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times when writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

I've found that I instinctively try to work around this by thinking in an explicit inner voice; i.e. I imagine that I can hear my thoughts put into words and spoken to myself. (Actually speaking aloud is somehow too embarrassing, even if nobody is around to hear.)

It still doesn't quite seem to work as well as actually writing.

by zahlman

3/31/2026 at 11:42:21 AM

The Feynman method of solving problems puts a similar emphasis on writing:

1. Write down the problem

2. Think really hard

3. Write down the solution

It's a bit tongue-in-cheek, but there is also truth to it. Step 1 is not optional and actually very important.

by mr_mitm

3/30/2026 at 11:55:34 PM

nicely distilled.

however, the education system has done a disservice to how critical thinking actually happens.

when you write, then try to edit your thoughts (written material), the editing part helps you clarify things and be honest with yourself, i.e. whether you're bullshitting yourself and want to continue, or choose another path.

the other part - in a world of answers - critical thinking is a result of asking better questions.

writing helps one to ask better questions.

preferably if you write in a dialogue style.

by dzonga

3/31/2026 at 1:03:14 AM

If you drop the premise of writing, drop the premise that you need something well written. Just give me the same information you would have given the LLM.

by ori_b

3/31/2026 at 1:42:40 AM

But a prompt that isn't well written is not a good prompt. What are you really going to do with a shit prompt? It's meta: we need better writers all the way down.

by apsurd

3/31/2026 at 7:07:22 AM

Whatever the prompt is, it is still the only information of value reflecting actual decisions made.

Everything coming out of an LLM on any prompt is either someone else's decisions or the same thing reworded in a different way.

by necovek

3/31/2026 at 2:13:09 AM

Yes. But if it's good enough for an LLM it's good enough for me.

If you really feel the need, you can attach the LLM output as an appendix. I probably won't read it.

by ori_b

3/31/2026 at 5:03:26 AM

Do you really want five minutes of audio of me rambling, then some instructions for how to split it up and organise it?

Plenty of people have LLMs make text longer, but writing a short, accurate text with the essential points is much harder.

by tomjen3

3/31/2026 at 7:01:34 AM

What is the difference between you putting your 5 minute monologue into the LLM to summarize it versus me doing it?

by anon-3988

3/31/2026 at 4:39:14 PM

I know what I'm trying to say, so I can sanity check the output. You can't, unless you listen to the monologue.

That's why I disagree with people that say "just give me whatever you gave the LLM." That's only useful if you, the writer of the prompt, have no intention of looking at the LLM output before sending it.

by antasvara

3/31/2026 at 3:20:07 AM

Do you really want to read the whole conversation between the author and the computer? I don't use AI to write prose, but if I did, I'd treat it like a critical editor, so reading all of that would not save you time.

by esafak

3/31/2026 at 8:20:23 AM

Any time I stumbled on AI writing, whether in comments, work, or articles, it was painfully obvious that not a single person had read it, including the author.

by yoz-y

3/30/2026 at 9:44:30 PM

For your context, I'm an AI hater, so understand my assumptions as such.

> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.

Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?

It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.

by delusional

3/31/2026 at 8:23:54 AM

For release notes in particular, I think AI can have value. This is because, more than a summary, release notes are a translation from code and more or less accurate summaries into prose.

AI is good at translation, and in this case it can have all the required context.

Plus it can be costly (time and tokens) both to "prompt it yourself" and to read the code and all the commit logs.

by yoz-y

3/31/2026 at 7:11:20 AM

To me, the value I would look to extract from LLMs is turning code changes into user-readable, concise release notes.

If you are coding with the help of LLMs, then release notes are your human-crafted prompt.

Basically, the intent is given as a decision somewhere, and that is human driven.

by necovek

3/31/2026 at 12:01:08 AM

What better solution do you have in mind? This scenario is AI being used as a tool to eliminate toil. It’s not replacing human creativity, or anything like that.

If you have a problem with that, then you should also have a problem with computers in general.

But maybe you do have a problem with computers - after all, they regularly eliminate jobs, for example. In that case, AI is only special in its potentially greater effectiveness at doing what computers have always been used to do.

But most of us use computers in various ways even if we have qualms about such things. In practice, the same already applies to AI, and likely will for you too, in future.

by antonvs

3/31/2026 at 1:08:29 AM

It's not eliminating toil, it's externalizing it from the writer to the reader.

If writing something is too tedious for you, at least respect my time as the reader enough to just give me the prompt you used rather than the output.

by bandrami

3/31/2026 at 1:44:11 AM

In a lot of my AI assisted writing, the prompt is an order of magnitude larger than the output.

Prompt: here are 5 websites, 3 articles I wrote, 7 semi-relevant markdown notes, the invitation for the lecture I'm giving, a description of the intended audience, and my personal plan and outline.

Output: draft of a lecture

And then the review, the iteration, feedback loops.

The result is thoroughly a collaboration between me and AI. I am confident that this is getting me past writer blocks, and is helping me build better arcs in my writing and lectures.

The result is also thoroughly what I want to say. If I'm unhappy with parts, then I add more input material, iterate further.

I assure you that I spend hours preparing a 10-minute pitch. With AI.

(This comment was produced without AI.)

by specvsimpl

3/31/2026 at 3:07:55 AM

Because it’s not totally clear from your comment: what part are you contributing in this process?

by jimbokun

3/31/2026 at 2:02:49 AM

Great example. Just give me the links you would give to the LLM. I also have an LLM and can use it if I want to, or I can read the links and notes. But I have zero interest in reading or hearing a lecture that you yourself find too tedious to write.

by bandrami

3/31/2026 at 3:02:25 AM

Performative nonsense.

You have less interest in sifting through multiple articles and wiki pages sent to you by a stranger with a prompt than in the one paragraph the same stranger selected as their curated point.

And pretending like you’d act otherwise is precisely the kind of “anti ai virtue signaling” that serves as a negative mind virus.

AI is full of hype, but the delusion and head-in-the-sand reactions are worse by a mile.

by aeon_ai

3/31/2026 at 3:16:29 AM

Then let him curate it as his central point. If he finds even that too tedious to do, I absolutely have no interest in reading the output of a program he fed the context to (particularly since I also have access to that program).

by bandrami

3/31/2026 at 3:57:20 PM

> And pretending like you’d act otherwise

No pretending here. I don't ever ask an LLM for a summary of something which I then send to people, because I have more respect for my co-workers than that. Nor do I want their (almost certainly inaccurate) LLM summary. It's the 2020s equivalent of "let me Google that for you": I can ask the bag of words to weigh in myself; if I'm asking a person it's because I want that person's thoughts.

by bigstrat2003

3/31/2026 at 5:55:04 AM

The original comment was saying that the AI would be both the writer now and the reader, in future. That's how the toil is eliminated. Instead of reading or searching through a series of release notes, you can just ask questions about what you're specifically looking for.

> If writing something is too tedious for you, at least respect my time as the reader

"If comprehending something is too tedious for you..."

Seriously, don't jump to indignant rhetoric before you're sure you've understood the discussion.

by antonvs

3/31/2026 at 6:21:23 AM

In this scenario the AI writer is redundant.

You might as well publish the prompt you were going to give to the writer and have the AI reader consume that directly.

Assuming you think any of this is a good idea, of course. Personally, I wouldn't trust AI to interpret release notes for anything that I cared about.

by funnybeam

3/31/2026 at 8:34:14 AM

I responded to a similar point here: https://news.ycombinator.com/item?id=47584324

The original commenter was essentially describing something similar to what good agent harnesses already rely heavily on.

by antonvs

3/31/2026 at 6:27:38 AM

What's the point of the AI writer in that use case? Just send your prompt to my AI. And for that matter since prompting is in plain English, why not just send your prompt directly to me, and I'll choose to prettify it through an AI or not as I prefer.

by bandrami

3/31/2026 at 8:32:18 AM

The point is that it summarizes the context. It's an important optimization, because context and tokens are both limited resources. I do something similar all the time when working with coding models: once you've done a bunch of work, you ask it to summarize it to the AGENTS.md file.

The more fully automated agents rely heavily on this approach internally. The best argument against it is that good harnesses will do something like this automatically, so you don’t need to explicitly do it.

Sending you the prompt wouldn’t help at all, because you’d have to reconstruct the context at the time the notes were written. Even just going back in version control history isn’t necessarily enough, if the features were developed with the help of an agent.

by antonvs

3/31/2026 at 8:35:19 AM

But I also have access to an AI that can summarize content. So why not just send me the content and the prompt you used? Or just the content, so I can summarize it however I want?

by bandrami

3/31/2026 at 2:21:39 AM

[dead]

by heyethan

3/31/2026 at 7:32:32 AM

The obviously better solution is to either a) not write those release notes, or b) figure out a release notes format and process that leads to useful release notes. Once it is useful, you can decide whether or not to automate it, and measure whether the automation is still achieving the goal.

What OP did was "we lacked communication, then created an ineffective process that achieved nothing, so we automated the ineffective process and pay a third party for doing it".

If you pay tokens for release notes that nobody reads, then you may just ... not pay tokens.

by watwut

3/30/2026 at 10:29:04 PM

This is kind of a fundamental issue with release notes. They are broadcasting lots of information, and only a small amount of information is relevant to any particular user (at least in my experience).

If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no brainer. So it seems reasonable to me to have an AI do that for me as well.

by dahfizz

3/31/2026 at 1:05:35 AM

I read a lot of release notes in my job and the idea that that is some kind of noticeable time sink that needs to be streamlined is bizarre to me. Just read the notes.

by bandrami

3/31/2026 at 7:13:44 AM

If your assistant is technical enough to know which parts apply to you and which do not, they likely don't need you to do the rest of the job either.

An LLM could do this by looking over the full codebase and release notes and producing a shorter summary, but probably at the cost of many tokens today.

by necovek

3/31/2026 at 3:04:49 AM

Or you could Ctrl-F.

by jimbokun

3/31/2026 at 12:26:13 PM

Sometimes PRDs might be boilerplate, but there have been times when I sat down thinking "I can't believe these dumbasses want to foo a widget", but when writing the user story I get into their heads a little and realize that widgets are useless if they can't be foo'd. It's not the same if AI is just telling me, because amongst the fire hose of communication and documentation flying at me, AI is just another one. Writing it myself forces me to actually engage, even if only a little more than at a shallow level.

by clickety_clack

3/31/2026 at 11:42:59 AM

> I've long considered writing to be the "last step in thinking".

I think it's often useful to use writing all the way through the process of thinking through something, rather than just at the end.

by jamesrcole

3/31/2026 at 12:41:20 PM

Yes, not all "writing" is actually thinking, and a lot of what we call writing at work is really just ritualized context transfer.

by KolibriFly

3/30/2026 at 9:55:40 PM

> writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking

You see the same thing in teaching, perhaps even more because of the interactive element. But the dynamic in any case is the same. Ideas exist as a kind of continuous structure in our minds. When you try to distill that into something discrete you're forced to confront lingering incoherence or gaps.

by user3939382

3/30/2026 at 11:51:57 PM

I never learned a subject faster than when I was suddenly forced to teach it!

by WCSTombs

3/31/2026 at 12:13:25 AM

Same here. And when you encourage students to ask good questions, that goes double ... you're forced to see how important their new perspectives are, and to create your own!

by 8bitsrule

3/31/2026 at 9:11:37 AM

I think of it not as the "last step in thinking", but as "first contact with reality". Your mind is amazing at lying to you, filling in gaps and telling you everything is ok. The moment you try to export what's in your mind, the math stops mathing. So writing is an important exercise.

by figassis

3/30/2026 at 10:10:06 PM

> agent write release notes for your agent in the future...

I have been going back to verbose, expansive inline comments. If you put the "history" inline, it is context; if you stuff it off in some other system, it's an artifact. I can't tell you how many times I have worked in an old codebase that references a "bug number" in a long-dead tracking system.

by zer00eyz

3/30/2026 at 10:32:31 PM

But how do you deal with communicating that some library you maintain has a behavior change? People already need to know to look at your code in order to read your comments.

by dahfizz

3/30/2026 at 11:49:16 PM

> communicating ... People

End users? Other Devs? These two groups are not the same.

As an end user of something, I don't care about the details of your internal refactor, only the performance, features, and solutions. As a dev looking at the notes, there is a lot more I want to see.

The artifact exists to inform about what is in this version when updating. And it can come easily from the commit messages, and be split for each audience (user/dev).

It doesn't change the fact that once you're in the code, that history, inline, is much, much more useful. The commit message says "We fixed a performance issue around XXX". The inline comment is where you can put the reason FOR the choice made.

One comes across this pattern a lot in dealing with data (flow) or end user inputs. It's that ugly chain of if/elseif/elseif... that you look at and wonder "why isn't this a simple switch", because that's what the last two options really are. Having clues as inline text is a boon to us, and to any agent out there, because it's simply context at that point. Neither of us has to make a (tool) call to look at a diff, or a ticket, or any number of other systems that we keep artifacts in.
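
To make that concrete, here is a minimal, hypothetical sketch of the pattern; the domain, the names, and the "legacy partner" backstory are all invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Event:
        source: str
        raw_amount: str

    def normalize_amount(raw: str) -> float:
        # Hypothetical quirk: legacy partners send comma-decimal
        # strings like "12,50".
        return float(raw.replace(",", "."))

    def route(event: Event) -> float:
        # Why isn't this a simple switch/match? The first branch
        # predates input validation: legacy partners still send
        # amounts as comma-decimal strings, so they need their own
        # normalization path. Keeping that reason here, inline, means
        # neither a future dev nor an agent has to chase a ticket
        # number in a long-dead tracker.
        if event.source == "legacy_partner":
            return normalize_amount(event.raw_amount)
        elif event.source == "mobile":
            return float(event.raw_amount)
        elif event.source == "web":
            return float(event.raw_amount)
        raise ValueError(f"unknown source: {event.source}")

The last two branches really could collapse into one, but the inline "why" on the first branch is the part that survives tracker migrations.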

by zer00eyz

3/31/2026 at 6:46:03 AM

> I think it's going to be awhile before the full impact of AI really works it's [sic] way through how we work.

This was definitely not written by AI. Granted their many drawbacks, present-day AI engines avoid this classic grammatical error.

However! Future, more advanced AI engines will slather their prose with this kind of error, to conceal its origins.

by lutusp

3/30/2026 at 10:24:11 PM

> For these cases, I think we should just drop the premise altogether that you're writing.

Sure.

> If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.

No, no, no. You don't need to take that step. Whatever bullet-point list you're feeding in as the prompt is the relevant artifact you should be producing and adding to the bug, or sharing as an e-mail, or whatever.

by vkou

3/31/2026 at 1:43:32 PM

[dead]

by bhekanik

3/30/2026 at 7:27:08 PM

I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."

It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.

by nerevarthelame

3/30/2026 at 7:31:04 PM

I have found one of the better use cases of LLMs to be a rubber duck.

Explaining a design, a problem, etc., and trying to find solutions is extremely useful.

I can bring novelty; what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.

by dummydummy1234

3/30/2026 at 7:48:18 PM

I always find folks bringing up rubber ducking as a thing LLMs are good at to be misguided. IMO, what defines rubber ducking as a concept is that it is just the developer explaining what they're doing to themselves. Not to another person, and not to a thing pretending to be a person. If you have a "two way" or "conversational" debugging/designing experience, it isn't rubber ducking; it's just normal design/debugging.

The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity, which an LLM by definition does not.

by monkaiju

3/30/2026 at 7:58:48 PM

Sometimes I don't want creativity though, I'm just not familiar enough with the solution space and I use the LLM as a sort of gradient descent simulator to the right solution to my problem (the LLM which itself used gradient descent when trained, meta, I know). I am not looking for wholly new solutions, just one that fits the problem the best, just as one could Google that information but LLMs save even that searching time.

by satvikpendem

3/31/2026 at 1:09:54 AM

> I'm just not familiar enough with the solution space

Neither is the LLM

by bandrami

3/31/2026 at 6:46:49 AM

(Trying to find where you might still see this)

I've read the thread and in my mind you're missing that LLMs increase the surface area of visibility of a thing. It's a probe. It adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as it does), it's really just topography expansion. We're getting in the weeds of creativity and idea synthesis, but if something is net-new to you right now in your topography map, what's so bad about attributing relative synthesis to the AI?

by apsurd

3/31/2026 at 8:33:50 AM

Because if that's it, we've made a ludicrously expensive I Ching.

by bandrami

3/31/2026 at 3:16:18 AM

No, this is the kind of thing LLMs are very good at. Knowing the specifics and details and minutiae about technologies, programming languages, etc.

by jimbokun

3/31/2026 at 6:54:25 AM

Oh Lord, no. Not at all. That's what they're terrible at. They are ok-ish at superficial overviews and catastrophically bad at specific minutiae.

by bandrami

3/31/2026 at 2:49:28 PM

Honest, non-confrontational, non-passive aggressive question: Have you used any of the latest models in the last 6 months to do coding? Or frankly, in the last year?

by darth_aardvark

3/31/2026 at 3:45:20 PM

They note in another comment they don't even use search engines so I don't think they're the right person to ask regarding frontier models.

by satvikpendem

3/31/2026 at 6:03:55 PM

I'd ask them what tools they do use, but I doubt they'll see my comment; I'll see if I can mail it to them.

by darth_aardvark

3/31/2026 at 9:12:44 PM

(Why wouldn't I see your comment?)

I just don't use the web much anymore because the experience has degraded so much over the past several years, and it has become decreasingly useful at work as well. I do sometimes need to search for a document and find Kagi pretty good for that, but the old way of using a search engine to kind of explore and discover stuff just isn't viable anymore, unfortunately.

I administer software for a living, so I read a lot of documentation for that software, but it comes with the software, so I don't ever really need to search for it; I also read and participate in some forums and use the relevant IRC channels.

by bandrami

3/31/2026 at 4:00:40 PM

I have. And the people who say "use a frontier model" are full of it. The frontier models aren't any better than the free ones.

by bigstrat2003

3/31/2026 at 5:12:51 PM

What are you defining as free versus frontier, and for what purpose? For coding there is a big difference between Opus and GPT 5.3/4 versus Sonnet and other models such as open weight ones.

by satvikpendem

3/31/2026 at 9:09:36 AM

If there is something LLMs are good at it's knowing some obscure fact that only 10 other people on this planet know.

by imtringued

3/31/2026 at 8:20:57 PM

They're also very good at almost knowing an obscure fact that only 10 people know, but getting a detail catastrophically wrong about it.

by bandrami

3/31/2026 at 1:24:21 AM

Oftentimes it is though, good enough for my purposes.

by satvikpendem

3/31/2026 at 1:29:29 AM

If you're not familiar with the problem space, by definition you don't know whether or not that's the case. The problem spaces I do know well, I know the LLM isn't good at it, so why would I assume it's better at spaces I don't know?

by bandrami

3/31/2026 at 1:34:51 AM

I said familiar enough, not familiar. For example, let's say I'm building an app I know needs caching. The LLM is very good at telling me what types of caching to use, what libraries to use for each type, and so on, for which I can do more research if I really want to know which library out of all the rest is the best, but oftentimes its top suggestion is, like I said, good enough for my purpose of e.g. caching.

by satvikpendem

3/31/2026 at 1:39:59 AM

I still don't get what you're saying. If you possess enough information to accurately judge the LLM's suggestions you possess enough information to decide on your own. There's not really a way around that.

by bandrami

3/31/2026 at 1:46:30 AM

Of course I'm deciding on my own; I'm not letting the LLM decide for me (although some people do). But the point is that whatever the suggestion is, it's merely an implementation detail that either solves my problem or not; I'm not sure what part of that is confusing. Replace LLM with glorified Google and maybe it's less confusing.

by satvikpendem

3/31/2026 at 1:49:53 AM

No, Google (at least back when it worked) ranked results based on the feedback of other users, so it was a useful signal.

by bandrami

3/31/2026 at 1:54:52 AM

Theoretically the LLM would weight more popular suggestions more heavily too. Regardless, you're reading too much into this: either use the LLM or don't; I'm not sure if someone else can convince you. As I said, for my purposes of getting shit done it works perfectly fine, and it works more like a research tool than anything else, especially since it can understand my specific use case, unlike general research tools like Google or Stack Overflow.

by satvikpendem

3/31/2026 at 3:17:24 AM

IDK man, this sounds a lot like my junior devs saying "it works fine for me" as they hand in PRs that break prod.

by bandrami

3/31/2026 at 3:20:27 AM

If you don't review the code it generates then that's still on you. There isn't an excuse for handing in breaking PRs like your juniors. It's a tool at the end of the day and it's the responsibility of the user to utilize it correctly.

by satvikpendem

3/31/2026 at 3:17:50 AM

Do you use search engines or do you just memorize all the world’s information?

by jimbokun

3/31/2026 at 3:52:57 AM

I don't use search engines for much of anything nowadays (does anybody still?). At work I read documentation if I need to learn something.

by bandrami

3/31/2026 at 9:25:33 AM

This is a very strange and contradictory situation. I'm not sure there's any point in engaging with you, since there is nothing but a stream of weak dismissals farming for engagement.

You dismiss LLMs because of factual inaccuracy, which is fair, but now you're doubling down on an anti-search-engine stance, which is weird, because the modern substitute is letting LLMs either use search engines on your behalf or learn the entire internet with some error, and you've dismissed both.

Yes, I'm the "backwards" guy who still uses search engines. We still exist.

by imtringued

3/31/2026 at 2:50:20 PM

I've noticed that HN can attract some of the most extreme people I've ever seen, and I suppose there is precedent in the tech world; I'm reminded of the story of Stallman not using a browser but instead sending webpages to his email, where he then reads the content. It's literally nonsensical for 99.9999% of the population, and I've read similarly absurd things on HN as well.

This person not using LLMs is fine, and I understand the argument, like you said, but the doubling down on not using search engines either makes me not take anything they say seriously. Not to be too crass, but it reminds me of this situation on the nature of arguing on the internet [0].

[0] https://www.reddit.com/r/copypasta/comments/pxb2kn/i_got_int...

by satvikpendem

3/30/2026 at 9:15:15 PM

Maybe it’s just a semantic distinction, which, sure. I guess I’d just call it research? It’s basically the “I’m reading blogs, repos, issue trackers, api docs etc. to get a feel for the problem space” step of meaningful engineering.

But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I’m using it to help frame my thinking but I’m the one making the decisions. And I’m intentionally keeping context in my brain, not the LLM, by not exposing my workspace to it.

by Waterluvian

3/31/2026 at 12:02:28 AM

Absolutely, the whole point of the rubber duck is that it's inanimate. The act of talking to the rubber duck makes you first of all describe your problem in words, and secondly hear (or read) it back and reprocess it in a slightly different way. It's a completely free way to use more parts of your brain when you need to.

LLMs are a non-free way for you to make use of less of your brain. It seems to me that these are not the same thing.

by WCSTombs

3/30/2026 at 7:55:39 PM

Sometimes people just need something else to tell them their ideas are valid. Validation is a core principle of therapeutic care. Procrastination is tightly linked to fear of a negative outcome. LLMs can help with both of these. They can validate ideas in the now which can help overcome some of that anxiety.

Unfortunately they can also validate some really bad ideas.

by dexwiz

3/30/2026 at 9:09:39 PM

I feel I've had the most success with treating it like another developer. One that has specific strengths (reference/checklists/scanning) and weaknesses (big picture/creativity). But definitely bouncing actual questions that I would say to a person off it.

by jrowen

3/31/2026 at 5:28:20 AM

My understanding was that rubber ducking was using a different portion of your brain by speaking the words.

The same discovery often happens when you explain a problem to a coworker and midway through the explanation you say "nvm, I know what I did wrong"

by __mharrison__

3/31/2026 at 5:22:59 AM

Do you not know any people who can help? Suddenly realised how lonely this sounds.

by globular-toast

3/31/2026 at 5:38:22 AM

Coordinating with people is hard and only gets harder as you live. And actually, finding someone who is earnestly receptive to hearing you pitch your half-baked startup ideas (just an example) and is in some capacity qualified to be at all helpful is, uhhh, not easy.

by apsurd

3/31/2026 at 5:48:51 AM

Really? Sometimes I think I'm not very social; then I read something like this. Don't you have any friends? Colleagues? Maybe that's the problem you need to solve, rather than sitting in a room burning energy for endless token streams with LLMs that anyone has access to?

by globular-toast

3/31/2026 at 6:39:38 AM

Ah, I couldn't help practicing my creative writing in the other reply. This reply is more constructive:

Both LLM-based rubber-ducking and human discussions seem like a win-win. I see no reason to jump to diagnosing unhealthy social connections just for pairing with LLMs.

by apsurd

3/31/2026 at 6:05:58 AM

lol. nobody is proposing this "well if not friends, then...". Appreciate your concern. I am fine.

This is for Internet posterity: thought-partnering with AI does not in fact make you a sorry socially inept loser that needs globular-toast to come in and help you dial that helpline.

Also: one's friends do not, in reality, want to thought-partner about work issues, esoteric hobbies, and that million-dollar idea.

Also: these friends, every and any one of them it seems, will not in fact speak the word of God into your ear as manifest insight for said work issue, million-dollar idea, and so forth.

by apsurd

3/30/2026 at 7:53:48 PM

I'm torn.

I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.

Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?

I guess I must feel it's slightly useful overall as I still do it.

by furyofantares

3/30/2026 at 8:20:23 PM

Mainstream ideas are often good. That's why they're mainstream. Being different for being different isn't a virtue.

That being said I don't think LLMs are idea generators either. They're common sense spitters, which many people desperately need.

by raincole

3/30/2026 at 10:50:55 PM

"by design, the recommendations will be average"

This couldn't be more wrong. The simplest refutation is just to point out that there are temperature and top-k settings which, by design, generate tokens (and by extension, ideas) that are less probable given the inputs.
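
For anyone unfamiliar, here is a minimal sketch of what those two settings do at the sampling step (illustrative only; the simplified softmax and the variable names are mine, not any particular library's API):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, top_k=50):
        # Temperature rescales the logits: >1 flattens the distribution
        # (more surprising picks), <1 sharpens it (more "average" picks).
        scaled = logits / temperature

        # Top-k keeps only the k highest-scoring tokens; the rest get
        # probability zero.
        keep = np.argsort(scaled)[-top_k:]
        masked = np.full_like(scaled, -np.inf)
        masked[keep] = scaled[keep]

        # Softmax over the survivors, then sample an index.
        probs = np.exp(masked - masked.max())
        probs /= probs.sum()
        return np.random.choice(len(logits), p=probs)

Crank the temperature above 1 and you will routinely sample tokens the model itself considers unlikely, which is exactly the opposite of "average by design".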

by samusiam

3/30/2026 at 8:31:37 PM

I think it's just a confusing use of the term "generating." It's thinking of the LLM as a thesaurus. You actually generate the real idea -- and formulate the problem -- it's good at enumerating potential solutions that might inspire you.

by jrowen

3/31/2026 at 3:21:17 AM

That can be very valuable.

by jimbokun

3/30/2026 at 7:41:55 PM

All LLM output is always dry as fuck quite frankly. At all levels from ideas and concepts through to the actual copy. And that’s dotted with pure excrement.

I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.

If I offend anyone I will not be apologising for it.

by dgxyz

3/31/2026 at 8:37:27 PM

> If I offend anyone I will not be apologising for it.

What you said is simply counterfactual, so no reason to be offended.

by BoredomIsFun

3/30/2026 at 8:31:18 PM

Agreed! No LLM is producing Pynchon, Calvino, Borges, Castaneda, Le Guin, Vonnegut.

by gavmor

3/30/2026 at 9:05:48 PM

I think that’s an unfair comparison. It can’t even produce Mills and Boon trash.

by dgxyz

3/30/2026 at 9:33:22 PM

But can they produce Tom Clancy or James Patterson?

by fcarraldo

3/31/2026 at 1:28:18 AM

Malcolm Gladwell

by lioeters

3/31/2026 at 2:51:17 PM

> curled out

This is the kind of understated yet thoroughly disgusting imagery an LLM couldn't come up with on its own, great example.

by darth_aardvark

3/31/2026 at 12:43:10 PM

So maybe the framing is: LLMs are good at mapping the landscape, but not at discovering new continents.

by KolibriFly

3/30/2026 at 7:48:42 PM

Yes, I didn't get this portion at all. I feel as though letting an LLM brainstorm ideas for you would do more to externally frame your thoughts than letting it write/proofread for you. If you pick one idea out of the 10 presented by the LLM, you are still confining yourself to the intersection of what the LLM thinks is important and what you think is important, because then you can never "generate" a thought that the LLM hasn't presented.

by NewsaHackO

3/31/2026 at 4:36:44 AM

Having to fix the LLM’s recommended solution is a good exercise though.

It’s like being a smart-ass for the right reasons, without any social consequences.

by ozozozd

3/30/2026 at 7:44:45 PM

LLMs can sometimes come up with novel or non-obvious insights... or just regurgitate Google-like results.

by paulpauper

3/30/2026 at 7:38:01 PM

Asking the LLM in a better way will return results that are better than the average, bland, and mainstream ones.

by j45

3/30/2026 at 7:42:42 PM

How does one ask better? Does better vary per model?

by paulryanrogers

3/30/2026 at 9:14:56 PM

Yes, it's context-based.

It's like asking a coworker. Providing too little information, or too much context, can give different responses.

Try asking the model not to provide its most common or average answer.

Been using it this way for 2, almost 3 years.

by j45

3/30/2026 at 7:44:37 PM

Why would they return "better" results?

by contagiousflow

3/30/2026 at 9:16:08 PM

Because AI is not a search engine. It does not return the best search result every time.

What it considers best is what occurs most often, which can be the most average answer. Unless the service is tuned for search (Perplexity, or Google itself, for example), others will not provide as complete an answer.

How well we ask can make all the difference. It's like asking a coworker. Providing too little information, or too much context, can give different responses.

Try asking the model not to provide its most common or average answer.

Been using it this way for 2, almost 3 years.

by j45

3/30/2026 at 7:13:55 PM

> When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.

This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.

The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.

by Aurornis

3/31/2026 at 1:15:07 AM

Lots of people want to use LLMs to produce things and nobody wants to consume the things LLMs produce. The market-clearing solution is to have some mechanism by which the producers pay the rest of us to consume the products. Whoever comes up with that framework will probably do very well.

by bandrami

3/31/2026 at 2:21:45 AM

They are basically destroying markets where there is value in producing content. No one wants to consume your LLM slop book, but if you spam Amazon with thousands and thousands of AI-generated books, you will end up making a profit tricking some people into buying them. Meanwhile, the real books struggle to be seen in the mountains of shit.

by Gigachad

3/30/2026 at 9:16:24 PM

For me, drawing the line as to when you will leverage AI and when you won't comes down to a quote from Kurt Vonnegut: "Practicing an art, no matter how well or badly, is a way to make your soul grow, for heaven's sake. Sing in the shower. Dance to the radio. Tell stories. Write a poem to a friend, even a lousy poem. Do it as well as you possibly can. You will get an enormous reward. You will have created something."

Art is where I choose to draw the line, for both ideation and content generation. That work report I leveraged AI to help flesh out isn't art, but my personal blog is, as is anything I must internalize (that is, thoroughly understand and remember). This is why I have the following disclaimer on my blog (and yes, the typo on this page is purposeful!): https://jasoneckert.github.io/site/about-this-site/

by jasoneckert

3/30/2026 at 9:22:55 PM

This is the second time today that I've come across this quote. I'm always happy to see Vonnegut in the wild. Let alone two times in a day!

by mat0

3/30/2026 at 7:14:45 PM

The title of this article is Don't Let AI Write For You, when its point seems to be closer to Don't Let AI Think For You (see "Thinking").

This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, or (b) in every situation.

Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4

This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.

From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.

by CharlesW

3/30/2026 at 9:17:50 PM

> Audio can be a great way to capture ideas and thought processes ... This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.

Yes, this is my process:

Record yourself rambling out loud, and import the audio in NotebookLM.

Then use this system prompt in NotebookLM chat:

> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove filler words. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.

Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.

This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.

by rrherr

3/30/2026 at 8:33:31 PM

Writing is, however, a uniquely distinct and well-studied way to facilitate thinking.

I've definitely lost something since migrating my Artist's Way morning pages to the netbook. (Worth it, though, to enable grep—and, now, RAG).

by gavmor

3/30/2026 at 7:56:44 PM

I would count direct dictation (eg someone writes down what you say, and that is the final text), as writing, in the context of producing a document (book, etc) that you intend others to read.

It's not the same thing as talking to someone (or a group) about something.

by lokar

3/30/2026 at 8:01:13 PM

I'm finding AI great to have a conversation with to flesh out ideas, with the added benefit that it can summarize everything at the end.

by keithnz

3/30/2026 at 8:05:18 PM

You're being steered without being aware of it.

by tines

3/30/2026 at 8:11:24 PM

Worse. You’re being steered along a circle

by whiplash451

3/31/2026 at 1:54:29 AM

Maybe they are aware of it?

I talk to other people. They influence me, steer me. I am okay with that.

by specvsimpl

3/31/2026 at 12:57:33 AM

not at all, it's very productive.

by keithnz

3/30/2026 at 8:22:49 PM

I do this a lot. Start by telling the AI to just listen and only provide feedback when asked. Lay out your current line of thinking conversationally. Periodically ask the AI to summarize/organize your thoughts "so far". Tactically ask for research into a decision or topic you aren't sure about and then make a decision inline.

Then once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally, I have the AI draft up a document (though you generally have to tell it to be as concise and clear as possible).

by XenophileJKO

3/31/2026 at 1:03:14 AM

I usually create a document/folder with my thinking on what I want to do, any background information that is relevant, conversations on the topic, technical manuals, links, etc. Then I enter a conversation, explore the problem space, and do something very similar to what you are doing.

by keithnz

3/30/2026 at 8:51:33 PM

Yeah, this is my problem. I can come up with ideas, but in writing, my ideas never come out well. AI has helped me to express my ideas better. People who write well or are successful at writing sometimes fail to understand how uncommon it is to actually be good at writing. Shit is hard.

by paulpauper

3/31/2026 at 1:29:19 AM

Writing (unassisted) is probably the first step towards your own independent thoughts.

I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds with something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

I think a diversity of opinion is important for society. I'm worried that LLM's are going to group-think us into thinking the same way, believing the same things, reacting the same way.

I wonder if future children will need to be taught how to purposely have their own opinions; being so used to always asking others before even considering things on their own? The LLM will likely reach a better conclusion than you would on your own, but there is value in diverging from the consensus and thinking your own thoughts.

https://stephencagle.dev/posts-output/2025-10-14-you-should-...

by stephen_cagle

3/31/2026 at 2:27:44 AM

> I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds with something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

The scene you mentioned (amazing movie and holds up to this day) with the Major and Togusa:

https://youtube.com/watch?v=VQUBYaAgyKI

While I frequently use a similar argument, "We need someone 'untainted' to provide a different point of view", my honest opinion is somewhat more nuanced. These models tend to gravitate towards a certain level of writing competence, based on how good we are at filtering pre-training data and creating supervised data for fine-tuning. However, that level is still far below where my current professional writing is, and I find it dreadful to read compared to good writing. Plenty of my students cannot "see" this, as they are still below the level of current LLMs, and I caution them not to rely too heavily on LLMs for writing, as they may then never learn good writing and "reach above" LLM-level writing. Instead, they must read widely and reflect, and I always provide written feedback on their writing (rather than making edits myself) so that they must incorporate it manually; in doing so, they consider why I disagree with the current writing and hopefully learn to become better writers.

by ninjin

3/31/2026 at 2:57:07 AM

Bitterpilled. Wow, the audio mixing on that clip is great. I miss art like this. I'm afraid that nothing will recapture the way I felt watching GITS the first time.

by prodigycorp

3/31/2026 at 5:14:32 PM

There are so many pieces of media that I wish I could fully scrub from my memory to experience for a second time.

by packetlost

3/31/2026 at 1:35:04 AM

Agree. Also, deference to consensus has always been a thing. "Best practices" is a thing at all levels of school and work. So it's very much a human thing, AI drastically compresses the timeline.

Importantly, it's not wrong. I say this as someone who seems to have the contrarian gene. I am worried too that the status quo is now instant and all-consuming for anyone anywhere. But there's still hope in that AI compresses ramp-up speed for anyone who would have the capacity to branch out anyway. So that's good.

by apsurd

3/31/2026 at 2:13:06 AM

I think LLM writing is probably a short term fad. It doesn't provide any value and no one likes reading it. That said, anywhere where value can be extracted by posting writing will be completely destroyed by LLMs as people try to grift their way in.

Either we find some way to filter out AI slop or the internet just stops getting used to post and consume content.

by Gigachad

3/31/2026 at 2:27:31 AM

It feels like we’re shifting the cost of writing from the author to everyone else.

Easier to produce, but now every reader has to do the filtering.

by heyethan

3/31/2026 at 2:41:35 AM

It's similar to the "workslop" problem where you can generate reports and documents rapidly, but the real work has shifted to the receiver who has to review and correct mistakes. In open source this has moved to the PR review being the actual work while generating the code and submitting it is worthless.

Obviously this is nonsensical long term. Why would I want to receive your LLM output when I could get the same output myself?

by Gigachad

3/31/2026 at 2:52:43 AM

If everyone can generate, output stops being the signal.

The bottleneck becomes judgment, and who’s willing to stand behind it.

by heyethan

3/31/2026 at 1:47:20 AM

[dead]

by alcor-z

3/30/2026 at 10:53:08 PM

I feel like LLMs are just forcing me to realize what writing actually is. For me, writing is basically a mental cache clear. I write things down so I can process them fully and then safely forget them.

If I let an LLM generate the text, that cognitive resolution never happens. I can't offload a thought I haven't actually formed, and hence I have trouble safely forgetting about it.

Using AI for that is like hiring someone to lift weights for you and expecting to get stronger (I remember Slavoj Žižek equating it to mechanical lovemaking in a recent talk somewhere).

The real trap isn't that we writers will be replaced; it's that we'll read the eloquent output of a model and quietly trick ourselves into believing we possess the deep comprehension it just spit out.

It reminds me of the shift from painting to photography. We thought the point of painting was to perfectly replicate reality, right up until the camera automated it. That stripped away the illusion and revealed what the art was actually for.

If the goal is just to pump out boilerplate, sure, let AI do it. But if the goal is to figure out what I actually think, I still have to do the tedious, frustrating work of writing it out myself.

by enduku

3/30/2026 at 11:03:44 PM

This reminds me of a talk I attended many years ago, given by the director of UChicago's writing program (I later found a recording of the talk [0]). His thesis was that writing IS the process of thinking. That talk changed the way I write and made writing a primary tool I reach for when I want to learn something new.

Words / language are the great technology we've made for representing ideas, and representing those ideas in the written word enables us to evaluate, edit, and compose those smaller ideas into bigger ideas. Kind of like how teachers would ask for an explanation in my own words, writing down my understanding of something I'd heard or read forced me to really evaluate the idea, focus on the parts I cared about, and record that understanding. Without the writing step, ideas would easily just float through my mind like a phantasm, shapeless and out of focus and useless when I had a tangible need for the idea.

I am glad I learned to write (both code and text) long before Claude came online. It would have been very hard to struggle through translating ideas from my head into words and words (back) into ideas in my head if I knew there was an "Easy button" I could hit to get something cogent-sounding. I hope a large enough proportion of kids today will still put in the work and won't just end up with a stunted ability to write/think.

[0] https://www.youtube.com/watch?v=vtIzMaLkCaM

by modriano

3/30/2026 at 11:11:55 PM

I'm not sure - being able to take something like a casual response to a post and change it to iambic pentameter with the easy button could be a great way of learning how to do that off the cuff.

Though I’m unsure, this notion comes to mind:

to take a casual reply to a post

and turn it, with an easy button’s press,

to flawless iambic pentameter

might be the finest way to learn the art

of speaking thus extempore, off the cuff.

It's not perfect, but I envy the wealth of tools this generation has. They'll find uses for AI that leave us in awe.

by observationist

3/30/2026 at 11:42:08 PM

It's worth emphasizing here that you haven't changed it to iambic pentameter with an easy button. Your A lines are, but the Bs butcher it.

by Peritract

3/30/2026 at 11:36:44 PM

You can do that interactively with LLMs. Instead of aiming for a finished product you ask a question, then refine your understanding with more questions.

"So if that is true then this next statement is also true..." and the LLM will either agree or disagree.

There are lines between writing as a persuasive medium, writing as a didactic medium for teaching, writing as a creative/poetic medium, writing as the process of creation of marketable products, writing as a shared summary of specialist niche knowledge, and writing as an aid to personal comprehension.

Those are fundamentally different activities. They happen to use the same medium, and there are some overlapping areas. But they're essentially different activities with different requirements and different processes.

There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.

The generic templated slop styles - rule of three, "it's not this, it's that", bullet points, "that's rare", strained, weird, or cringey similes, and the other tics - that appear all over social media are the low-skill default for AI writing. It doesn't have to be that crude or obvious, and learning how to push it beyond that is a skill in itself.

As is creating knowledge engineering systems that use agents to manage knowledge in useful ways, with writing as one possible output.

by TheOtherHobbes

3/30/2026 at 11:39:50 PM

> There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/ themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.

You already have this. Control over your writing is the default position.

by Peritract

3/31/2026 at 12:29:07 AM

> You can do that interactively with LLMs. Instead of aiming for a finished product you ask a question, then refine your understanding with more questions.

Yeah, I regularly spend a lot of time with Claude fleshing out ideas and scoping out features. I'm behind the times and just use the chat interface rather than Claude Code, so perhaps there are controls I'm not aware of, but there can't be any that make it correctly understand an under-specified idea, or even correctly understand an adequately specified idea.

For example, I've been playing around building a side project that involves a safety-weighted graph to support generating safer bike routes. I was recently working on integrating traffic control devices (represented as OpenStreetMap nodes) into the model where I calculate weights for my graph (I essentially join the penalty for the traffic control device onto the destination end of an edge). Claude kept wanting to average that penalty over the length of the edge, as that makes sense for some other factors in my model (crashes, surface material, max speed, etc.), but it doesn't make sense for traffic control signals at intersections: the length of an edge shouldn't change the risk a cyclist experiences going through an intersection. If I didn't have a well-developed ability to closely parse words into ideas, I could have very easily taken the working model Claude generated and built more on top of it, setting up a dangerous situation where the routing algo would promote routes running a user through more intersections (which are the most dangerous places for cyclists).
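To make the distinction concrete, here is a minimal sketch of the two weighting schemes; every name and constant is a hypothetical illustration, not the actual model:

    # Minimal sketch of the edge-weighting distinction described above.
    # All names and constants are hypothetical illustrations.

    def edge_weight(length_m: float, per_meter_risk: float,
                    signal_at_destination: bool,
                    signal_penalty: float = 25.0) -> float:
        # Exposure-type factors (crashes, surface, max speed) scale with length.
        weight = length_m * per_meter_risk
        # A traffic signal is a flat cost for crossing one intersection,
        # attached to the destination end of the edge.
        if signal_at_destination:
            weight += signal_penalty
        return weight

    def edge_weight_averaged(length_m: float, per_meter_risk: float,
                             signal_at_destination: bool,
                             signal_penalty: float = 25.0) -> float:
        # The averaged-over-length version: the same intersection looks
        # cheaper to cross the longer the approach edge is.
        weight = length_m * per_meter_risk
        if signal_at_destination:
            weight += signal_penalty / length_m  # wrong for intersections
        return weight

With the averaged version, a shortest-path search would systematically discount intersections reached via long edges, which is exactly the failure mode described.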

I hope a comparable proportion of kids coming up today will spend the time and energy to understand the ideas behind the text and the code, but I really doubt 18-year-old me would have had that wisdom. I would have been under-specifying what I wanted out of a lack of prerequisite knowledge, receiving slop, and either promptly getting lost in debugging hell or, more likely, hitting the worse case: erroneously believing the slop satisfied the brief.

> There are lines between writing as a persuasive medium, writing as a didactic medium for teaching, writing as a creative/poetic medium, writing as the process of creation of marketable products, writing as a shared summary of specialist niche knowledge, and writing as an aid to personal comprehension.

> Those are fundamentally different activities. They happen to use the same medium and there are some overlapping areas. But they're essentially different activities with different requirements and different processes.

In all of those areas, if you take away the human who can develop value-creating ideas into an accurate and high-fidelity written representation, you will just get slop. Developing ideas and representing them in words is the skill. There is no substitute.

by modriano

3/31/2026 at 4:26:26 AM

I realized that running one's own writing through an LLM reduces the amount of information in it. Sort of like washing the nutrients out of a fruit.

When we write about something, inevitably, things about us leak into our writing. How we think about this thing, our value judgments about it, how much we thought about it, whether our perspective and thoughts on it are aged or fresh all come through, even if we don’t intend to. All of this information builds trust, helps the reader empathize and see our point of view.

When our writing passes through an LLM, most of these are simply lost. What comes out is an average expression of those thoughts, with all the sharp edges (its character, its essence) removed.

All writing is opinionated, and when it runs through an LLM, it comes out opinion-less. I noticed that I don’t care for opinion-less writing. Or people.

One exception is the official Python documentation. I recently read some of the new documentation, and realized that it reads almost exactly as I first read it in 2010. I couldn’t believe it. Low opinion, high information density. I know for a fact that it has opinions in parts, but it’s shockingly infrequent.

by ozozozd

3/30/2026 at 7:10:40 PM

Outsource things that aren't valuable to you and your core mission. Do the things that are valuable to you and your core mission.

This applies at a business level (most software shops shouldn't have full-time book keepers on staff, for example), but applies even more in the AI age.

I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.

Same with writing. There's an old joke in the writing business that most people would rather have written than go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.

When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.

Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive, and having an LLM involved breaks that.

So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.

by PaulRobinson

3/30/2026 at 7:16:25 PM

> Outsource things that aren't valuable to you and your core mission.

When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.

In the office, that review step gets outsourced to your coworkers.

Having a coworker who has ChatGPT generate slides, design docs, or PRs is terrible, because you realize that their primary input is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.

by Aurornis

3/30/2026 at 10:09:33 PM

I have a feeling that the same idea absolutely does apply to code. Writing code is much closer to writing prose than it may seem. And the act of writing code also makes you think as you write. Even if you're writing boilerplate. Because how else would you uncover subtle opportunities to reduce the boilerplate and introduce new, better abstractions?

by nbaksalyar

3/31/2026 at 6:31:40 AM

I agree. The act of coding is when I do my thinking.

by JetSetIlly

3/31/2026 at 5:41:55 AM

That's a good point. When I first started programming web backends in PHP it quickly became apparent that what I was doing was very repetitive and formulaic. I could see that it would be possible to write some kind of framework to reduce the amount of repetition. I was nowhere near a good enough programmer to do it at the time, but I was maybe a couple of years ahead of Ruby on Rails and Django. I wasn't the only one to spot this.

What worries me is, if we had LLMs back then would we ever have bothered inventing RoR/Django? Why bother when we don't feel the pain of repetition every day? How would we ever develop instincts about the code if we can't even read it, much less write it? I feel like we are heading full steam into a depression. An era of stagnation where we forget how to do anything original. Idiocracy comes to mind. It might take a decade to correct.

by globular-toast

3/30/2026 at 8:14:44 PM

I had an interesting experience the other day. I've been struggling with some lyrics to a song I am writing. I asked Claude to review them, and it did an amazing job of finding the weak lines and best lines, and nearly perfectly articulating to me why they were weak or strong. It was strange because the output of the analysis almost perfectly mirrored my own thoughts.

When I asked it for alternatives/edits, however, they were not good.

by locusofself

3/30/2026 at 9:33:06 PM

The author articulates perfectly what I think too. I’d recommend for everyone to read Writing to Learn by William Zinsser. It’s an incredible book showing that you can learn anything by writing about it.

With an LLM doing all the writing for you, you learn close to nothing.

by jilles

3/30/2026 at 8:29:58 PM

I feel very much the same way about debugging: it is through the process of repeatedly being slightly wrong that I come to actually understand what is happening.

Sometimes an LLM can shortcut me through a bunch of those misunderstandings. It feels like an easy win.

But ultimately, lacking context for how I got to that point in the debugging litany always slows me down more than the time it saved me. I frequently have to go backwards to uncover some earlier insight the LLM glossed over, in order to "unlock" a later problem in the litany.

If the problem is simple enough the LLM can actually directly tell you the answer, it's great. But using it as a debugging "coach" or "assistant" is actively worse than nothing for me.

by jcalvinowens

3/31/2026 at 1:57:48 PM

Using LLMs to write defeats the whole purpose. When you write, you learn: you see where you had gaps in knowledge and understanding, and it prompts you to go back and fill them. It helps bring out and sharpen your reasoning. Using an LLM to write is like using a forklift at the gym. You learn nothing.

by Dev-Chris

3/31/2026 at 12:25:38 AM

LLMs are not very good at generating ideas - unless you ask them to go crazy, in which case they can generate interesting ideas that quickly become repetitive. But the initial set is actually good, in my own experience.

As for writing, we need to keep in mind that LLMs are tools that augment. So yes, if you completely abdicate all responsibility to the LLM, that is basically not constructive at all. But if you use it as a tool, what difference does it make? Spell and grammar checkers are also changing your text, and of course I am exaggerating a little.

And I do think LLMs can help you think better, but not in a default mode. It is not about prompting skills but about making it work the way you want it to. And that takes time because, well, it is not deterministic, and it requires understanding how you generally think and write. Most of the time it might not be possible. For others it works really well; maybe because they write like an LLM?

Btw, we often forget that English is not native for the majority of people on this planet.

IMHO, using LLMs to express yourself clearly is many times better than remaining misunderstood.

by _pdp_

3/31/2026 at 10:44:37 AM

I wonder if anyone from HN would be willing to input their opinion into some features I'm building, which is really outputting LLM generated writing.

It's basically automating release notes and sprint summaries from existing systems like Jira and Linear. The target user is a product team; the target readers are business stakeholders who want to validate your existence. I've found this process to be stupidly time-consuming for both our delivery manager and whichever dev they decide to tap on the shoulder to help contextualize tickets.

I feel like LLMs are really good _summarizers_, and they can easily highlight when your tickets don't have enough context for actual people: if even an LLM can't write a summary with good enough context, the ticket was underspecified to begin with.

Idk, maybe it's a sensible use case because you REALLY don't want novel ideas from the LLM in this case. You want it to tell you 1:1 what you did this sprint, based on a list of issues.
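For what it's worth, that "1:1, nothing novel" constraint is easy to state in the prompt itself. A minimal sketch of the prompt-building step; the issue fields and any fetch/model calls are hypothetical stand-ins, not the real Jira/Linear or model APIs:

    # Hypothetical sketch: turn a sprint's closed tickets into release notes.
    # The issue dict fields and any fetch/model calls are invented stand-ins.

    def build_release_notes_prompt(issues: list[dict]) -> str:
        ticket_lines = "\n".join(
            f"- [{i['key']}] {i['title']}: {i['description']}" for i in issues
        )
        return (
            "Write release notes for business stakeholders.\n"
            "Use ONLY the tickets listed below; do not invent features, "
            "outcomes, or metrics.\n"
            "If a ticket lacks enough context to summarize, list it under "
            "'Needs clarification' instead of guessing.\n\n"
            + ticket_lines
        )

The 'Needs clarification' instruction is what surfaces under-specified tickets, which is the signal mentioned above.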

by radicalriddler

3/31/2026 at 11:26:16 AM

I'm saying more and more: "if you don't have the time to write it, then I don't have the time to read it". Therefore my first impression is: if the process is so formulaic that you can automate it, then the content itself cannot be of any interest, and the whole song and dance should probably be scrapped altogether. Think of a person asking ChatGPT "make this one-liner sound professional" and then sending it to someone who auto-summarizes it.

You mention that the target audience is "stakeholders who want to validate your existence", which makes me think that your target audience doesn't really care about what you actually did but rather about being heard. If that's the case then replacing the Delivery Manager (who is arguably doing a good job) with a machine that screams "I want to think about you as little as possible" is definitely a risk. It may work well to provide the DM with a first draft, though.

Disclaimer: I don't know your team nor stakeholders and I'm probably not in your industry.

by probably_wrong

3/31/2026 at 1:39:45 PM

I find it depends on context. I write a lot as an academic and author. When I need to generate functional content that has a specific purpose (knowledge base, transfer of information, etc.), I will use AI where it makes sense. Where I write to explore ideas, develop my own thinking, and connect with others in a very relational way, I intentionally do not use AI. Plus, when I do this, writing is an extension of my identity, and I'd rather not give that away!

by 2020science

3/31/2026 at 5:14:42 PM

I do my own writing, but I can go so much faster by just writing all of my thoughts down, organizing them a bit, and using LLMs to clean up and better organize my thinking.

by balderdash

3/30/2026 at 7:47:14 PM

I'm 100% an advocate for not using LLMs for writing... but I'll tell you where I use them for just that: for ceremonies.

A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate and need approval from different stakeholders. Everybody stamps their name on them without ever reading them. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize, someone is happy, and they are filed for the records.

I call these "ceremonies" because they are a requirement we have, it helps no one, we don't know why we have to do it, but no one wants to question it.

by firefoxd

3/30/2026 at 7:59:26 PM

If it's very mechanical you probably aren't losing out on anything by using a machine to produce it.

by criddell

3/31/2026 at 3:51:13 PM

this maps to what im seeing in visual AI too. rendering quality is insane now, nobody disputes that. but most AI generated images look generic and flat because there's no direction behind them

same thing with writing imo. the output quality is technically fine but if you didnt wrestle with the ideas yourself the result reads like no one actually thought about it

writing forces you to confront where your thinking is vague. directing a photo shoot does the same thing actually, the moment you have to commit to a specific angle or framing you discover what you really want to say. skip that step and you get competent emptiness

by theAurenVale

3/31/2026 at 4:05:06 PM

If we see intent as a real part of the information, one which substantially influences the output, then this is perfectly explainable: intent is not intrinsic to any artificial generation. Intent can only be given by external input, and it will therefore always be transformed by the statistical operations that happen during the generation of the output. Wrestling with the ideas is how you find the sweet spot between intent, the anticipated outcome, and the anticipated recognition of the result by others. Even if we put intent into the generation, the model still cannot anticipate the consequences of a certain output in certain contexts. And the core of this problem is, in my opinion, the missing causal chain: without it, no causal link between the generation and its consequences can be established, and therefore no "depth" can be found in the generated artifacts.

by HeinrichAQS

3/31/2026 at 6:15:47 AM

> The goal of writing is not to have written.

A certain percentage of comments I write on social networks end up being deleted before even clicking post. Sometimes after spending 10 or 15 minutes writing it.

The reasons are many, and I've long suspected I shouldn't feel like I'm throwing my time away when this happens.

Now I have a way to remember why.

by ghurtado

3/30/2026 at 7:28:47 PM

Nowadays my writing (and maybe all of ours) has totally devolved into "prompt-ese." Much like the days of yore when we all approached Google searches with acrobatic language, knowing how to phrase things to specifically get something done.

Now? I am pushing so much of my writing into AI prompts, where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar are mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to the AI but may be lost on people. Or at least pre-prompt-writing people.

by bluepeter

3/30/2026 at 7:48:50 PM

> Nowadays my writing (and maybe all of ours)

No. Don't pretend your taking shortcuts is less questionable because everyone else is doing it too. We're not. Own it yourself, don't get me involved.

> I am able to be so much more effective by sheer volume of words

If you think value comes from volume of words you really need to understand writing better.

by add-sub-mul-div

3/30/2026 at 8:08:19 PM

Ok, but 3 generations ago, shorthand was a core skill that any competent professional could read and extract MORE value from than laboriously typeset prose. Something similar is probably happening now with prompt-ese and human-to-human (vs just AI) writing.

by bluepeter

3/30/2026 at 9:48:18 PM

> Nowadays my writing [] has totally devolved into "prompt-ese."

I've noticed this myself. Even in my Obsidian vault, which only I read and write in. I think it's a development into writing more imperatively, instinctually. Thinking more in instructions and commands than the speaking and writing habits I've developed organically over my life. Or just "talking to the computer" in plain English, after having to convert my thoughts to code anytime I want to make it do something.

I've been thinking about the role of "director" in media as an analogy to writing with LLMs. I'm working right now on an "essay," that I'm not sure I'll share with anyone, even family (who is my first audience). Right now, under the Authorship section, I wrote "Conceived, directed, and edited by Qaadika. Drafted by Claude", with a few sentences noting that I take responsibility for the content, and that the arguments, structure, audience, and editorial judgments are mine.

I had a unique idea and started with a single sentence prompt, and kept going from there until I realized it should be an essay. So the ideas in it are mine. The thesis is mine. I'm going back and forth with the LLM section by section. Some prompts are a sentence. Some are eight paragraphs. I can read the output and see exactly what was mine and what the LLM added. But my readers won't. They'll just see "Author: Qaadika" and presume every single word was mine. Or they'll sniff out the LLM-ness and stop reading.

I can make a film and call myself director without ever being seen in it. Is it the same if I direct the composition of words without ever writing any of the prose myself? Presuming I've written enough in prompts that it's identifiably unique from cheaper prompts and "LLM, fill in the blank".

We credit Steven Spielberg with E.T. But he didn't write the screenplay. He probably had comments on it, though. He didn't operate the camera. But he probably told the operators where to put it. He didn't act in it. But he probably told the actors where to stand and where to move and how to be. He didn't write the music. But he probably had a sense of when and where to place it in the audio. And he didn't spend every moment in the cutting room, placing every frame just so.

But his name is at the top. He must have done something, even if I can't point to anything specific. The "vibe" of the film is Spielberg, but it's also the result of hundreds of minds, most of whom aren't named until the end of the film, and whose names are probably never read by most viewers.

His contribution to the film was instructions. Do this, don't do that. Let's move this scene to here. This shot would be better from this angle. The musical swell should be on this shot; cut it longer to fit.

So where, exactly, is "Spielberg" in E.T.? What can we objectively credit him with, aside from the finished product: E.T. the Extra-Terrestrial: Coming June 1982?

by qaadika

3/31/2026 at 3:45:04 AM

Uh, Steven Spielberg is all over E.T. For one thing, he storyboarded the big special effects sequences. He collaborated closely on the screenplay because it was drawn from his own childhood experiences. He was the final say in casting. His relationship with editor Michael Kahn is famously collaborative.

I think comparing your telling an LLM what to do to Steven Spielberg directing a movie just shows a total lack of understanding of how movies are made, and it also inflates your sense of yourself.

by throw4847285

3/31/2026 at 4:12:03 AM

> Uh, Steven Spielberg is all over E.T. For one thing, he storyboarded the big special effects sequences. He collaborated closely on the screenplay because it was drawn from his own childhood experiences. He was the final say in casting. His relationship with editor Michael Kahn is famously collaborative.

That's all meta. Trivia. Decisions he made or feedback he gave that, while influencing the final product, cannot be observed in the final product. (E.g., show me the actual Spielberg-drawn storyboard in the film; it doesn't exist, because the storyboard turned into a sequence of shots made by the cinematographer, instructing the camerawoman to point the camera at the actors lit by the gaffers, or into a work breakdown structure then followed by the SFX team painstakingly drawing it frame by frame.) No one but Spielberg could say "That part was me, this part was Kahn's." I can't find any of that out just by watching the movie. When I engage with a piece of media, I presume the author is dead. What is in the media is canon, and what's not in it isn't. The behind-the-scenes, the director's biography, and the interviews aren't part of the art. Art shouldn't rely on "Oh, it's good, or even better than you thought it was, once you know this cool fact or that wild story from production."

Star Wars isn't good only because George Lucas was a genius, or because they spent a lot of time on the models and tried a cool new text intro sequence, or because of any of the other novel effects. Lots of movies spend a lot of time in production, with a lot of experts and a lot of novel ideas, and still fail. Star Wars is good because the finished movie is good. We credit Star Wars generally as George Lucas's brainchild, but if you know the backstory, it's only good because he had good editors to rein him in. But that's meta. Nobody knew that in 1977. They just knew they enjoyed the movie and it said "written and directed by George Lucas."

When I watch the movie I don't see the storyboard, or the redlines in the screenplay, or the casting notes, or the conversations and discussions with Kahn. All I know from the movie is the credits, and the credits don't say "Written by Melissa Mathison (with close collaboration by Spielberg based on his childhood experience)". Those are, from a lay viewer's POV, 'facts not in evidence.'

E.T. was a single example. I'm comfortable claiming my argument applies to all directors of all films, and to all forms of art created by more than one person. Another example: "Over The Edge" and "Off The Wall", two books about deaths in US national parks. They each have two authors. Only one author co-wrote both of them. To whom do I credit my love for those books? Only to Ghiglieri, since I can see the consistent tone between them? That would be unfair to Myers and Farabee. Only to Myers and Farabee, because they're the park rangers who witnessed a number of the emergencies and deaths? That would be unfair to Ghiglieri. What about the editors, who surely worked hard to make books that are basically lists of stories about death read as a cohesive narrative? My only option is to credit all the authors, and everyone else involved, equally, and not try to break down paragraphs between "this author wrote this one, and that author wrote that one." They didn't distinguish, so I can't either. [1]

I'm all over my essay. I drafted and organized the original outline. I've made substantial changes to the order of paragraphs and to what and how the arguments are built and developed, based on my personal experiences. I have the final say on whose quotes are included and which ones are cut. My relationship with myself is famously collaborative (famous among my family and friends).

None of that matters to the reader. Whether I wrote it myself or with a friend, or used a ghostwriter, or used an LLM, the audience is going to credit or blame it on the name at the top. My papers in college weren't graded based on whether I spent 300 hours on them and revised them 20 times, or on whether it was I or my classmate who coined that pithy line I then used throughout, or on whether I used niche knowledge about the subject that I knew before taking the class. That's trivia. They were graded on the final single copy I submitted. I got one chance.

The only difference between an essay of mine being written by a ghostwriter I hired and an LLM is that the LLM output is always going to sound like an LLM. They are identical in that neither of them are "me". The ghostwriter will sound either like the ghostwriter or like the ghostwriter trying to write like me. But whether I hired a ghostwriter and published their work under my name, or if I used an LLM and the audience didn't notice, at the end of the day they'll credit or blame me entirely, because my name is at the top, no different as if I'd written the entire thing from scratch. I have no excuses except for the final product.

For this essay specifically, if I ever did release or publish it, it would be under my real name. First, because I've never liked being "anonymous" online (I feel I never act or write like myself unless I'm speaking under my own name; the opposite of most people, in my experience), and second, because I would want the reader to know that there's a human they can credit or blame for it. I guess for me that's the tradeoff. When anonymous, I won't use LLMs, because my ethos comes from being (and sounding like) a human being who merely doesn't want to share their name. Under my real name, however, I feel more comfortable saying "directed and edited by [real name], drafted by [llm]," because then the reader can decide if the ethos associated with my real name and affiliations is strong enough to justify reading a logos and pathos that the human freely admits is not entirely from their own fleshy brain.

[1] They do, actually, at times. When one of the authors was directly involved in one of the stories and is recounting their personal experience, they will write "I (Myers)..." or "I (Farabee).." Aside from that they do not say who wrote what, or who influenced who.

by qaadika

3/30/2026 at 7:26:00 PM

You quote this:

> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?

Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.

I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.

by fleebee

3/30/2026 at 7:31:10 PM

I take "ideas" here to mean well-scoped replies, like "list the pros and cons of this vs. that git flow". While someone might think of N issues, the LLM might present another six, out of which three or four don't make sense but one or two do. Might be worth adding those to the document.

by atmosx

3/31/2026 at 12:39:22 PM

A lot of the value is in discovering the holes in your own reasoning while trying to make it legible to someone else

by KolibriFly

3/30/2026 at 8:13:30 PM

> LLMs are useful for research and checking your work.

I have to disagree that it's good for LLMs to do the research, depending on the context.

If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great.

If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing.

by D13Fd

3/31/2026 at 5:23:20 AM

The cognitive benefit of writing comes from externalizing and evaluating ideas under friction. LLM conversation provides more friction per unit time than solo drafting because you're constantly reacting to a semi-competent interlocutor who gets it almost-right in ways that force you to articulate exactly where it went wrong.

I checked my logs, and I write 10 words in chat for every 1 word of LLM output that makes it into the final text. So it's clearly not making me type less. I used to type about 10K words per month; now I type 50-100K words per month (LLM chat is the difference).

The surplus capacity provided by LLMs got reinvested immediately in scope and depth expansion. I did not get to spend 10x less time writing.

by visarga

3/30/2026 at 10:14:32 PM

I wrote about something similar this week[0]. Beyond doing your own writing and understanding the outcomes that you want clearly, there is an increasing need for us to write our own docs/tickets as all of these are also the prompts.

Docs written by agents almost always produce mediocre results.

[0] https://news.ycombinator.com/item?id=47579977

by bushido

3/30/2026 at 8:19:27 PM

The way I approach having an LLM help with writing documents like this is to have it help me clean up my writing, not write the substance of it.

I tend to do extensive research (a process that would itself involve LLMs too, sure) for a tech plan, a product spec, etc., and usually end up with a really solid idea in my head and, say, five critical key points about this tech plan or product spec that I absolutely must convey in the document.

Then I basically "brain dump" my critical key points (including everything about it, background/reasoning, why this or that way, what's counterintuitive about it, why is this point important, etc.) in pretty messy writing (but hitting all the important talking points) to a LLM prompt, asking it to produce the document I need (be it tech plan, product spec, whatever) and ask it to write it based on my points.

The resulting document has all the important substance on it this way.

If you use an LLM to produce documents like this by way of a prompt like "Write a tech plan for the product feature XYZ I want to build", you're going to get a lot of fluff. No substance, plenty of mistakes, wrong assumptions, etc.

by godot

3/30/2026 at 9:22:26 PM

I use LLMs for compilation of information sometimes. I'm a teacher, and I sometimes use one to hack together a quick worksheet for my students. When I see they need some practice with a certain concept, I get the LLM to generate a LaTeX doc, which I compile to PDF. I find it can be particularly useful for document creation, but it is horrible at writing anything in sentence form. It stinks, and it's not great at conveying my voice.
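For anyone who hasn't tried this workflow: the model emits an ordinary standalone LaTeX file that compiles with pdflatex. A minimal sketch of the kind of skeleton involved (the content is an invented placeholder, not an actual worksheet):

    % Hypothetical example of an LLM-generated worksheet skeleton.
    % Compile with: pdflatex worksheet.tex
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    \section*{Practice: Solving Linear Equations}
    \begin{enumerate}
      \item Solve for $x$: $3x + 5 = 20$
      \item Solve for $y$: $\frac{y}{4} - 2 = 7$
    \end{enumerate}
    \end{document}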

I will sometimes write a lesson and have an LLM generate a quiz, or give me feedback on my content, searching for mistakes or unclear passages.

I have also used it to help me structure a document. I give it requirements, it makes a general outline, and I then just fill it in with my own words.

I’m still not sure how to approach my students’ uses of an LLM. I am loath to make a hard and fast rule of no LLMs because that’s ridiculous. I want to encourage appropriate use. I don’t know what is appropriate use in the context of a student.

An LLM can be a great learning tool but it also can be a crutch.

by Jbird2k

3/31/2026 at 5:57:01 AM

For the same reason I prefer writing with a keyboard over handwriting, I prefer writing with an LLM over manually typing these days. I end up spending the same amount of time rewriting and editing a text as I would have otherwise, but instead of worrying about grammatical mistakes or the flow of the text, I spend 100% of my time getting my idea across.

Of course you can be lazy with LLMs, and I can usually tell if it's one-shotted as well, but if you are a good writer, you'll get 10x out of using an LLM to write.

by instalabsai

3/31/2026 at 8:18:19 AM

I recently added an AI Policy to my blog (https://rory.codes/ai-policy). The assumption now appears to be "guilty of AI unless proven innocent", which I think is a travesty for readers and writers alike.

by mrroryflint

3/31/2026 at 4:37:12 PM

I use bullet points a lot in my writing, and it seems that, specifically, is causing people to accuse me of either:

    * Being an AI bot.
    * Using an LLM to generate it.
It's driving me bonkers!

by riskable

3/30/2026 at 7:15:37 PM

>Letting an LLM write for you is like paying somebody to work out for you.

It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique).

With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_.

That's why I think we're seeing an inverse relationship between ideation output and coherence (and, perhaps, a rise in unoriginality), along with a decline in creative thinking and creativity. [0]

[0] https://time.com/7295195/ai-chatgpt-google-learning-school/

by fraywing

3/30/2026 at 8:19:11 PM

Writing down specs for technical projects is a transformational skill.

I've had projects that seemed tedious or obvious in my head, only to discover hidden complexity when trying to put their supposed triviality into written words. It really is a sort of meditation on the problem.

In the most important AI-assisted project I've shipped so far, I wrote the spec entirely myself first. But feeding it through an LLM feedback loop felt just as transformational: it didn't only help me get an easier-to-parse document, it helped me understand both the problem and my own solution from multiple angles, and allowed me to address gaps early on.

So I'll say: Do your own writing, first.

by 6thbit

3/30/2026 at 7:29:12 PM

I fully agree with the sentiment of the article. I will say that I feel I've had some success having an LLM outline a document, provided that I then go through and read/edit thoroughly. I think there's even an argument that this a) possibly catches areas I have forgotten to write about, and b) hooks into my critique mode, which sometimes feels more motivated than author mode (I'm slightly ashamed to say). This does come at the cost, however, of not putting myself in 'researcher' mode, where I go back through the system I'm writing about and follow the threads, reacquainting myself and judging my previous decisions.

by gbro3n

3/30/2026 at 9:19:51 PM

The rational response to document overload is to mostly ignore it.

Workers and managers in organizations are being overwhelmed by large numbers of documents because it's so easy to bang something out that's 'about right' and convincing enough.

But there's still some value in writing documents. I agree with the original article - it's all about thinking. My take on it is this: it's possible to use LLMs to write decent documents so long as you treat the process as a partnership (man and machine), and conduct the process iteratively. Work on it, and yes, think.

by fallinditch

3/31/2026 at 5:16:22 PM

The trust component is so critical here. When I get halfway through reading a design doc and hit a part that's obviously slop, it really hurts my confidence in the project and in any faith in the developer having done their due diligence.

Certain communications, especially technical writing, are "expensive" both in terms of the effort of the author(s), and in terms of the person-hours of people reading them to gain understanding. Like mission-critical code, they should be written and reviewed with care, and at the very least heavily edited from an automated LLM output to be unrecognizable as such.

I personally don't use LLMs at all in my designs and I remain skeptical of the value proposition for those who do.

by Willish42

3/30/2026 at 10:52:28 PM

I think we're beyond this already: most Claude/ChatGPT users just assume that everything is written by AI, because that's what they'd do. Credibility has been lost. But there are certainly many cases where human thinking will improve the final artifact, and I think it's enough to focus on improving quality rather than claiming some moral high ground.

by thisisrobv

3/30/2026 at 8:13:40 PM

There are a lot of ways to use an LLM; the least effective is automating an entire process, yet it's the most compelling.

To your point, it's entirely a balance. I personally will record a 10-15 minute yap session on a concept I want to share and feed it to an agent to distill it into a series of observations and more compelling concepts. Then you can use this to write your piece.

by bboynton97

3/31/2026 at 1:15:30 AM

I am not a native English speaker. I can read fine, but writing is a big struggle for me, especially formal and academic writing. AI does help me write better; of course, I'll review what the AI generated. (All of the above was not written by AI, obviously.)

by zhoujing204

3/31/2026 at 12:10:45 AM

This reduces writing to one concept, thinking, with the writing itself as just a byproduct. But writing is also presentation, and also communication.

There is nothing wrong with speechwriters. Various authors spilled out their thoughts in rough form and had writers turn them into better-structured, better-written, and more understandable projections. Hand-writing each sentence that is presented as an end product to the reader doesn't solve that problem.

Forcibly coupling the two is an arbitrary choice that may be a valid tradeoff for some and not so for others, and not so for _all_ writing.

I'm not good at looping through a document to produce proper English prose. My writing is raw and particular, and I gloss over a lot of details. LLMs help me turn my shitty, extensive notes, in bad grammar and syntax, into shareable and understandable artifacts. They help me turn more of my thoughts into communication that others can ingest. Without AI, I communicate less of my thoughts due to friction. My thoughts are formed and authored and written, but not in a format consumable by anyone else.

Ebikes help older riders keep riding.

by AYBABTME

3/31/2026 at 2:04:49 PM

Keep in mind that thoughts similar to yours produce the same output from an LLM. You may be thinking “my thoughts are original” and I would agree, but we won’t be able to see the original parts when it runs through an LLM.

by ozozozd

3/30/2026 at 7:15:33 PM

> Letting an LLM write for you is like paying somebody to work out for you.

This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok.

But I guess some people either chose the wrong job or had no other option. I'm happy not to be in that group.

by TrianguloY

3/30/2026 at 10:30:32 PM

I find LLMs particularly good at filtering and distilling a large rambling idea that I have into a well-formatted and coherent paragraph, and also at removing any statements that would be perceived as overly argumentative or rude.

In essence, LLMs are a much better spell check.

by codexb

3/31/2026 at 4:40:54 PM

It's interesting to see so many people agree with this perspective when it comes to articles yet disagree when it comes to writing software.

Perhaps it's some form of Gell-Mann amnesia: people are better at recognizing good articles than they are at recognizing good software. Combine that with the vibe-coding habit of never actually reading the source, and thus never recognizing how bad it is.

by notnullorvoid

3/30/2026 at 7:49:24 PM

> They are particularly good at generating ideas.

I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.

by drnick1

3/30/2026 at 8:19:22 PM

My blog is 100% written by me. You can tell because of all the typos.

I don't really understand why people create blogs that are generated by Claude or ChatGPT. You don't have to have a blog; isn't the point of a blog that it's your writing? If I wanted an opinion from ChatGPT, I could just ask ChatGPT for an opinion. The whole point of a blog, in my mind, is that there's a human who has something they want to say. Even if you have original ideas, having ChatGPT write the core article makes it feel kind of inhuman.

I'm more forgiving of stuff like Grammarly, because typos are annoying, though I've stopped using it because I found I didn't agree with a lot of its edits.

I admit that I will use Claude to bullshit ideas back and forth, basically as a more intelligent "rubber duck", but the writing is always me.

by tombert

3/31/2026 at 9:12:34 AM

Cut everyone. Paste er posters. Its 80s agi pop. Shoot in all directions. Twice.

by totierne2

3/31/2026 at 12:38:45 AM

I learned some of this the hard way. I did the thinking and the distillation, but I had AI write the prose, and that's all a lot of people saw: the AI tell-tales.

by jmatthews

3/30/2026 at 9:00:43 PM

the gym analogy lands. you don't hire someone to do your reps, but it's fine to hire a trainer to critique your form. that distinction matters when thinking about how to actually use these tools.

by vicchenai

3/30/2026 at 7:52:41 PM

Agree with the underlying point: "don't let an LLM do your thinking, or interfere with the processes essential to thinking things through clearly."

My own experience, however, is that the best models are quite good at helping you with those writing and thinking processes: finding gaps, exposing contradictions or weaknesses in your hypotheses or specifications, and suggesting related or supporting content that you might have included if you'd thought of it, but didn't.

While I'm a developer and engineer now, I was a professional author, editor, and publisher in a former life. I would have _killed_ for the fast, often excellent feedback and acceleration that LLMs now provide. And while, sure, I often have to say "no, no, no!" or delete-delete, "redraft this and do it this way," the overall process is faster and the outcomes better with AI assistance.

The most important thing is to keep overall control of the tone, flow, and arguments. Every word need not be your own, at least in most forms of commercial and practical writing. True whether your collaborators are human, mecha, or some mix.

by jonathaneunice

3/30/2026 at 9:13:33 PM

was curious how it might look if we started seeing prompts/steering inline with the finished product: https://dvelton.github.io/trace/examples/the-post.html

by deevelton

3/31/2026 at 3:34:29 PM

First impression: love that! Need to study it more.

We need—or at least I need—a better UI/tool to manage the sequence of edits and collaboration, drafting, rubber-ducking, and evaluation that AI tools provide. Including the prompts and edits is a nice feature, though I would also like more comparison "where we started" vs "where we are now."

by jonathaneunice

3/30/2026 at 9:15:57 PM

Writing well is a superpower, and even better in a world where “no one” is writing well.

LLMs write poorly because most people write poorly. They didn’t cause it, they simply emulate it.

by borski

3/30/2026 at 9:32:34 PM

I agree. As the amount of cheap content grows, I think we will come to appreciate condensed, to-the-point articles like this one.

by arbirk

3/30/2026 at 7:47:17 PM

>Don't Let AI Write For You

>Essay structured like LLM output

Hmmm...

by windowliker

3/30/2026 at 8:59:46 PM

I think a lot of people run their posts through an LLM after writing it and edit it accordingly, resulting in an output somewhere between human-made and AI-generated.

by paulpauper

3/31/2026 at 3:05:02 AM

I find it interesting that there are some who want to tell me how to use AI. Are these people politically on the left? I'm genuinely interested to know what kind of person does this.

by openenough

3/30/2026 at 8:37:07 PM

I write far better than any LLM... I've tried to get them to help me with writing, and they always fuck it up.

The biggest problem is they don't understand the time effort tradeoff between understanding and language so they don't know how to pack the densities of information properly or how to swim through choppy relationships with the world around them while effectively communicating.

But who knows, maybe they're more effective and I'm just an idiot.

by kristopolous

3/30/2026 at 9:16:44 PM

Bad AI writing is bad, and obvious once you know what you're looking for. Nobody wants to read it.

Good AI writing takes time, can be valuable, and can inspire readers to send in praise about how insightful or thorough a particular article was (speaking from experience). Why do it? The same reason we all use Claude all day to write code - it is faster / you can do more of it. But in the same way that a junior engineer vibing code is a lot more likely to produce slop than a grizzled senior who is doing the same thing, you have to know what you are doing to get good results out of it.

Pushing back against AI writing in 2026 is like the people pushing back against AI coding in 2024. It's not a question of if it will happen. It's a question of how to do it well. ;)

by smallerfish

3/30/2026 at 10:09:38 PM

Curious if people are actually using AI to publish real scientific papers?

by njmicali

3/30/2026 at 9:55:49 PM

so many thinkers/writers mistake writing prose for thinking. including Paul Graham. this is ABSOLUTELY not true.

You can write for yourself, through thinking, and it can be sloppy, bc you're doing it for yourself.

A homecooked meal does NOT look like a Thanksgiving meal.

Most of these writers think that all writing looks like a Thanksgiving meal - it doesn't. Homecooked meals can be simple, delicious, and not meant to cater to 20+ guests, from family to friends, each with their own weird peculiarities and food allergies.

writing for thinking should be more like home cooked meals- really disorganized, really sloppy, with none of the presentation, but with all the nutrition and comfort that comes with home cooked meals.

writing is thinking for me, but writing looks like this post; something shot from the hip, and unpolished, to be consumed for myself. it'll probably be downvoted, and that's absolutely ok

by yawnxyz

3/30/2026 at 7:20:07 PM

Well said. The most important part of writing is thinking. LLMs cannot do the thinking for you.

This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low information density paragraph, and then summarizing those paragraphs on the other end. What’s the point?

Unless the idea is trivial, LLMs are probably just getting in the way.

by janalsncm

3/31/2026 at 5:26:53 AM

I still don't understand the concept of "using an LLM to write". You have to write your context in. That is your writing! Just send me that!

I wrote about this ages ago. Just send me the prompt! https://blog.gpkb.org/posts/just-send-me-the-prompt/

by globular-toast

3/31/2026 at 1:23:07 AM

Write down the core facts, questions and answers in a very rough around the edges draft. Don't bother spell or grammar checking. But make sure the facts are all there.

Then, ask an LLM to fix up the article, make it look professional and fill in the "fluff". Explicitly tell it to not include facts not already in the document.

Review the document, and if it's all good, it's done.
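A minimal sketch of what that middle step can look like in practice. The prompt wording is illustrative only, not a known-good prompt, and call_llm is a hypothetical stand-in for whatever model API you use:

    # Illustrative only: one way to phrase the "polish, don't invent" step.
    # call_llm() is a hypothetical stand-in for a real model call.

    with open("draft.txt") as f:
        rough_draft = f.read()

    prompt = (
        "Rewrite the draft below into a polished, professional article. "
        "Fix spelling and grammar, and fill in connective fluff where "
        "needed.\n"
        "Do NOT include any facts, numbers, or claims that are not already "
        "in the draft. If something is ambiguous, leave it as written.\n\n"
        + rough_draft
    )
    # polished = call_llm(prompt)  # then review before calling it done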

by aussieguy1234

3/30/2026 at 9:55:01 PM

> The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.

Adults now have to have it explained to them, like children, that you can't just stream info through the eyes and ears and expect to learn anything.

That’s one explanation for this apparent need; there are also more sinister ones.

by keybored

3/30/2026 at 7:29:04 PM

Letting an LLM write for you is like paying somebody to work out for you.

The problem with writing is that the feedback tends to be inconsistent. At the gym you can track your progress quantitatively, such as how fast or far you can run or how much weight you lift, but it's sometimes hard to know whether you're improving at writing.

by paulpauper

3/30/2026 at 8:10:32 PM

A real fact, and it is an interesting point.

by sikiri_app

3/31/2026 at 3:22:36 PM

[dead]

by genadym

3/31/2026 at 1:32:38 PM

[dead]

by bhekanik

3/31/2026 at 5:13:15 PM

[dead]

by matevz_smallPMS

3/31/2026 at 4:21:34 PM

[dead]

by DevKoan

3/31/2026 at 12:24:16 PM

[dead]

by alex1sa

3/31/2026 at 11:13:19 AM

[dead]

by matevz_smallPMS

3/31/2026 at 12:05:09 AM

[dead]

by wei03288

3/31/2026 at 8:09:44 AM

[dead]

by microbuilderco

3/30/2026 at 11:53:26 PM

[dead]

by ryguz

3/31/2026 at 8:46:30 PM

[dead]

by berz01

3/31/2026 at 10:32:57 AM

[dead]

by Yash16

3/30/2026 at 6:50:36 PM

[flagged]

by simonreiff

3/31/2026 at 4:32:00 PM

[dead]

by black_13

3/30/2026 at 8:32:46 PM

[flagged]

by unsignedint

3/31/2026 at 11:22:25 AM

[dead]

by antryu

3/30/2026 at 8:39:27 PM

why do I feel that this post itself was written by AI

by fazkan