3/30/2026 at 7:16:59 PM
I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise, I also have numerous times where writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.
For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature to remember context only (and not really to explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
I think it's going to be awhile before the full impact of AI really works it's way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).
by roadside_picnic
3/31/2026 at 5:22:39 AM
> I can't tell you how many times an idea, that was crystal clear in my mind, fell apart the moment I started writing and I realized there were major contradictions I needed to resolve.

Perhaps because writing is a third-order exercise.
First order is thinking in one's mind: one has to talk with oneself. Second order is talking to someone else: we are directing our thoughts toward that person. But in writing, we have to imagine the reader and then write.
https://alandix.com/academic/papers/writing-third-order-2006...
by the-mitr
3/31/2026 at 5:39:39 AM
I think writing is writing to an audience which includes yourself.

When you're thinking, you are speaking in your mind, which means you can't really listen to yourself at the same time. You don't hear yourself from yourself. You are too busy talking (in your head, to yourself) to really think about what you just said to yourself. You are producing language, not consuming it.
But when you read what you have written, you can pause reading and do some thinking about what you just read. That makes it easier to understand what you are saying, and more easily see logical errors or omissions in it.
by galaxyLogic
3/31/2026 at 2:26:40 PM
I think this is correct. I told a coworker that when I edit my email drafts they get shorter. He was surprised and said that his get longer. I trim and refine. Sure, I add details that I missed at first. But I also create better structure and remove ambiguity or unnecessary words.

Yesterday, I was working on an email for someone I was trying very hard not to overwhelm with technical details. I cut it roughly in half in terms of words, but I also turned paragraphs into single lines of sequenced steps or concise statements, without decorating the text with unneeded aphorisms or commentary.
I was pretty pleased with the end result. This is only possible because of careful rereading and reflection (including knowing my intended audience). I imagine an LLM can approximate this, but I don't trust one to craft with the same level of care. Then again, we all think we're better than the robots at the things we care about most.
I understand the urge to throw mechanical writing at the bots. But a human will grasp the need to add a detail explaining the why of something when (the current) bots gloss over it. There's still nuance worth preserving.
by alsetmusic
3/31/2026 at 8:05:58 AM
> That makes it easier to understand what you are saying, and more easily see logical errors or omissions in it.

Rubber ducking with a pencil, kinda.
by zimpenfish
3/31/2026 at 4:44:25 AM
So often have I started writing a commit message about why I’d done something this way, and realised a problem or thought of another approach, and ended up throwing away the entire change and starting from scratch.

(Aside: you should probably write longer commit messages.)
by chrismorgan
3/31/2026 at 5:07:42 AM
Corollary: write the commit message first, implement things later! Not a joke: this is almost how TDD works. (TDD writes formal tests, which is much more involved.)
by nine_k
3/31/2026 at 9:09:06 AM
Documentation-driven development.
by teddyh
3/31/2026 at 1:24:58 PM
Second this. After having a chat with some coworkers, I've been attempting to write more thoughtful and longer commit messages. I had this exact experience yesterday: after I had staged my commit and was writing the message, I realized that there was a better way to do this change and redid the whole thing.
by sudo_rm
3/31/2026 at 2:29:51 AM
I'm in a slight disagreement with our CTO about the value of writing acceptance criteria yourself. When I write my own acceptance criteria, it's a useful tool forcing me to think through how the system ought to work. Definitely in agreement that writing is an important tool for clarifying thinking, not just generating context.
by gburgett
3/31/2026 at 1:14:52 AM
> I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea, that was crystal clear in my mind, fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise I also have numerous times where writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

I read somewhere that thinking, writing and speaking engage different parts of your brain. Whatever the mechanism, I often resolve issues midway while writing a report on them.
by protocolture
3/31/2026 at 1:27:18 AM
I noticed that when speaking on a subject I tend to explain it in simple terms, but when writing, I tend to get bogged down in details, pedantry and technical language.

I started publishing my writing recently, and I too often fall back into "debugging my mental model" mode, which, while extremely valuable for me, doesn't make for very good reading.
I guess the optimal sequence would be to spend a few sessions writing privately on a subject, to build a solid mental model, then record a few talks to learn to communicate it well.
-- Similarly, journaling on paper and with voice memos seems to give me a different perspective on the same problem.
by andai
3/31/2026 at 5:23:23 AM
I do this too, and I find this a great use for my LLMs. I write the full detail as a part of integrating. Claude or Copilot helps me craft the communication versions.
by ohthehugemanate
3/31/2026 at 6:00:15 PM
> I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea, that was crystal clear in my mind, fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise I also have numerous times where writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

I've found that I instinctively try to work around this by thinking in an explicit inner voice; i.e. I imagine that I can hear my thoughts put into words and spoken to myself. (Actually speaking aloud is somehow too embarrassing, even if nobody is around to hear.)
It still doesn't quite seem to work as well as actually writing.
by zahlman
3/31/2026 at 11:42:21 AM
The Feynman method of solving problems puts a similar emphasis on writing:

1. Write down the problem
2. Think really hard
3. Write down the solution
It's a bit tongue-in-cheek, but there is also truth to it. Step 1 is not optional and actually very important.
by mr_mitm
3/30/2026 at 11:55:34 PM
Nicely distilled.

However, the education system has done a disservice to how critical thinking actually happens.

When you write, then try to edit your thoughts (the written material), the editing part helps you clarify things and bring truth to power, i.e. whether you're bullshitting yourself and want to continue, or should choose another path.

The other part: in a world of answers, critical thinking is a result of asking better questions.

Writing helps one to ask better questions, preferably if you write in a dialogue style.
by dzonga
3/31/2026 at 1:03:14 AM
If you drop the premise of writing, drop the premise that you need something well written. Just give me the same information you would have given the LLM.
by ori_b
3/31/2026 at 1:42:40 AM
But a poorly written prompt is not a good prompt. What are you really going to do with a shit prompt? It's meta: we need better writers all the way down.
by apsurd
3/31/2026 at 7:07:22 AM
Whatever the prompt is, it is still the only information of value reflecting actual decisions made.

Everything coming out of an LLM on any prompt is either someone else's decisions or the same thing reworded in a different way.
by necovek
3/31/2026 at 2:13:09 AM
Yes. But if it's good enough for an LLM, it's good enough for me.

If you really feel the need, you can attach the LLM output as an appendix. I probably won't read it.
by ori_b
3/31/2026 at 5:03:26 AM
Do you really want five minutes of audio of me rambling, then some instructions for how to split it up and organise it?Plenty of people make LLMs make text longer, but writing a short accurate text with the essential points is much harder.
by tomjen3
3/31/2026 at 7:01:34 AM
What is the difference between you putting your 5-minute monologue into the LLM to summarize it versus me doing it?
by anon-3988
3/31/2026 at 4:39:14 PM
I know what I'm trying to say, so I can sanity-check the output. You can't, unless you listen to the monologue.

That's why I disagree with people who say "just give me whatever you gave the LLM." That's only useful if you, the writer of the prompt, have no intention of looking at the LLM output before sending it.
by antasvara
3/31/2026 at 3:20:07 AM
Do you really want to read the whole conversation between the author and the computer? I don't use AI to write prose, but if I did, I'd treat it like a critical editor, so reading all that would not save you time.
by esafak
3/31/2026 at 8:20:23 AM
Any time I stumbled on AI writing, in comments, work or articles, it was painfully obvious that not a single person had read it, including the author.
by yoz-y
3/30/2026 at 9:44:30 PM
For your context, I'm an AI hater, so understand my assumptions as such.

> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?
It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.
by delusional
3/31/2026 at 8:23:54 AM
For release notes in particular, I think AI can have value. This is because, more than a summary, release notes are a translation from code (and more or less accurate summaries) into prose.

AI is good at translation, and in this case it can have all the required context.

Plus it can be costly (time and tokens) to either “prompt it yourself” or read the code and all the commit logs.
by yoz-y
3/31/2026 at 7:11:20 AM
To me, the value I would look to extract from LLMs is turning the code changes into user-readable, concise release notes.

If you are coding with the help of LLMs, then release notes are your human-crafted prompt.

Basically, the intent is given as a decision somewhere, and that is human-driven.
by necovek
3/31/2026 at 12:01:08 AM
What better solution do you have in mind? This scenario is AI being used as a tool to eliminate toil. It’s not replacing human creativity, or anything like that.

If you have a problem with that, then you should also have a problem with computers in general.
But maybe you do have a problem with computers - after all, they regularly eliminate jobs, for example. In that case, AI is only special in its potentially greater effectiveness at doing what computers have always been used to do.
But most of us use computers in various ways even if we have qualms about such things. In practice, the same already applies to AI, and likely will for you too, in future.
by antonvs
3/31/2026 at 1:08:29 AM
It's not eliminating toil; it's externalizing it from the writer to the reader.

If writing something is too tedious for you, at least respect my time as the reader enough to just give me the prompt you used rather than the output.
by bandrami
3/31/2026 at 1:44:11 AM
In a lot of my AI assisted writing, the prompt is an order of magnitude larger than the output.Prompt: here are 5 websites, 3 articles I wrote, 7 semi-relevant markdown notes, the invitation for the lecture I'm giving, a description of the intended audience, and my personal plan and outline.
Output: draft of a lecture
And then the review, the iteration, feedback loops.
The result is thoroughly a collaboration between me and AI. I am confident that this is getting me past writer blocks, and is helping me build better arcs in my writing and lectures.
The result is also thoroughly what I want to say. If I'm unhappy with parts, then I add more input material, iterate further.
I assure you that I spend hours preparing a 10-minute pitch. With AI.
(This comment was produced without AI.)
by specvsimpl
3/31/2026 at 3:07:55 AM
Because it’s not totally clear from your comment: what part are you contributing in this process?
by jimbokun
3/31/2026 at 2:02:49 AM
Great example. Just give me the links you would give to the LLM. I also have an LLM and can use it if I want to, or I can read the links and notes. But I have zero interest in reading or hearing a lecture that you yourself find too tedious to write.
by bandrami
3/31/2026 at 3:02:25 AM
Performative nonsense.

You have less interest in sifting through multiple articles and wiki pages sent to you by a stranger with a prompt than in the one paragraph the same stranger selected as their curated point.

And pretending like you’d act otherwise is precisely the kind of “anti-AI virtue signaling” that serves as a negative mind virus.

AI is full of hype, but the delusion and head-in-sand reactions are worse by a mile.
by aeon_ai
3/31/2026 at 3:16:29 AM
Then let him curate it as his central point. If he finds even that too tedious to do, I absolutely have no interest in reading the output of a program he fed the context to (particularly since I also have access to that program).
by bandrami
3/31/2026 at 3:57:20 PM
> And pretending like you’d act otherwise

No pretending here. I don't ever ask an LLM for a summary of something which I then send to people, because I have more respect for my co-workers than that. Nor do I want their (almost certainly inaccurate) LLM summary. It's the 2020s equivalent of "let me Google that for you": I can ask the bag of words to weigh in myself; if I'm asking a person it's because I want that person's thoughts.
by bigstrat2003
3/31/2026 at 5:55:04 AM
The original comment was saying that the AI would be both the writer now and the reader in the future. That's how the toil is eliminated. Instead of reading or searching through a series of release notes, you can just ask questions about what you're specifically looking for.

> If writing something is too tedious for you, at least respect my time as the reader
"If comprehending something is too tedious for you..."
Seriously, don't jump to indignant rhetoric before you're sure you've understood the discussion.
by antonvs
3/31/2026 at 6:21:23 AM
In this scenario the AI _writer_ is redundant.

You might as well publish the prompt you were going to give to the writer and have the AI reader consume that directly.

Assuming you think any of this is a good idea, of course. Personally, I wouldn’t trust AI to interpret release notes for anything that I cared about.
by funnybeam
3/31/2026 at 8:34:14 AM
I responded to a similar point here: https://news.ycombinator.com/item?id=47584324

The original commenter was essentially describing something similar to what good agent harnesses already rely heavily on.
by antonvs
3/31/2026 at 6:27:38 AM
What's the point of the AI writer in that use case? Just send your prompt to my AI. And for that matter, since prompting is in plain English, why not just send your prompt directly to me, and I'll choose to prettify it through an AI or not, as I prefer.
by bandrami
3/31/2026 at 8:32:18 AM
The point is that it summarizes the context. It’s an important optimization, because context and tokens are both limited resources. I do something similar all the time when working with coding models: you’ve done a bunch of work, ask it to summarize it to the AGENTS.md file.

The more fully automated agents rely heavily on this approach internally. The best argument against it is that good harnesses will do something like this automatically, so you don’t need to explicitly do it.
Sending you the prompt wouldn’t help at all, because you’d have to reconstruct the context at the time the notes were written. Even just going back in version control history isn’t necessarily enough, if the features were developed with the help of an agent.
by antonvs
3/31/2026 at 8:35:19 AM
But I also have access to an AI that can summarize content. So why not just send me the content and the prompt you used? Or just the content, so I can summarize it however I want?
by bandrami
3/31/2026 at 2:21:39 AM
[dead]
by heyethan
3/31/2026 at 7:32:32 AM
The obvious better solution is to either a) not write those release notes, or b) figure out a release notes format and process that leads to useful release notes. Once it is useful, you can decide whether to automate it, and measure whether automation is still achieving the goal.

What OP did was: "we lacked communication, then created an ineffective process that achieved nothing, so we automated the ineffective process and pay a third party for doing it".

If you pay tokens for release notes that nobody reads, then you may just ... not pay tokens.
by watwut
3/30/2026 at 10:29:04 PM
This is kind of a fundamental issue with release notes. They are broadcasting lots of information, and only a small amount of it is relevant to any particular user (at least in my experience).

If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for the APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no-brainer. So it seems reasonable to me to have an AI do that for me as well.
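The filtering step doesn't even need to be smart to be useful. A minimal sketch (all names and note lines here are invented for illustration, not from any real vendor): keep only the release-note lines that mention an API your code actually calls, and let a human or a model handle what's left:

```python
def relevant_notes(release_notes: str, apis_we_use: set[str]) -> list[str]:
    """Return only the release-note lines that mention an API we call."""
    return [
        line
        for line in release_notes.splitlines()
        if any(api in line for api in apis_we_use)
    ]


# Illustrative notes from a hypothetical vendor release:
notes = """\
- parse_config: duplicate keys now raise an error
- internal scheduler refactor, no API change
- fetch_rows: new optional timeout parameter
"""

print(relevant_notes(notes, {"parse_config", "fetch_rows"}))
```

An LLM version of the assistant would replace the substring match with a judgment call ("does this change affect my usage?"), but the shape of the task is the same: broadcast in, personally relevant subset out.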
by dahfizz
3/31/2026 at 1:05:35 AM
I read a lot of release notes in my job, and the idea that that is some kind of noticeable time sink that needs to be streamlined is bizarre to me. Just read the notes.
by bandrami
3/31/2026 at 7:13:44 AM
If your assistant is technical enough to know which parts apply to you and which do not, they likely don't need you for the rest of the job either.

An LLM could do this by looking over the full codebase and release notes and producing a shorter summary, but probably at the cost of many tokens today.
by necovek
3/31/2026 at 3:04:49 AM
Or you could Ctrl-F.
by jimbokun
3/31/2026 at 12:26:13 PM
Sometimes PRDs might be boilerplate, but there have been times where I sat down thinking “I can’t believe these dumbasses want to foo a widget”, and then, when writing the user story, I got into their heads a little and realized that widgets are useless if they can’t be foo’d. It’s not the same if AI is just telling me, because amongst the fire hose of communication and documentation flying at me, AI is just another one. Writing it myself forces me to actually engage, even if only a little more than at a shallow level.
by clickety_clack
3/31/2026 at 11:42:59 AM
> I've long considered writing to be the "last step in thinking".

I think it's often useful to use writing all the way through the process of thinking through something, rather than just at the end.
by jamesrcole
3/31/2026 at 12:41:20 PM
Yes, not all "writing" is actually thinking, and a lot of what we call writing at work is really just ritualized context transfer.
by KolibriFly
3/30/2026 at 9:55:40 PM
> writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking

You see the same thing in teaching, perhaps even more because of the interactive element. But the dynamic in any case is the same. Ideas exist as a kind of continuous structure in our minds. When you try to distill that into something discrete, you're forced to confront lingering incoherence or gaps.
by user3939382
3/30/2026 at 11:51:57 PM
I never learned a subject faster than when I was suddenly forced to teach it!by WCSTombs
3/31/2026 at 12:13:25 AM
Same here. And when you encourage students to ask good questions, that goes double ... you're forced to see how important their new perspectives are, and to create your own!
3/31/2026 at 9:11:37 AM
I think of it not as "last step in thinking", but as "first contact with reality". Your mind is amazing and lying to you, filling in gaps and telling you everything is ok. The moment you try to export what's in your mind, math stops mathing. So writing is an important exercise.by figassis
3/30/2026 at 10:10:06 PM
> agent write release notes for your agent in the future...

I have been going back to verbose, expansive inline comments. If you put the "history" inline, it is context; if you stuff it off in some other system, it's an artifact. I can't tell you how many times I have worked in an old codebase that references a "bug number" in a long-dead tracking system.
by zer00eyz
3/30/2026 at 10:32:31 PM
But how do you deal with communicating that some library you maintain has a behavior change? People already need to know to look at your code in order to read your comments.by dahfizz
3/30/2026 at 11:49:16 PM
> communicating ... People

End users? Other devs? These two groups are not the same.

As an end user of something, I don't care about the details of your internal refactor, only the performance, features and solutions. As a dev looking at the notes, there is a lot more I want to see.
The artifact exists to inform about what is in this version when updating. And it can come easily from the commit messages, and be split for each audience (user/dev).
It doesn't change the fact that once you're in the code, that history, inline, is much much more useful. The commit message says "We fixed a performance issue around XXX". The inline comment is where you can put the reason FOR the choice made.
One comes across this pattern a lot in dealing with data (flow) or end-user inputs. It's that ugly chain of if/elseif/elseif... that you look at and wonder "why isn't this a simple switch", because that's what the last two options really are. Having clues as inline text is a boon to us, and to any agent out there, because it's simply context at that point. Neither of us has to make a (tool) call to look at a diff, or a ticket, or any number of other systems that we keep artifacts in.
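A toy sketch of that pattern (the event kinds, handlers, and the past bug mentioned in the comment are all invented for illustration): the inline comment carries the "why" that would otherwise live only in a commit message or a dead ticket tracker:

```python
def route(kind: str, payload: str) -> str:
    """Dispatch raw input events; a toy stand-in for the if/elseif chain above."""
    if kind == "click":
        return f"clicked:{payload}"
    elif kind == "key":
        return f"key:{payload}"
    # "paste" and "drop" look like one switch case, but both must have
    # their whitespace stripped first (a past bug, invented here for
    # illustration), which is why this stays an if/elif chain rather
    # than collapsing into a simple lookup.
    elif kind == "paste":
        return f"text:{payload.strip()}"
    elif kind == "drop":
        return f"text:{payload.strip()}"
    return "ignored"
```

Anyone (or any agent) reading this function gets the history in place, with no tool call to a diff or a ticket system.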
by zer00eyz
3/31/2026 at 6:46:03 AM
> I think it's going to be awhile before the full impact of AI really works it's [sic] way through how we work.

This was definitely not written by AI. Granted their many drawbacks, present-day AI engines avoid this classic grammatical error.
However! Future, more advanced AI engines will slather their prose with this kind of error, to conceal its origins.
by lutusp
3/30/2026 at 10:24:11 PM
> For these cases, I think we should just drop the premise altogether that you're writing.

Sure.
> If you need to write a proposal for something as a matter of ritual, give it AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
No, no, no. You don't need to take that step. Whatever bullet-point list you're feeding in as the prompt is the relevant artifact you should be producing and adding to the bug, or sharing as an e-mail, or whatever.
by vkou
3/31/2026 at 1:43:32 PM
[dead]
by bhekanik