3/9/2026 at 12:27:48 PM
The easiest thing to do is to have the LLM leave its own comments. This has several benefits, because the LLM will encounter its own comments when it passes over this code again.
> - Apply comments to code in all code paths and use idiomatic C# XML comments
> - <summary> be brief, concise, to the point
> - <remarks> add details and explain "why"; document reasoning and chain of thought, related files, business context, key decisions.
> - <param> constraints and additional notes on usage
> - inline comments in code sparingly where it helps clarify behavior
(I have something similar for JSDoc for JS and TS.) Several things I've observed:
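As an illustration, a hypothetical JSDoc version of these rules might look like the following. The function, the billing policy, and the referenced file are all invented for the example; this is a sketch of the convention, not code from the author's project:

```typescript
/**
 * Calculates the late fee for an overdue invoice.
 *
 * @remarks
 * Fees are capped at 25% of the invoice total per the (hypothetical)
 * 2024 billing policy (see billing/policy.md). We deliberately round
 * down to avoid disputes over cent-level overcharges.
 *
 * @param invoiceTotal - Invoice amount in cents; must be non-negative.
 * @param daysOverdue - Whole days past the due date; 0 means on time.
 * @returns The late fee in cents.
 */
function lateFee(invoiceTotal: number, daysOverdue: number): number {
  if (daysOverdue <= 0) return 0;
  // 1% per overdue day, capped at 25% of the invoice total.
  const uncapped = Math.floor(invoiceTotal * 0.01 * daysOverdue);
  const cap = Math.floor(invoiceTotal * 0.25);
  return Math.min(uncapped, cap);
}
```

The `@remarks` block carries the "why" and the business context; the inline comment is kept to a single clarifying line, matching the "sparingly" rule above.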
1. The LLM is very good at updating these comments when it passes over the code again in the future.
2. Because the LLM is updating this, I can deduce by proxy that it is therefore reading this. It becomes a "free" way to embed the past reasoning into the code. Now when it reads it again, it picks up the original chain-of-thought and basically gets "long term memory" that is just-in-time and in-context with the code it is working on. Whatever original constraints were in the plan or the prompt -- which may be long gone or otherwise out of date -- are now there next to the actual call site.
3. When I'm reviewing the PR, I can now see what the LLM is "thinking" and understand its reasoning to see if it aligns with what I wanted from this code path. If it interprets something incorrectly, it shows up in the `<remarks>`. Through the LLM's own changes to the comments, I can see in future passes if it correctly understood the objective of the change or if it made incorrect assumptions.
by CharlieDigital
3/9/2026 at 3:18:24 PM
In my experience, LLM-added comments are too silly and verbose. It's going to pollute its own context with nonsense, and its already limited ability to make sense of things will collapse. LLMs have plenty of random knowledge which is occasionally helpful, but they're nowhere near the standard of proper literacy of even an ordinarily skilled coder, let alone Dr. Knuth, who defined literate programming in the first place.
by zozbot234
3/9/2026 at 3:47:49 PM
The output of an LLM is a reflection of the input and instructions. If you are getting silly and verbose comments, then consider improving your prompt.
by CharlieDigital
3/9/2026 at 6:48:03 PM
Almost nothing in a Claude Code session has to do with "your prompt"; it works for an hour afterwards and mostly talks to itself. I've noticed that if you give it small corrections, it will leave nonsensical comments referring to your small correction as if it's something everyone knows.
by astrange
3/9/2026 at 7:02:42 PM
It has everything to do with your prompt, and it's why Claude Code has a plan mode: the quality of your planning, prompting, and inputs significantly affects the output.

Your assertion, then, is that even a one-sentence prompt is as good as a five-section markdown spec with detailed coding-style guidance and a feature-by-feature specification. This is simply not true; the detailed spec and guidance will always outperform the one-sentence prompt.
by CharlieDigital
3/9/2026 at 12:44:02 PM
How do you deal with the comments sometimes being relatively noisy for humans? I tend to be annoyed by comments that overly refer to a past correction prompt and don't really make sense by themselves, but then again this IS probably the highest-value information, because these are exactly the things the LLM will stumble on again.
by solarkraft
3/9/2026 at 12:56:11 PM
> How do you deal with the comments sometimes being relatively noisy for humans?
To an extent, that is a function of tweaking the prompt to get the desired level of detail and signal-to-noise ratio from the LLM, e.g. by constraining the word count it can use for comments.

We have a small team of approvers reviewing every PR. Since we can't see the original prompt or the flow of interactions with the agent, this approach lets us see that by proxy when reviewing the PR, so it is immensely useful.
This applies even to things like enum values. Why is this enum here? What is its use case? Is it needed? Having the reasoning dumped out allows us to understand what the LLM was "thinking".
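For instance, a sketch of what that dumped-out reasoning might look like on an enum member, in the JSDoc style mentioned earlier (the enum and the operational backstory are invented for illustration):

```typescript
/** Delivery states a shipment can be in. */
enum ShipmentStatus {
  Pending = "PENDING",
  /**
   * @remarks
   * "HELD" exists only for customs inspections; ops requested it so
   * that held shipments don't show up as generic delays on the
   * dashboard. Do not reuse it for payment holds.
   */
  Held = "HELD",
  Delivered = "DELIVERED",
}
```

A reviewer (or a future agent) reading this sees immediately why the value exists and what it must not be used for, without needing the original prompt.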
(Of course, the biggest benefit is still that the LLM sees the reasoning from an earlier session again when reading the code weeks or months later).
by CharlieDigital
3/9/2026 at 2:23:17 PM
Inline comments in the function body: for humans. Function docs: for AI, with a clear trigger ("use when X or Y") and usage examples.
by stingraycharles
3/9/2026 at 2:35:20 PM
I really hate its tendency to leave those comments as well. I seem to have coached it out of this with some claude.md instructions, but they still happen on occasion.
by JamesSwift
3/9/2026 at 2:49:46 PM
Interesting observation. After a human is done writing code, they still have a memory of why they made the choices they made. With an LLM, the context window is severely limited compared to a brain, so this information is usually thrown away when the feature is done, and you cannot go back and ask the LLM why something is the way it is.
by ulrikrasmussen
3/9/2026 at 3:08:08 PM
Yup; in the moment, you can just have the LLM dump its reasoning into the comments (we use idiomatic `<remarks></remarks>` for C# and JSDoc `@remarks`). Future agents then see the past reasoning as the LLM greps through the code. This is especially good for non-obvious context like business- and domain-level decisions that were in the prompt but may not show up in the code.
I can't prove this, but I'm also guessing that this improves the LLM's output: since it writes the comment first and then writes the code, it is producing a mini-spec right before it outputs the tokens for the function (this would make an interesting research paper).
by CharlieDigital