3/12/2026 at 3:39:14 AM
Anecdote time! I had Codex GPT 5.4 xhigh generate a Rust proc macro. It's pretty straightforward: use sqlparser to parse a SQL statement and extract the column names of any row-producing queries.
It generated an implementation that worked well, but I hated the ~480 lines of code. The structure and flow were just... weird. It was hard to follow and I was seriously bugged by it.
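(The macro in the anecdote is Rust plus the sqlparser crate; purely as an illustration of the task itself, reading off the output column names of a row-producing query, here's a minimal Python sketch using the stdlib sqlite3 module instead. The toy schema and function name are hypothetical.)

```python
import sqlite3

def result_columns(sql: str) -> list[str]:
    """Get the output column names of a row-producing query by
    preparing it against an in-memory database (toy schema below)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    # Executing a SELECT sets cursor.description even with zero rows.
    cur = conn.execute(sql)
    return [d[0] for d in cur.description]

print(result_columns("SELECT id, name AS display_name FROM users"))
# → ['id', 'display_name']
```

The real macro does this statically over sqlparser's AST rather than by preparing the statement, but the contract is the same: SQL in, column names out.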
So I asked it to reimplement it with some simplifications I gave it. It dutifully executed, producing a result >600 lines long. The flow was simpler and easier to follow, but still seemed excessive for the task at hand.
So I rolled up my sleeves and started deleting code and making changes manually. A little bit later, I had it down to <230 lines with a flow that was extremely easy to read and understand.
So yeah, I can totally see many SWE-bench-passing PRs being functionally correct but still terrible code that I would not accept.
by cornstalks
3/12/2026 at 4:47:10 AM
If you've got some time, I highly recommend going through the exercise of trying to change the prompt in a way that would produce code similar to what you've achieved manually. Doing a similar exercise really helps to improve agent prompting skills, as it shows how changing parts of the prompt influences the result.
by SerCe
3/12/2026 at 5:45:33 AM
I haven’t had any luck prompting LLMs to “have taste.” They seem to over-fixate on instructions (e.g. golfing when asked for concise code) or require specifying so many details and qualifications that the results no longer generalize well to other problems.
Do you have any examples or resources that worked well for you?
by foltik
3/12/2026 at 6:47:24 AM
Yeah, prompting doesn't work for this problem, because the entire point of an LLM is you give it the what and it outputs the how. The more how you have to condition it with in the prompt, the less profitable the interaction will be. A few hints are OK, but doing all the work for the LLM tends to lead to negative productivity.
Writing prompts and writing code takes about the same amount of time, for the same amount of text, plus there's the extra time that the LLM takes to accomplish the task, and review time afterwards. So you might as well just write the code yourself if you have to specify every tiny implementation detail in the prompt.
by zarzavat
3/12/2026 at 7:42:34 AM
Makes me think of this CommitStrip comic: https://i.xkqr.org/itscalledcode.jpg (mirrored from the original due to TLS issues with the original domain.)
A guy with a mug comes up to a person standing with their laptop on a small table. The mug guy says, "Some day we won't even need coders any more. We'll be able to just write the specification and the program will write itself."
Guy with laptop looks up. "Oh, wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more!"
Guy with mug takes a sip. "Exactly!"
Guy with laptop says, "And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?"
"Uh... no..."
"Code. It's called code."
by kqr
3/12/2026 at 3:10:27 PM
You know, this makes me wonder... is anybody actually prompting LLMs with pseudocode rather than an English specification? Could doing so result in code that's more true to the original pseudocode?
by Sophira
3/13/2026 at 7:16:27 PM
I’m not sure if it went anywhere, but I remember there was this attempt at one point called SudoLang: https://medium.com/javascript-scene/sudolang-a-powerful-pseu...
by dyarosla
3/13/2026 at 1:55:05 PM
You can give the macro-structure using stubs, then ask the LLM to fill in the blanks.
The problem is that it doesn't work too well for the meso-structure.
Models tend to be quite good at the micro-structure because they've seen a lot of it already, and the macro-structure can easily be prompted, but the levels in between are what distinguishes a good vs bad model (or human!).
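The stubs-then-fill approach might look like this in Python (all names are hypothetical): the human fixes the skeleton and the orchestration, and the model is only asked to replace the NotImplementedError bodies.

```python
def load_records(path: str) -> list[dict]:
    """Stub: parse the input file into a list of records."""
    raise NotImplementedError  # micro-structure: left for the LLM

def summarize(records: list[dict]) -> dict:
    """Stub: aggregate the records into a summary."""
    raise NotImplementedError  # micro-structure: left for the LLM

def main(path: str) -> dict:
    # Macro-structure: the human has already decided the overall flow.
    # The meso-structure (how each stub is internally organized) is
    # exactly the part the comment above says models handle worst.
    return summarize(load_records(path))
```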
by zarzavat
3/12/2026 at 10:45:50 AM
Goodhart's Law of Specification: When a spec reaches a state where it's comprehensive and precise enough to generate code, it has fallen out of alignment with the original intent.
Of course there are some systems where correctness is vital, and for those I'd like a precise spec and proof of correctness. But I think there's a huge bulk of code where formal specification impedes what should be a process of learning and adapting.
by datastoat
3/12/2026 at 11:16:59 AM
My dream antiprogram is a specification compiler that interprets any natural language and compiles it to a strict specification. But on any possible ambiguity it gives an error: "?"
This terse error was found to be necessary so as not to overwhelm the user with pages and pages of decision trees enumerating the ambiguities.
by keybored
3/12/2026 at 1:49:24 PM
Openspec does this. But instead of "?" it has a separate Open Questions section in the design document. In Codex CLI, if you first go into plan mode it will ask you open questions before it proceeds with the rest.
The UX is there, and for small things it does work for me, but there is still some way to go before LLMs truly catch the major issues.
by newswasboring
3/12/2026 at 2:10:37 PM
Bless our interesting times.
by keybored
3/12/2026 at 8:24:24 AM
The goal would be to write it as a reusable prompt. This is what AGENT.md is for.
by FeepingCreature
3/12/2026 at 12:46:41 PM
> the entire point of an LLM is you give it the what and it outputs the how
I'm still struggling to move past the magic trick of guessing what characters come next; doesn't ascribing it an understanding of "how" imply understanding?
by pricechild
3/12/2026 at 7:29:46 AM
> Do you have any examples or resources that worked well for you?
Using this particular example, if you simply paste the exact code into the prompt, the model should be able to reproduce it. Now, you can start removing the bits and see how much you can remove from the prompt, e.g. simplify it to pseudocode, etc. Then you can push it further and try to switch from the pseudocode to the architecture, etc.
That way, you'll start from something that's working and work backwards rather than trying to get there in the absence of a clear path.
by SerCe
3/12/2026 at 7:57:06 AM
That’s an interesting approach, but what do you learn from it that is applicable to the next task? Do you find that this eventually boils down to heuristics that generalize to any task? It sounds like it would only work because you already put a lot of effort into understanding the constraints of the specific problem in detail.
by tobr
3/12/2026 at 9:31:08 AM
What worked for me was Gemini 3 Pro (I guess 3.1 should work even better now) with the prompt "This code is unnecessarily complicated. Simplify it, but no code golf". This decreased code size by about 60 %. It still did a bit of code-golfing, but it was manageable.
It is important to start a new chat so the model is not stuck in its previous mindset, and it is beneficial to have tests to verify that the simplified code still works as it did before.
Telling the model to generate concise code did not work for me, because LLMs do not know beforehand what they are going to write, so they are rarely able to refactor existing code to break out common functionality into reusable functions. We might get there eventually. Thinking models are a bit better at it. But we are not quite there yet.
by johndough
3/12/2026 at 12:46:40 PM
I wonder if it helps at all to first tell the agent to write the APIs/function signatures, and then tell the agent to implement them.
by catlifeonmars
3/12/2026 at 10:34:45 AM
I have a stupid solution for this which is working wonders. It does not help to tell the LLM "don't do this pattern". I literally make it write a regex-based test which looks for that pattern and fails the test.
For example, I am developing a game using GDScript, and LLMs (including Codex and Claude) keep making scripts with no class names and then loading them with @preload. I hate this, and it's explicitly mentioned in my godot-development skill. What agents can't stand is a failing test. Feels a bit like enforcing rules automatically.
This is a stupid idea but it works wonders on giving taste to my LLM. I wonder if I should open source that test suite for other agentic developers.
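A sketch of that kind of enforcement test, written here in Python for illustration (the banned pattern, file extension, and directory are stand-ins for whatever the project actually forbids):

```python
import re
from pathlib import Path

# Hypothetical banned pattern: scripts loaded via preload("....gd")
# instead of being referenced through a class_name.
BANNED = re.compile(r'preload\(\s*["\'][^"\']*\.gd["\']\s*\)')

def find_violations(root: str) -> list[str]:
    """Return every line in *.gd files under root that uses the banned pattern."""
    hits = []
    for path in sorted(Path(root).rglob("*.gd")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if BANNED.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

def test_no_preload_of_scripts():
    # A failing test is the one thing the agent reliably reacts to.
    assert find_violations("scripts/") == [], "use class_name, not preload()"
```

The same shape works for any style rule you can express as a regex; the agent sees a red test instead of an instruction it can ignore.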
by newswasboring
3/13/2026 at 8:01:04 AM
I really should spend some time analyzing what I do to get the good output I get.
One thing that is fairly low effort that you could try is to find code you really like and ask the model to list the adjectives and attributes that that code exhibits. Then try them in a prompt.
With LLMs generally you want to adjust the behavior at the macro level by setting things like beliefs and values, vs at the micro level by making "rules".
By understanding how the model maps the aspects that you like about the code to language, that should give you some shorthand phrases that give you a lot of behavioral leverage.
Edit: Better yet, give a fresh context window the "before" and "after" and have it provide you with contrasting values, adjectives, etc.
by XenophileJKO
3/12/2026 at 12:40:33 PM
"Concise" isn't specific enough. I've primed mine on the basic architecture I want: imperative shell/functional core, don't mix abstraction levels in one function, each function should be simple to read top-to-bottom with higher-level code doing only orchestration with no control flow. Names should express business intent. Prefer functions over methods where possible. Use types to make illegal states unrepresentable. RAII. Etc.
You need to think about what "good taste" is to you (or find others who have already written about software architecture and take their ideas that you like). People disagree on what that even means (e.g. some people love Rails. To me a lot of it seems like the exact opposite of "good taste").
by ndriscoll
3/12/2026 at 11:51:44 AM
I spend much more time refactoring than creating features (though it is getting better with each model). My go-to approach is to use Claude Code Opus 4.6 for writing and Gemini 3.1 Pro for cleaning up. I feel that doing it in just one stage is rarely enough.
A lot of prompts about finding the right level of abstraction, DRY, etc.
An earlier example (Opus 4.5 + Gemini 3 Pro) is here: https://github.com/stared/sc2-balance-timeline
I tried as well to just use Gemini 3 Pro (maybe it's the model, maybe the harness); it was not nearly as good at writing, but way better at refining.
by stared
3/12/2026 at 10:10:24 AM
I actually don’t think golfing is such a bad thing. Granted, it will first handle the low-hanging fruit like variable names etc., but if you push it hard enough it will be forced to think of a simpler approach. Then you can take a step back and tell it to fix the variable names, formatting etc. With the caveat that a smaller AST doesn’t necessarily mean simpler code, but it’s a decent heuristic.
by brap
3/12/2026 at 3:45:41 PM
Have you tried meta-prompts, e.g. "Rewrite the prompt to improve the perceived taste and expertise of the author"?
by irthomasthomas
3/12/2026 at 7:13:09 AM
I appreciate that your message is a good-natured, friendly tip. I don't mean for the following to crap on that. I just need to shout into the void:
If I have some time, the last thing I want to do with it is sharpen prompting skills. I can't imagine a worse or more boring use of my time on a computer, or a skill I want less.
Every time I visit Hacker News I become more certain that I want nothing to do with either the future the enthusiasts think awaits us or the present that they think is building towards it.
by globnomulous
3/12/2026 at 7:33:01 AM
While I somewhat understand the impact on the craft, the agents have allowed me to work on projects that I would never have had enough time to work on otherwise.
by SerCe
3/12/2026 at 7:30:51 AM
You don't need to learn anything; it needs to learn from you. When it fails, don't correct it out of band, correct it in the same UI. At the end say "look at what I did and create a proposed memory with what you learned" and, if it looks good, have it add it to memories.
by vasco
3/12/2026 at 10:37:08 AM
> change the prompt in a way that would produce code similar to what you've achieved manually.
The problem is that I don't know what I'll achieve manually before attempting the task.
by laserlight
3/13/2026 at 4:28:41 PM
This better reflects what I thought about the other day. You either let clankers do their thing and then bake in your implementation on top, or you think it through and make them do it. But at the end of the day you've still gotta THINK of the optimal solution and state of the code, at which point, do clankers do anything aside from saving you a bunch of keypresses, and maybe catching a couple of bugs?
by Bridged7756
3/12/2026 at 11:47:53 AM
Also useful to encode into the steering of your platform. The incremental aspect of many little updates really helps picking up speed by reducing review time.
A big-bang approach could be a start, but a lot of one-line guidance about specific things you don't want to see stacks up real fast.
by avereveard
3/13/2026 at 8:39:24 AM
My mildly amusing anecdote is that, whenever Claude Code produces something particularly egregious, I often find it sufficient to reply with just "wtf?" for it to present a much more sensible version of the code (which often needs further refinement, but that's another story...)
by aix1
3/12/2026 at 11:24:36 AM
Indeed. I have a few colleagues, and they constantly try to push these long, convoluted functions which look like

    is_done = False
    while not is_done:
        if pattern1:
            ...
        if pattern2:
            ...
        if matched == "SUCCESS":
            is_done = True
            break
        if pattern3:
            ...

It's usually correct but extremely hard to follow, and reminds me of the good old asm code with convoluted goto's.
And the colleagues tend to do reviews with the help of the agents, so they don't even care to read this mess.
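For contrast, here is a sketch of how that kind of loop reads without the is_done flag (the pattern names are hypothetical stand-ins, since the original bodies are elided): each branch exits directly, so every path can be followed top to bottom.

```python
def process(events):
    # Hypothetical flag-free restructuring of the loop above:
    # instead of setting is_done and breaking, return at the point
    # where the decision is actually made.
    for event in events:
        if event == "pattern1":
            continue  # handle pattern 1, keep scanning
        if event == "pattern2-success":
            return True  # replaces: matched == "SUCCESS"; is_done = True; break
        if event == "pattern3":
            continue  # handle pattern 3, keep scanning
    return False

print(process(["pattern1", "pattern2-success"]))  # → True
print(process(["pattern3"]))                      # → False
```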
by ernst_klim
3/12/2026 at 10:43:47 AM
I reported a similar case of mine several days ago [0]. I was able to achieve better quality than Claude Code's 624 lines of spaghetti code in 334 lines of well-designed code. In a previous case, I rewrote ~400 lines of LLM-generated code in 130 lines.
by laserlight
3/12/2026 at 6:32:56 AM
Had the same problem with a Python project. Just for the hell of it I tried to have it implement a simple version of a proxy I've made in the past. What was finally produced "technically" worked, but it was a mess. It suppressed exceptions all over the place, it did weird shit with imports it couldn't get to work, and the way it managed connection state was bizarre.
It has a third-year college student's approach to "make it work". It can't take a step back and reevaluate a situation, or determine a new path forward; it just hammers away endlessly with whatever it's trying until it can technically be called "correct".
by scuff3d
3/12/2026 at 7:45:43 AM
When I benchmark LLMs on text adventures, they reason like four-year-olds but have the world's largest vocabulary and infinite patience. I'm not surprised this is how they approach programming too.
by kqr
3/12/2026 at 8:32:16 AM
> It has a third-year college student's approach to "make it work". It can't take a step back and reevaluate a situation, or determine a new path forward; it just hammers away endlessly with whatever it's trying until it can technically be called "correct".
OH! Yeah, I think this is the exact bad feeling I've gotten whenever I've tried testing these things before, except without clear and useful feedback like compiler error messages or something. I remember when I used to code/learn like that early on and... it's not fun now. I also don't think it's really solvable.
by duskdozer
3/12/2026 at 7:43:24 PM
Yeah, it's really funny to watch. They'll get stuck on a specific method call or a specific import, even if you tell them to read the docs. Doesn't matter if there's a better approach, or the method only exists for some obscure edge case, or the implementation runs counter to the design of the API; if they can hammer the round peg into the square hole, they'll do it.
They also just... ignore shit. I have explicit rules in the repo I'm using an agent for right now, that it is for planning and research only, and that unless asked specifically it should not generate any code. It still tries to generate code 2 or 3 times a session.
by scuff3d
3/12/2026 at 8:27:40 AM
We’re heading for a world of terrible code that can only be maintained by extremely good coding agents and is pretty much impossible for a human to really understand.
The days of the deep expert, who knew the codebase inside out and had it contained in their head, are coming to an end.
by iamflimflam1
3/12/2026 at 10:40:21 AM
> We’re heading for a world of terrible code that can only be maintained by extremely good coding agents and are pretty much impossible for a human to really understand.
I once figured out the algorithm of a program written in a one-instruction ISA. I think the instruction was three-address subtraction.
In my opinion, you overestimate the ability of coding agents to, well, code, and underestimate the ability of humans to really understand code.
The chart in the article we discuss appears to plateau if one excludes the sample from 2024-07. So we are not quite heading there; we are plateauing, if I may.
by thesz
3/12/2026 at 8:36:44 AM
That was the exception, not the rule.
by pas
3/12/2026 at 8:54:25 AM
Probably more like the long tail of software - software that was created for a particular purpose in a particular domain by a single person in the company who also happened to know programming - maybe just as Excel macros.
I strongly assume the long tail is shifting and expanding now, and will eventually mostly be software for one-off purposes, authored by people who don't know how to code and probably have a poor understanding of how it actually works.
by tveita
3/12/2026 at 9:59:07 AM
Then this is an era of snake oil, because customers aren’t going to put up with that shit for long.
by hinkley
3/12/2026 at 11:16:13 AM
They’ve been putting up with crappy software for two decades (at least).
by Gud
3/13/2026 at 4:06:52 PM
Crappy software that works. Unfortunately for all clanker evangelists, you need humans to review all the clanker spaghetti, manage infra, do firefighting, and translate business requirements into working systems (a gross oversimplification of what that entails). Code review alone takes a huge toll on our bandwidth.
by Bridged7756
3/12/2026 at 11:53:40 AM
Five decades, but I’m talking about an unprecedented degree of crappiness.
by hinkley
3/12/2026 at 6:51:16 PM
I had a similar experience yesterday. Was working on some async stream extensions. Wrote a couple proofs of concept to benchmark, and picked one based on the results. I almost never use LLMs to write code, but out of curiosity, asked whatever the newest Claude is to make it with all the real prod requirements, and it spit out over 400 lines of code, lots of spaghetti, with strange flow and a lot of weird decisions. Wrote it myself with all the same functionality in right around 170 lines.
Also had a similar experience in the past weeks reviewing PRs written with LLMs by other engineers in languages they don't know well, one in Rust and one in Bash. Both required a lot of rounds of revision and a couple of pairing sessions to get to a point where we got rid of the extraneous bits and made it read normally. I'm glad the tool gave these engineers the confidence to work in areas they wouldn't normally have felt comfortable contributing to, but man do I hate the code that it writes.
by mplanchard
3/12/2026 at 1:04:40 PM
Once my code exists and passes tests, I generally move on to having it iteratively hunt for bugs, security issues, and DRY code-reduction opportunities until it stops finding worthwhile ones.
This doesn't always work as well as I'd like, but largely does enough. Conversely, doing it as I go has been a waste of time.
by lmeyerov
3/12/2026 at 11:22:02 AM
Happens all the time. I usually propose a detailed structure myself (e.g. do it in three phases, add 3 functions + an orchestrator, make sure the structure is valid before writing the function bodies), or iterate on a detailed plan before implementing code.
Now some people argue that terrible code is fine nowadays, because humans won't read it anymore...
by yodsanklai
3/12/2026 at 7:44:26 AM
I wonder why they fail in this specific way. If you just let them do stuff, everything quickly turns to spaghetti. They seem to overlook obvious opportunities to simplify things, or to see a pattern and follow through. The default seems to be to add more, rather than rework or adjust what’s already in place.
by tobr
3/12/2026 at 8:46:15 AM
I suspect it has something to do with a) the average quality of code in open source repos and b) the way the reward signal is applied in RL post-training - does the model face consequences of a brittle implementation for a task?
I wonder if these RL runs can extend over multiple sequential evaluations, where poor design in an early task hampers performance later on, as measured by the amount of tokens required to add new functionality without breaking existing functionality.
by samdjstephens
3/12/2026 at 11:04:41 AM
Yeah, I've been wondering if the increasing coding RL is going to draw models towards very short-term goals, relative to just learning from open source code in the wild.
by foo42
3/12/2026 at 12:58:07 PM
To me this seems like a natural consequence of the next-token prediction model. In one particular prompt you can’t “backtrack” once you’ve emitted a token. You can only move forwards. You can iteratively refine (e.g. the agent can one-shot itself repeatedly), but the underlying mechanism is still present.
I can’t speak for all humans, but I tend to code “nonlinearly”, jumping back and forth and typically going from high level (signatures, type definitions) to low level (fill in function bodies). I also do a lot of deletion as I decide that actually one function isn’t needed, or if I find a simpler way to phrase a particular section.
Edit: in fact, thinking on this more, code is _much_ closer to a tree than a sequence of tokens. Not sure what to do with that, except maybe to try a tree-based generator which iteratively adds child nodes.
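That intuition is easy to see with Python's own stdlib: the ast module already treats a program as a tree that can be built or edited node by node and serialized back to text in one pass. A small illustration:

```python
import ast

source = "def f(x):\n    return x + 1\n"

# The same program as a tree: Module -> FunctionDef -> Return -> BinOp.
tree = ast.parse(source)
print(ast.dump(tree, indent=2))

# A tree-based generator could grow such a structure by adding child
# nodes, then emit the final text only once the tree is complete:
print(ast.unparse(tree))
```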
by catlifeonmars
3/12/2026 at 3:06:59 PM
This would make sense to me as an explanation when it only outputs code. (And I think it explains why code often ends up subtly mangled when moved in a refactoring, where a human would copy paste, the agent instead has to ”retype” it and often ends up slightly changing formatting, comments, identifiers, etc.)But for the most part, it’s spending more tokens on analysis and planning than pure code output, and that’s where these problems need to be caught.
by tobr
3/14/2026 at 3:48:20 PM
I feel like planning is also inherently not sequential. Typically you plan in broad strokes, then recursively jump in and fill in the details. On the surface it doesn’t seem to be all that much different from codegen. Code is just more highly specified planning. Maybe I’m misunderstanding your point?
by catlifeonmars
3/12/2026 at 7:48:36 AM
All it does is generate soup. Some of which may taste good.
There is no thinking, no matter what marketing tells you.
by OtomotO
3/12/2026 at 9:07:12 AM
LLMs are next-token predictors. Their core functionality boils down to simply adding more stuff.
3/12/2026 at 12:29:13 PM
They do what you tell them to. If you regularly tell them to look for opportunities to clean up/refactor the code, they will.
by logicchains
3/12/2026 at 5:37:29 AM
Yeah, I had a similar experience on a smaller scale, reducing a function from 125 lines to 25.
by mvanzoest
3/12/2026 at 9:01:36 AM
xhigh effort is actually pretty terrible for the 5.2/5.3/5.4 models. Stick to medium/high, as it overthinks less.
by cbg0
3/12/2026 at 7:09:53 AM
Very familiar experience.
by jlandersen
3/12/2026 at 7:11:15 AM
[dead]
by iijaachok