4/8/2026 at 9:12:52 PM
Liked the article in general, but:

> These apps will win awards at the next all-hands. In two years they’ll be unmaintainable tech debt some poor soul inherits and rewrites from scratch.
Huge assumption/prediction that I think is actually just wrong. There's this weird assumption from a certain crowd, never justified or explained, that tech debt accrued by AI is now, and will forever be, impossible for AI to address, and will for some reason require humans to fix. Working at pace with agents I accrue tech debt every day, then go through the code nightly, again with agents, to clean and tidy everything up.
The more I see this view espoused, the more bizarre it seems. People's assumption seems to be "if AI couldn't one-shot this perfectly the first time, then it's useless to try to have it go back over the codebase and identify and address issues". This doesn't match my personal experience at all: second or third passes over code with CC or Codex are almost always helpful and weed out critical issues. But I'm open to hearing from the rest of the HN crowd on their experiences with this.
by gbnwl
4/8/2026 at 9:17:28 PM
Tech debt used here is likely a catch-all term, and you're disagreeing, reasonably so, with one definition. I think human understanding of the surface area of a company is already very unwieldy. AI balloons that surface area. At some point, using more AI to solve AI is reasonable! But to whatever extent a human needs to interface with and manage this world, that's the accrued debt.
by apsurd
4/8/2026 at 9:18:44 PM
AIs don’t produce well organized code. They duplicate effort, which is tech debt. Maybe one day they will be able to clear their own tech debt. And who knows, maybe they’ll still be heavily subsidized by VC money then.
by hperrin
4/8/2026 at 11:57:07 PM
You can organise the code well once, template that, and put guardrails in place for it to follow the structure you and the team have agreed is good. The engineering task becomes building the system that is capable of building the system to a high standard.
by latentsea
4/9/2026 at 9:37:11 AM
It will usually just keep on adding workarounds for workarounds. Tech debt increases.by vrighter
4/8/2026 at 11:45:02 PM
Having them clear it is trivial. I have my harness refactor automatically on a steady cadence, something I could never afford to take the time to do manually.by vidarh
4/9/2026 at 4:12:17 AM
So you have an AI refactor AI-generated code? What am I missing here? If AI is the cause of the tech debt because it doesn't write great code, won't you just end up with more tech debt if you ask AI to refactor it?
by patch_dev
4/9/2026 at 5:46:58 AM
If a human produces tech debt, do you think a human can't refactor? Most of the time a human works over code multiple times, and still produces tech debt.
Give an AI agent enough time, by prompting it multiple times, and explicit instructions to look for and address tech debt of various forms, and it will.
by vidarh
4/9/2026 at 8:01:29 AM
Yeah, I must be missing something again. Comparing human to AI here seems fundamentally wrong. A human will learn over time and improve their mental model of a problem and their ability to code. An AI agent is, for the most part, fixed by its model. I just don't see how pointing an agent at AI-generated code to refactor, without direct human guidance, results in better code. Maybe you can describe the various forms of tech debt you are talking about?
by patch_dev
4/9/2026 at 8:31:51 AM
> Yeah I must be missing something again. Comparing human to AI here seems to be fundamentally wrong. A human will learn over time and improve their mental model of a problem and ability to code. An AI agent for the most part is fixed by its model. I just don't see how pointing an agent at AI generated code to refactor without direct human guidance results in better code.

There is no need to improve their mental model of a problem or ability to code to recognise the refactoring opportunities that already exist in the code. It only takes sufficient skill, and effort invested in refactoring. The way to get a model to invest that effort is to ask it. As many times as you're willing to.
> Maybe you can describe what the various forms of tech debt are that you are talking about?
Any. Whether or not you need to prompt much to address it depends on consistency. In general I have a simple agent whose instructions are just to look for opportunities to refactor, and to do one targeted refactor per run. All the frontier models know well enough what good looks like that it is unnecessary to give them more than that.
The best way of convincing yourself of this, is to try it. Ask Claude Code or Codex to "Explore the code base and create a plan for one concrete refactor that improves the quality of the code. The plan should include specific steps, as well as a test plan." Repeat as many times as you care to, or if in Claude Code, run /agents and tell Claude Code you want it to create an agent to do that. Then tell it to invoke it however many times you want to try.
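The single-purpose refactor agent described above can be sketched as a Claude Code subagent definition - a markdown file with YAML frontmatter placed under `.claude/agents/` (the file name, agent name, and exact wording here are illustrative assumptions, not the commenter's actual setup):

```markdown
---
name: refactor-scout
description: Finds and performs one targeted refactor per invocation
---

Explore the code base and create a plan for one concrete refactor that
improves the quality of the code. The plan should include specific
steps, as well as a test plan. Execute the plan, run the tests, and
stop after that single refactor.
```

Invoking such an agent repeatedly, one small refactor per run, keeps each change reviewable instead of producing one sweeping rewrite.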
by vidarh
4/8/2026 at 9:20:07 PM
This is just false.by kvirani
4/8/2026 at 9:18:58 PM
This also seems to implicitly assume that ai models won't get better - a bet I am not willing to make currently..by deklesen
4/8/2026 at 9:25:36 PM
Agreed. The confidence people have to predict what these tools will be capable of two years down the line, when it's barely been over a year since Claude Code was first released, is astounding.by gbnwl
4/8/2026 at 9:27:16 PM
Models get better with money (reinvestment). But if there aren't enough returns soon, the money will eventually dry up for OAI and Anthropic, and Google will not be trusted with its cash balance.
It's amazing how people here think that money is a plaything and this dance can go on forever. It can't and won't, and the fear-induced marketing doesn't work forever either.
by e3df
4/9/2026 at 1:59:56 AM
This is a false equivalence. Models get better with more & better data. Both more data and better data are very expensive. Procuring... Handling... All of the above...
You can spend bottomless piles of cash and, by not doing the right things, not get there. I can count on one hand the number of times I've seen business/investor incentives line up with R&D incentives.
There's no guarantee that there is enough or good-enough data, regardless of how much money you have.
by busterarm
4/8/2026 at 9:20:57 PM
Agreed, but it's a bit nuanced. I'm working on a fairly complex project now in a domain where I have no technical experience. The first iteration of the project was complete garbage, but it was garbage mainly because I asked for things to be done and never asked HOW they should be done. Result? Complete, utter garbage. It kinda, sorta worked, but again, I would never use it in anything important.

Then we went through ~10 complete rewrites based on the learnings from previous attempts. As we went through these iterations, I became much more knowledgeable about the domain - because I saw failure points, because I read the resulting code, and because I asked the right questions.
Without AI, I would likely have given up after iteration 2, and certainly would not do 10 iterations.
So the nuance here is that iterating and throwing away the entire thing is going to become much cheaper, but not without an engineer being in the loop, asking the right questions.
Note: each iteration went through dual reviews by Codex and Opus at each phase, with every finding fixed and the final review saying everything is perfect, the best thing on earth.
by gck1
4/9/2026 at 12:01:50 AM
I'm seeing a similar process, but on large teams, and still finding the output to be unmaintainable. The problem is that vanishingly few people actually understand the code; they're asking the agents to do all of the interpretation and reasoning for them.
This code that you've built is only maintainable for as long as you are still around at the company to work on it -- it's essentially a codebase that you're the only domain expert in. That's not a good outcome for companies either.
My prediction is that the companies that learn this lesson are the ones that are going to stick around. LLMs won't be in wide use for features but for throwaway busy-work type problems that eat lots of human resources and can't be ignored.
by busterarm
4/9/2026 at 12:22:29 AM
I left my last company job just before "AI-first engineering" became mainstream, and you confirmed what I was feeling all this time - I have absolutely zero idea how teams actually manage to collaborate on LLM-managed projects. All the projects I'm working on now are my own, and the only reason I could do this is that I had unlimited time and unlimited freedom. There's no chance I would be able to do this in a team setting.

I'm positive that the last company's CEO probably mandates by now that nobody write a single line of code by hand, and there's likely some rigid process everyone has to follow.
Fun times ahead.
by gck1
4/9/2026 at 1:50:39 AM
I agree and commiserate. In the near term my picture is pretty grim. There are fantastic uses for these tools, but they're being abused.

I was big on correctness, software safety (think medical devices, not memory), and formal proofs anyway, so I think I'm just going to take the pay cut and start selecting for those types of jobs. Your run of the mill SaaS or open source+commercial companies are all becoming a death march.
by busterarm
4/9/2026 at 2:32:12 AM
> Your run of the mill SaaS or open source+commercial companies are all becoming a death march.

Most of them already were death marches to begin with; now they are firing squads.
by bluefirebrand