4/6/2026 at 11:37:35 AM
Like almost all of these articles, there's really nothing AI- or LLM-specific here at all. Modularization, microservices, monorepos etc have all been used in the past to help scale up software development for huge teams and complex systems.

The only new thing is that small teams using these new tools will run into problems that previously only affected much larger teams. The cadence is faster, sometimes a lot faster, but the architectural problems and solutions are the same.
It seems to me that existing good practices continue to work well. I haven't seen any radically new approaches to software design and development that only work with LLMs and wouldn't work without them. Are there any?
I've seen a few suggestions of using LLMs directly as the app logic, rather than using LLMs to write the code, but that doesn't seem scalable, at least not at current LLM prices, so I'd say it's unproven at best. And it's not really a new idea either; it's always been a classic startup trick to do some stuff manually until you have both the time and the necessity to automate it.
by iainmerrick
4/6/2026 at 3:58:52 PM
It seems entirely logical that if an LLM allows each IC to do the work of a 2-3 person team (debatable, but assume it's true for the sake of argument), then you've effectively just added a layer to the org chart, meaning any tool that was effective for the next scale up of org becomes a requirement for managing a smaller team.

What should give anyone pause about this notion is that historically, by far the most effective teams have been small teams of experts focusing on their key competencies and points of comparative advantage. Large organizations tend to be slower, more bureaucratic and less effective at executing because of the added weight of communication and the disconnect between execution and intent.
If you want to be effective with llms, it seems like there are a lot of lessons to learn about what makes human teams effective before we turn ourselves into an industry filled with clueless middle managers.
by FuckButtons
4/6/2026 at 4:23:03 PM
That's a very good point. Although I think maybe there are some crucial differences with LLMs here.

First, the extra speed makes a qualitative difference. There is some communication overhead when you're instructing an LLM rather than just doing the work directly, but the LLM is usually so fast that it doesn't necessarily slow you down.
Second, the lack of ego is a big deal. When reviewing LLM code, I have to remind myself that it's okay to ask for sweeping changes, or even completely change the design because I'm not happy with how it turned out. The only cost is extra tokens -- it doesn't take much time, nobody's ego gets bruised, team morale doesn't suffer.
This might be an area where LLMs are able to follow human best practices better than humans themselves can. It's good to explore the design space with throwaway prototypes, but I think people are often too reluctant to throw code away and are tempted to try to reuse it.
by iainmerrick
4/6/2026 at 2:37:00 PM
Maybe just a refinement of "LLM as app logic", but "LLM as emergent workflow" seems powerful.

For example, a customer service playbook may have certain ways to handle different user problems, but that breaks down as soon as there are complications or compound issues. But an LLM with the ability to address individual concerns may be able to synthesize a solution given fundamental constraints. It's kind of like building mathematics from axioms.
by lubujackson
4/6/2026 at 12:14:23 PM
[flagged]
by tatrions
4/6/2026 at 12:22:31 PM
Why would you use an LLM to format your code?
by thfuran
4/6/2026 at 1:04:55 PM
Right. The equivalent in handwritten code would be formatting your code by hand. That used to be the normal way to do it!

For handwritten code, the evolution of best practices has tended to be:
- just make your code look neat and tidy;
- follow the style and conventions of existing code;
- follow a strict formatting style guide;
- format automatically using a tool.
I don’t see why it should be any different with LLMs. Why format with an LLM each time, when you can use the LLM once to write the formatter?
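To make the "use the LLM once to write the formatter" idea concrete, here's a toy sketch (hypothetical code, not from this thread) of the kind of small deterministic pass an LLM could generate once; after that it runs instantly and for free on every commit, where a real project would of course reach for an off-the-shelf tool like black or clang-format instead:

```python
# Toy formatter: strips trailing whitespace and collapses runs of blank
# lines. Deterministic -- the same input always produces the same output,
# which is exactly the property an LLM-per-run formatter can't guarantee.

def format_source(text: str) -> str:
    """Strip trailing whitespace and collapse consecutive blank lines."""
    lines = [line.rstrip() for line in text.splitlines()]
    out = []
    for line in lines:
        if line == "" and out and out[-1] == "":
            continue  # skip a blank line that follows another blank line
        out.append(line)
    return "\n".join(out) + "\n"

messy = "def f(x):   \n    return x\n\n\n\nprint(f(1))  \n"
print(format_source(messy))
```

The pass is also idempotent (formatting already-formatted code changes nothing), another guarantee that's trivial for a conventional tool and hard to get from a model.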
Maybe there’s a point at which neural nets replace conventional programming languages for low-level tasks. But I’m skeptical that natural language models will replace programming languages for low-level tasks any time soon.
by iainmerrick
4/6/2026 at 1:47:55 PM
> Why format with an LLM each time, when you can use the LLM once to write the formatter?
that's the right way to use an LLM imo (and the same goes for end-user features as well)... no need to waste tokens and wait time on something that can be done at a fraction of the cost and time (and be deterministic on top of that)
by andrekandre
4/6/2026 at 1:05:24 PM
That had me boggling too. But you know what? A local MoE model roughly equivalent to Sonnet mid-2025? Totally possible. Just costs electricity to run; put it in your CI/CD pipeline. Have it apply a bit of intelligence to the thing as well. Uh... if you've got a spare box, why not?

(The fact that said spare box would cost an arm and a leg in 2026 is... a minor detail)
by Kim_Bruning
4/6/2026 at 1:14:50 PM
Why not? Because you can get stronger guarantees of correctness and consistency out of a typical code formatter, which will also probably run about a million times faster.
by thfuran
4/6/2026 at 2:20:49 PM
You're not wrong. Though now I am thinking of ways an LLM in CI might be useful (Dijkstra forgive me).
by Kim_Bruning