3/26/2026 at 3:21:54 AM
> The hackathon winners understood something that most developers do not: the hard part of building useful AI is not the code, it is knowing what the system should do in the first place.

This has always been true of all systems. It is still an insight, though, since not enough people seem to get it. To build a system, with an LLM or without, you must know what the system needs to do. Whether you define it in C or in a markdown file, it must still be defined. The advantage of LLMs is that they bridge the gap between a system definition and being able to simulate that system on a processor. The definition of the system is still required, and it still must be precise. Even with "AGI" that will still be true, just as it is true today with the humans who do the translation between those who deeply understand a system and software.
by roxolotl
3/26/2026 at 9:51:17 AM
Agree - domain experts lack the expertise in how things should be built. Developers lack the expertise in what should be built. In each case, one can grow into the role of the other, per what you say: "humans who do the translation between those who deeply understand a system and software". LLMs will extrapolate for both (whether that's good or bad).
by eithed
3/26/2026 at 4:50:58 PM
The only thing missing from the system is the AI GOV that defines the specification of work. Once that is commonplace, developers become as ephemeral as the code that supports the hardened GOV. That is what CANONIC.org is.
by idrdex