2/15/2026 at 3:25:43 AM
This cognitive debt bit from the linked article by Margaret-Anne Storey at https://margaretstorey.com/blog/2026/02/09/cognitive-debt/ is fantastic:

> But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.
by simonw
2/15/2026 at 3:47:56 AM
This was essentially my experience vibe coding a web app. I got great results initially and made it quite far quickly, but over time velocity slowed exponentially due to exactly this cognitive debt. I took my time, did a ground-up rewrite manually, and made way faster progress and a much more stable app.

You could argue LLMs let me learn enough about the product I was trying to build that the second rewrite was faster and better informed, and that’s probably true to some degree, but it was also quite a few weeks down the drain.
by appplication
2/15/2026 at 5:36:38 AM
That makes sense, but surely there's a middle ground somewhere between "AI does everything including architecture" and writing everything by hand?
by Mavvie
2/15/2026 at 3:46:10 PM
I wonder about that. A general experience in software engineering is that abstractions are always leaky and that details always end up mattering, or at least that it’s very hard to predict which details will end up mattering. So there may not be a threshold below which cognitive debt isn’t an issue.
by layer8
2/15/2026 at 5:28:32 PM
> So there may not be a threshold below which cognitive debt isn’t an issue.

That's my hunch too.
The problem isn't "I don't understand how the code works", it's "I don't understand what my product does deeply enough to make good decisions about it".
No amount of AI assistance is going to fill that hole. You gotta pay down your cognitive debt and build a robust enough mental model that you can reason about your product.
by simonw
2/15/2026 at 6:28:30 PM
I wouldn’t use the term “product” here. Apart from most software being projects, not products, what I was getting at is that details and design decisions matter at all levels of software. You might have a robust mental model of your product as a product, and of what it does, but that doesn’t mean that you have a good mental model of what’s going on in some sub-sub-sub-module deep within its bowels. Software design has a fractal quality to it, and cognitive debt can accumulate at the ostensibly mundane implementation-detail level as well as at the domain-conceptual level. If you replace “product” with “module”, I would agree.
by layer8
2/15/2026 at 6:45:42 PM
I think of that as the law of leaky abstractions - https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... - where the more abstractions there are between you and how things actually work, the more chance there is that something will go wrong at a layer you're not familiar with.

I think of cognitive debt as more of a product design challenge - but yeah, it certainly overlaps with abstraction debt.
by simonw
2/15/2026 at 5:50:53 AM
Of course! The original attempt wasn’t really AI doing everything. I was writing much of the code but letting AI drive general patterns, since I was unfamiliar with web dev. Now, it’s also not entirely without AI, but I am very much steering the ship, and my usage of AI is more “low context chat” than “agentic”. IMO it’s a more functional way to interface with AI for anyone with solid engineering skills.
by appplication
2/15/2026 at 5:49:22 AM
I think the sweet spot is to make the initial stuff yourself and then extend or modify it somewhat with LLMs. It acts as a guide for the LLM too, so it doesn't have to come up with everything on its own in terms of style, design choices, or consistency, I'd say.
by r_lee
2/15/2026 at 7:46:27 AM
For more complex projects I find this pattern very helpful. The last two gens of SOTA models have become rather good at following existing code patterns. If you have a solid architecture they can be almost prescient in their ability to modify things. However, they're a bit like Taylor series expansions: they're only accurate out to a certain distance from the known basis. Hmm, or control theory, where you have stable and unstable regimes.
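Roughly, in the language of Taylor's theorem (the standard result, assuming f is (n+1)-times differentiable and \xi lies between a and x):

f(x) \approx \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x-a)^k, \qquad R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x-a)^{n+1}

The remainder R_n(x) grows like |x-a|^{n+1}: the approximation is excellent near the expansion point a (the patterns already in your code base) and degrades fast the further the requested change strays from it.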
by elcritch
2/15/2026 at 8:37:31 AM
I think it's closer to "doing everything by hand" than you'd expect. For me, anyway.
I design as I code, the architecture becomes more obvious as I fill in the detail.
So getting AI to do bits, really means getting AI to do the really easy bits.
by mattmanser
2/15/2026 at 8:47:46 AM
> So getting AI to do bits, really means getting AI to do the really easy bits.

As someone who gets quickly bored with repetitive work, this is big, though.
by svara
2/15/2026 at 3:36:54 PM
This is essentially the definition of complexity that Ousterhout argues for in the book A Philosophy of Software Design. I highly recommend reading it if you haven’t; it’s very good.
by mpbart
2/16/2026 at 11:33:28 AM
Apparently it's controversial: https://www.reddit.com/r/ExperiencedDevs/comments/1g2l3wc/bo...
by bob_theslob646
2/15/2026 at 9:09:34 AM
I spend half my prompts making Codex explain why and what it is doing. The other 40% is reducing the size of the code base and optimizing. Only 10-ish percent is new development.
by ReptileMan