alt.hn

2/19/2026 at 1:11:06 PM

The future belongs to those who can refute AI, not just generate with AI

https://learningloom.substack.com/p/the-future-belongs-to-those-who-can

by atomicnature

2/19/2026 at 3:19:48 PM

For my entire professional life, I've been dealing with coworkers who just want their code shipped: overly complex solutions to simple problems, inefficient algorithms, nested spaghetti code. It was hard to plainly reject that work, so I eventually had to become a mentor and teach them how to do it better. I've called myself a programmer even though I mostly just helped other people to program.

Over the years, I've lost the ability to focus on writing code myself, but I've experienced a rebirth in the age of coding agents. I find it easier to review their code, and they always follow my guidance. In fact, I sometimes feel like it's the year 2000 again, when I had "google" skills and the people around me did not.

It's much easier to produce garbage with AI, and that will inevitably happen, but engineers who are able to think will simply use it as a new tool.

by lukaslalinsky

2/19/2026 at 3:39:11 PM

> Over the years, I've lost the ability to focus on writing code myself

If this weren't the case, how do you think it would affect your usage of coding agents?

by notnullorvoid

2/19/2026 at 4:03:00 PM

I've always been more interested in the computer science and software engineering aspects. I did enjoy writing code occasionally, but overall, I was wishing I had some kind of neural implant to convert my thoughts into code. Coding agents are now good enough that I consider that dream realized, with the added benefit that I do not actually need any implant in my brain. :)

by lukaslalinsky

2/19/2026 at 5:04:40 PM

Interesting. I've found AI to be further from the goal of converting my thoughts into code than just writing the code myself.

English is so ambiguous that by the time I've provided sufficient context in the prompt, it's taken more typing than writing the code would have. That's not even accounting for the output often needing further prompting to shape it into what was intended, especially if I try to be light on initial context.

I like it for quick proof-of-concept stuff, though; the details don't matter so much then.

by notnullorvoid

2/19/2026 at 5:26:23 PM

I really approach it the same way as helping my coworkers be productive. Most of the context is spent on the initial familiarization with the code, and I just double-check that it has the right understanding; there is minimal prompting on my side for this. The next step is to explain the problem I'm trying to solve, and for the simpler ones, it gets what needs to happen 8/10 times. I don't need to be detailed, because it already knows the context. For the complex problems, I split them into small tasks myself and only ask it to do the small steps, small enough to fit into the first category. I feel like the worst outcomes happen when you specify the problem first and let it do its own research with that in "mind"; then it just overthinks and comes up with garbage.

by lukaslalinsky

2/19/2026 at 5:30:18 PM

Give it the problem first. Then have it generate the context. Make edits and iterate on the context. Then hit go. Finally, have it write down whatever it needs to for next time.

by peyton

2/19/2026 at 2:14:52 PM

Lately I’ve been trying to develop this discernment skill by filing issues on vibe-coded projects, which requires taking a deep look into them and questioning their premise.

For example, there’s a tensor serialization library called Tenso (71 stars) which advertises better cache-line alignment after array deserialization. I suspect it’s heavily vibe coded. I read its source code and discovered that, contrary to what the README claims, the arrays it produces aren’t actually 64-byte aligned. https://github.com/Khushiyant/tenso/issues/5 They also have some issues around bounds checking: https://github.com/Khushiyant/tenso/issues/4
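
This kind of claim is cheap to verify yourself, assuming the library hands back numpy arrays; the tenso.load call below is just a stand-in for whatever the real deserialization entry point is:

    import numpy as np

    def is_aligned(arr: np.ndarray, boundary: int = 64) -> bool:
        # Aligned means the address of the array's first byte is a
        # multiple of the boundary (64 bytes = one cache line on
        # typical x86 hardware).
        return arr.ctypes.data % boundary == 0

    # arr = tenso.load("tensor.bin")  # hypothetical entry point
    # print(is_aligned(arr))          # README says True; I observed False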

Another example: there’s a library called protobuf-to-pydantic (145 stars, 21 forks, 4 dependent packages on PyPI). I wanted to use this for one of my projects. I’m not sure whether it’s vibe coded or not. I found that it creates syntax errors in the generated output code: https://github.com/so1n/protobuf_to_pydantic/issues/121 Seems like a pretty surprising issue to miss.
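
The frustrating part is that this class of bug is cheap to catch: a few lines run over the generated modules in CI would flag it (here "generated" is a placeholder for wherever the plugin writes its output):

    import ast
    import pathlib

    # Sanity check: every generated module should at least parse.
    for path in pathlib.Path("generated").rglob("*.py"):
        try:
            ast.parse(path.read_text(), filename=str(path))
        except SyntaxError as err:
            print(f"{path}: {err}")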

For Tenso, the code quality is less of an issue than the question of “should this even be built, or is the premise of the work solving the wrong problem?”, which I suspect will be a factor in a lot of AI-generated work going forward.

I’m torn. On the one hand, I’m glad to be practicing this skill of “deciding what to rely on and what to refute,” but on the other hand, I certainly don’t feel like I’m being collaborative. These libraries could be their creators’ magnum opus for all I know. My issues just generate work for the project maintainers. I offer failing unit tests, but no fixes.

I earnestly think the future belongs to people who are able to “yes, and” in the right way rather than just tearing others’ work down as I’m currently doing. It’s hard to “yes, and” in a way that’s backed by a discerning skepticism rather than uncritical optimism. Feels like a contradiction somehow.

by gcr

2/19/2026 at 3:15:39 PM

> rather than just tearing others’ work down as I’m currently doing.

Your criticism looks authentic, based on real study and expertise. I think it is a valuable gift. It is only when such a thing becomes compulsive that it can fairly be called "tearing down."

Looking at your issues, you are calling out real flaws and even providing repro tests. If I were a maintainer who cared, and not just running a slop-for-stars scheme, I'd be very grateful for the reports.

by ericbarrett

2/19/2026 at 3:12:44 PM

We're building AI testing tools at QA.tech, and this matches my experience. Great post. The hard part was never generating code; it's figuring out whether what came out is actually correct. Our team runs multiple AI agents writing code in parallel, and honestly we spend way more time on verification than generation at this point. The ratio keeps getting worse as the models get better at producing plausible-looking stuff.

The codebase growth numbers feel right to me. Even conservative 2x productivity gains break most review processes. We ended up having to build our own internal review bot that checks the AI output because human review just doesn't keep up. But it has to be narrow and specific, not another general model doing vibes-based review.
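
To give a flavor of what "narrow and specific" means -- this is a toy sketch, not our actual bot -- think single-purpose rules applied deterministically to the agents' output, e.g. flagging bare except clauses, rather than asking another model for an opinion:

    import ast
    import sys

    # One narrow rule: flag bare `except:` clauses, which swallow
    # errors silently. Each rule targets exactly one pattern.
    def flag_bare_excepts(source: str, filename: str) -> None:
        for node in ast.walk(ast.parse(source, filename=filename)):
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                print(f"{filename}:{node.lineno}: bare except")

    for fname in sys.argv[1:]:
        with open(fname) as f:
            flag_bare_excepts(f.read(), fname)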

by while1

2/19/2026 at 3:30:55 PM

> way more time on verification than generation

Was generation a bottleneck previously? My experience has been that verification is always the slow part. Oftentimes it’s quicker to do it myself than to try to provide the perfect context (via agents.md, skills, etc.) to the agent.

The things it’s able to one-shot are also the code that would take me the least time to write.

by sosnsbbs

2/19/2026 at 3:13:41 PM

Examples of refutations would help. I agree with the thesis of learning epistemology, but where does one begin? I hazard to suggest Montagovian semantics.

by throwway262515

2/19/2026 at 5:28:40 PM

The article assumes monster codebases because we can build them. But cheaper code also means cheaper rewrites. Maybe the future is disposable software, not carefully reviewed software.

by kevincloudsec

2/19/2026 at 5:43:43 PM

If you read the article carefully, you'll see I've dealt with an alternative scenario as well: one where we may have smaller codebases with a larger blast radius.

As for disposable software, it's harder to get traction/adoption when things constantly break, are slow, or the experience is crappy in general.

To make it simpler: all else being equal, as a user, would you prefer highly reviewed/vetted/reliable software, or otherwise?

My bet is that reliability is an invariant -- nobody wishes for software that crashes, leaks your private info, gives faulty output, is laggy to use, and so on.

by atomicnature

2/19/2026 at 1:50:11 PM

Why is Karl Popper’s face phasing in and out?

by Kiboneu

2/19/2026 at 2:53:16 PM

Blinking gifs never went out of style; they just got slower.

by oidar

2/19/2026 at 2:43:02 PM

These are such good points. With each new technology, I don't think our comprehension gets better; our ability to collect data gets better. You can now grab so much of it and analyze it so deeply, but if you don't know what you're looking at, or the value or the threat within, it's useless.

by Simulacra
