3/25/2026 at 5:55:35 PM
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers. Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.
Sure, it will perhaps be possible to swap out the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem. Something OPEC has successfully done for the oil market.
Another issue here: once the codebase is agentic and the price of human developers falls enough that it becomes significantly cheaper to hire humans again, will they be able to understand the agentic codebase? Is this a one-way transition?
I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.
by Towaway69
3/25/2026 at 8:03:56 PM
I have similar concerns. We will miss SaaS dearly. I think history is repeating itself, just like with DVDs and streaming: we simply bought the same movie twice.
AI feels more and more the same. Half a year ago Claude Opus was Anthropic's most expensive model; boy, using Claude Opus 4.6 in the 500k version is like paying a dollar per minute now. My once-decent budgets get hit not after weeks but days (!) now.
And I am not even using agents or subagents, which would only multiply the costs. For what?
So what we arrive at, more and more, is the same as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.
Proof: you cannot compensate with prompt engineering anymore. A month ago you could fix any model discrepancies by being more clever and elaborate with your prompts.
Not anymore. There is a hidden factor now that accounts for exactly that. It seems that the reliance on skills and different tiers is simply moving us away from prompt engineering, which is increasingly treated as jailbreaking rather than guidance.
Prompt engineering lately became so mundane that I wonder what vendors were really doing with the usage data they analyzed. It seems like vendors tied certain inquiries to certain outcomes modeled by multistep prompting, which was reduced internally to certain trigger sentences, creating the illusion of having prompted your result when in fact you hadn't.
All you did was ask for the same result thousands of users asked for before, and the LLM took a statistical approach to deliver it.
by _the_inflator
3/26/2026 at 8:21:31 AM
Are you saying we increasingly get ML results and not LLM results?
by dgb23
3/26/2026 at 2:19:07 PM
> we simply bought the same movie twice

Maybe you did, but I certainly didn't.
by alt227
3/26/2026 at 2:09:52 PM
> 1 dollar per minute

So $60/hour? A plumber earns more.
if this allows you to produce features that bring you money, it's a no-brainer
by nubg
3/27/2026 at 2:05:57 AM
Plumbers work on very standard systems; they have to front costs and secure work. It only works for them because enough people use basic plumbing services to sustain them. But how many have return customers and guaranteed work on long-term projects?
by kderbyma
3/25/2026 at 7:42:05 PM
No one ever asks how much it costs Facebook or Uber to serve requests because it is irrelevant; they set prices to maximize their profit like any good monopolist. Similarly, the future cartel of big providers will charge their captive users whatever they can get away with, not the cost of inference.

The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.
by eaglelamp
3/26/2026 at 12:47:24 PM
> Once the codebase has become fully agentic, i.e., only agents fundamentally understand it

What exactly do we mean by this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time). I think this means one of two things:
* The code and architecture being produced by agents take approaches that are abnormally complex or inscrutable to human reviewers. Is that what folks working with cutting-edge agents are seeing? In which case, such code obviously isn't being reviewed; it can't be.
* The code and architecture being produced by agents can still be understood by human reviewers, but it isn't actually being reviewed by anyone, since reviewing pull requests isn't always fun or easy, and injecting in-depth human review slows everything down a lot, and so no one understands how the code works. (I keep thinking about the AI maximalist who recently said he woke up to 75 pull requests from his agent, like that was a good thing.)
And maybe it’s a combination of the two: agent-generated pull requests are incrementally harder to grok, which makes reviewing more painful and take longer, which means more of them go without in-depth reviews.
But if your claim is true, the bottom line is that it means no one is fully reviewing code produced by agents.
by mojosam
3/26/2026 at 2:37:41 PM
Folks are reviewing the code, but the standard shape of a review is a PR. A diff assumes you have an underlying knowledge of the system, one that is most realistically gained by having written the code. Could you “just remember” every diff you’ve seen? Maybe, but I don’t think it’s realistic; we learn far better from doing than from reading.
by nfgrep
3/26/2026 at 2:59:27 PM
> What exactly do we mean by this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time).

I agree with you, BUT: I find it much harder to get my head around a medium-sized vibe-coded project than a medium-sized bespoke-coded project. It's not even close.
I don't know what codebases will look like if/when they become "fully agentic". Right now, LLM agents get worse, not better, as a codebase grows, and as more of it is coded (or worse, architected) by LLMs.
Humans get better over time on a project and LLMs get worse, and this seems fundamental to the LLM architecture, really. The only real way I see for codebases to become fully agentic right now is if they're small enough. That size grows as the context sizes new models can deal with grow.
If that's how this plays out - context windows get large enough that LLM-agents can work fine in perpetuity in medium or large size projects - I wonder if the resulting projects will be extremely difficult for humans to wrap their heads around. That is, if the LLM relies on looking at massive chunks of the codebase all at once, we could get to the point of fully agentic codebases without having to tackle the problem of LLMs being terrible at architecture, because they don't need it.
by furyofantares
3/26/2026 at 5:17:30 PM
And is "model collapse" a thing when LLMs are trained on 100% LLM-generated code? Fun times ahead.
by Balooga
3/26/2026 at 8:13:19 PM
What examples from history can be learned from here?
by AbstractH24
3/26/2026 at 6:11:02 PM
For your points:

- Garden-path approaches are definitely a thing, but I don't think this is necessarily catastrophic. A lot depends on the language and framework in question, and also on the driver of the change.
- I think it's that plus the fact that it's easy to just generate ever more code. Solutions scale in every dimension until they hit a limit where it's not feasible to go further. If AI tools allow you to write a project with a million or 10 million lines of code, you can bet it will eventually happen. Who's ever gonna fix that?
by 3form
3/25/2026 at 6:46:18 PM
This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies.

Sometimes the argument lands; very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."
And that remark, for me, is unfortunately a discussion-ender. I just haven't ever had a productive conversation with somebody about this after they make these remarks. Somebody saying these things has placed their bets already and is about to throw the dice.
by SaucyWrong
3/26/2026 at 7:02:41 AM
[dead]
by AbanoubRodolf
3/26/2026 at 7:39:46 AM
There is no such thing as an agentic codebase. If humans don’t understand it, nothing really does. Agents give zero fucks about anything. If they burn a hundred or a million tokens to add a feature, they don’t care. It’s the developer's responsibility to keep it under control.
by mdavid626
3/26/2026 at 12:59:31 PM
100% this. With these new tools it's tempting to one-shot massive changesets crossing multiple concerns in preexisting, stable codebases.

The key is to keep any changes to code small enough to fit in your own "context window." Exceed that at your own risk. Constantly exceeding your capacity for understanding the changes being made leads to either burnout or indifference to the fires you're inevitably starting.
Be proactive with these tools w.r.t. risk mitigation, not reactive. Don't yolo out unverified shit at scales beyond basic human comprehension limits. Sure, you can now randomly generate entirely (unverified) new software into being, but 95% of the time that's a really, really bad idea. It is just gambling and likely some part of our lizard brains finds it enticing, but in order to prevent the slopification of everything, we need to apply some basic fucking discipline.
As you point out, it's our responsibility as human engineers to manage the risk reward tradeoffs with the output of these new tools. Anecdotally, I can tell you, we're doing a fucking bad job of it rn.
by drzaiusx11
3/26/2026 at 1:19:22 PM
The big AI projects I've seen at work are...

- A Kafka topic visualization dashboard
and
- A Chrome extension the original "developer" can no longer work on because the bots wreck something else with every new feature he tries to add or bug he tries to fix
I think we're a ways out from truly complex code bases that only agents understand.
I've seen a bunch of hype videos where people spend lord knows how much money to have a bunch of these things run around and, I guess, use Facebook, and make reports to distribute amongst themselves, and then the human comes in and spends all their time tweaking this system. And then apparently one day it's going to produce _something_, but it's been two years and counting and, much like bitcoin, I've yet to see much of this _something_ materialize in the form of actual, working, quality software that I want to use.
My buddy made a thing that tells him how many people are at the gym by scraping their API and pushing it into a small app package... I guess that's kind of nice.
by codyb
3/26/2026 at 10:27:35 AM
Lately I also wonder about geopolitical lock-in and the balkanization of the internet. The US won't have this problem, I guess. But with all that's happening in the world right now and the current trends, the rest of us need to think hard about which AI company we trust with our data, or trust to still have access to once we're on the other side of the wall.
by shmobot
3/26/2026 at 11:12:15 AM
> geopolitical lock-in and balkanization of the internet. US won't have this problem I guess

This reminds me of the apocryphal headline from the dying days of the British Empire:
> Fog in Channel; Continent Cut Off
by iso1631
3/26/2026 at 6:00:36 PM
If only the AI understands your code, then vendor lock-in and exposure to price hikes will be the least of your problems. I don't think you will be able to add Claude as the Dev-On-Call in your PagerDuty schedule. If you are in an industry that requires due diligence and you get sued for bugs that cause material damage and human suffering, then I don't think the "blame it on Claude" defense is going to land well in court. I cover these topics at https://www.exploravention.com/blogs/soft_arch_agentic_ai/, a blog post I wrote recently.
by gengstrand
3/26/2026 at 2:36:28 PM
I'm beginning to develop the opinion that the next step in this process will (or at least should) be local and/or self-hosted inference.

The latest Qwen models are already very useful, and the smaller ones can run locally on my laptop. They are obviously not as good as the latest frontier models, and that's extremely noticeable in a development workflow, but maybe in a year or two they will be competitive with the proprietary models we have today, which are incredibly capable. I also expect compute for inference to continue getting cheaper.
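For what it's worth, the switching cost is already low in practice: local runners like Ollama and llama.cpp expose an OpenAI-compatible chat endpoint, so moving a tool from a hosted frontier model to a local Qwen model can be little more than changing a base URL and model name. A minimal sketch; the port and model tag below are assumptions that depend on your local setup:

```python
import json

# Assumed local setup: Ollama's default OpenAI-compatible endpoint
# and an example Qwen coder tag. Adjust both for your own machine.
LOCAL_BASE_URL = "http://localhost:11434/v1"
LOCAL_MODEL = "qwen2.5-coder:7b"

def build_chat_request(prompt: str, model: str = LOCAL_MODEL) -> dict:
    """Build an OpenAI-style chat completion payload.

    The same payload shape works against a hosted provider or a
    local server; only the base URL and model name differ.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits coding tasks
    }

payload = build_chat_request("Write a function that reverses a string.")
print(json.dumps(payload, indent=2))

# To actually send it (requires a running local server):
#   import requests
#   requests.post(f"{LOCAL_BASE_URL}/chat/completions", json=payload)
```

Because the payload is provider-agnostic, the "moat" really does reduce to tooling UX rather than any hard API lock-in.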
The current lock-in for me is the UX of Claude Code / Codex CLI, but this is a very small moat that will definitely be commoditized soon.
by sanderjd
3/26/2026 at 10:19:13 AM
I've been thinking this for a while now as well. If they keep subsidizing for long enough, there might be a large cohort of humans who changed jobs or didn't get into the field in the first place. Then the only way out is to keep buying those tokens.
3/25/2026 at 8:17:26 PM
Code is so low-entropy that smaller, more economical models will be up to the task, just as the gigantic models from big providers are today.

No worries there: the huge improvements we see today from GPT and Claude are at heart just reinforcement learning (CoT, chain-of-thought, and thinking tokens are just one example of many). RL is the cheapest kind of training one can perform, as far as I understand. Please correct me if that's not the case.
In the economy, the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open-source invisible hand makes everything completely free.
by emporas
3/25/2026 at 8:28:41 PM
> the open source invisible hand makes everything completely free.

In this case the limitation is compute. Very few people have the compute required to run AI/LLMs locally or for free (at performance comparable to Claude). So yes, there are plenty of open-source models that can be used locally, but you need to invest in hardware to make that happen, especially if you want the quality available from the commercial offerings.
Not to speak of the training of those models. It's all there to make it possible to do this locally, but where's the hardware? AWS? Google? There are hidden costs to the open-source model in this case.
by Towaway69
3/25/2026 at 9:04:13 PM
> In this case the limitation is the compute.

I agree with most of your points, but computation can be transferred from a place where energy is cheap to a place where it is expensive. Energy for cooking cannot be transferred that way.
See for example the Amazon and Google datacenters in the Gulf region. We've also got a whole continent, Australia, on which to put as many solar panels as we desire. Australia goes dark for half the day, every day? Put solar panels on the opposite side of the planet.
Energy is a concern for cooking, transportation, etc. Energy for computation is not.
by emporas
3/28/2026 at 6:36:48 AM
> We've also got a whole continent, Australia, to put as many solar panels as we desire. Australia got dark for half a day, every day? Put solar panels to the opposite side of the planet.

That is such an incredibly simplistic view and not how anything works.
by sevenseacat
3/27/2026 at 9:28:27 AM
Agree!
by actionfromafar
3/26/2026 at 3:02:27 PM
What do you mean about vendor lock-in? I haven’t yet seen any meaningful barriers to switching between different companies’ coding agents. Are you talking about AI market lock-in and not vendor-specific lock-in?

> these loss making AI companies will eventually need to recoup
This is true, and while AI spend continues to rise, I’m starting to think that once the dust settles, the true costs emerge, and stable profits are achieved, it may be expensive enough to be a limiting force.
by dahart
3/26/2026 at 8:15:25 PM
Then you aren’t a true vibe coder using Replit.
by AbstractH24
3/25/2026 at 6:23:30 PM
This is a good point. Some of the AI companies are trying to hook CS students so they'll only know "dev" as a function of their products. The first one's free, as they say (the drug dealers).
3/25/2026 at 7:02:15 PM
I agree; the great danger is that CS students aren't even taught the fundamentals of "computer science" any longer. It would be the equivalent of physics students not learning Newton's laws or E=mc².

Probably there is an issue with how much there is in CS: each programming language basically represents a different fundamental approach to coding machines. Each paradigm has its application, even COBOL ;)
Perhaps CS has not - yet - found its fundamental rules and approaches. Unlike other sciences that have hard rules and well trodden approaches - the speed of light is fixed but not the speed of a bit.
by Towaway69
3/26/2026 at 12:38:08 PM
I think it will be more similar to the cloud. I remember people predicting that once you moved to the cloud, you'd realize how expensive it actually is, but the cost of migrating back would be high. And while, yes, the cloud is expensive, most people realized that it is kinda worth it.
by shmel
3/26/2026 at 12:57:26 PM
The oil market doesn't have an equivalent of open-source LLMs, self-hosting, and competing cloud providers.
3/26/2026 at 10:33:12 AM
"Just like oil prices and the global economy, fundamentally everything is getting better." (implied /s)

I remember having to pay a pretty penny for a 3-minute conversation with my dad working halfway across the world. Now I can video call my nephew for 45 minutes without blinking an eye. What happened?
Why will Intelligence be like Oil and not Broadband?
by pj_mukh
3/25/2026 at 7:22:41 PM
> the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.

I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
Every genre-defining startup seems to go through this same cycle where the naysayers tell us that it's all going to collapse once the investment money runs out. This was definitely true for technologies without use cases (remember the blockchain-all-the-things era?) but it is not true for businesses that have actual users.
Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector.
Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company and almost nobody noticed because the naysayers moved on to something else.
AI is different because the hardware is always getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful and it would get cheaper with each passing year as hardware improved.
> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices
Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground, and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that were as expensive as cars a few years ago are now depreciated and available for cheap because the new technology is vastly faster. The next generation will be faster still.
If you're mentally comparing this to things like oil, you're not on the right track.
by Aurornis
3/25/2026 at 9:31:57 PM
> almost nobody noticed

Rideshare costs are much higher than they were in years past. Everyone noticed.
by nunez
3/25/2026 at 8:10:46 PM
> Oil is a finite resource that comes out of the ground

Yes, but the chips, hardware, copper cables, silicon, and all the rest of the components that make up a server are finite too. Unless these magically appear from outer space, we'll face the same resource constraints as with everything else that is pulled out of the ground.
These components are also far more fragile to source; see COVID and the collapse of global supply chains. The factories that create these components are also expensive to build and fragile to maintain. See the Dutch company that seems to be the sole supplier of certain manufacturing capabilities.[1]
> I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
My bet would be that it would fuel the profits of AI companies rather than make the price of AI come down. Oversupply makes prices come down, but if supply is kept artificially low, then prices stay high.
That's the comparison to OPEC and oil. There is plenty of oil to go around, yet the supply is capped and prices are thereby kept high. There is no guarantee that savings in hardware or supply will be passed on by the AI corps.
Indeed, there is no guarantee that there will be serious competition in the market. OPEC is a cartel, so why not an AI cartel? At the moment, all major players in AI are based in the same geopolitical sphere, making such coordination more likely, IMHO.
In the end, it's all speculation about what will happen. It just depends on which fairy tale one believes in.
by Towaway69
3/25/2026 at 10:23:22 PM
> Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.

Raw material cost is not a driver of datacenter GPU costs.
> Over supply makes price come down but if supply is kept artificially low, then prices stay high.
Where are you getting "supply kept artificially low" when we're in the middle of an explosion of datacenter buildouts and AI companies?
We're in a race to the bottom on pricing. I haven't seen a realistic argument for why you think prices are going to go up. You're starting with a conclusion and trying to find reasons it might be true.
by Aurornis
3/26/2026 at 6:10:10 AM
> Where are you getting "supply kept artificially low"

If a resource is controlled by a small group of coordinated actors (for example, the large US-controlled corporations that own these datacenters), the resource may be limited artificially, because access to it is controlled by said corporations.
Exploding numbers of datacenters and AI companies, yes, but true competition, probably not. Most AI companies use the datacenters of said corporations; if those corporations decide that compute costs one cent more, then all AI providers become more expensive.
What we should learn from OPEC and oil is that it is not the amount of the resource that defines the price; it is access to the resource.
by Towaway69
3/25/2026 at 7:38:55 PM
While I fundamentally agree with the premise of compute getting cheaper by the year, I think a missed consideration here is that these models also require exponentially more compute to train with each iteration, in a way that has arguably outscaled the advances in compute.

Whether a generalized and broadly usable model can be trained within some multiple N of our current compute availability, allowing the price to come down with iterative compute advances, remains to be seen. With the current race to the top in SOTA models, and increasingly smaller iterative improvements over previous generations, I have a feeling the scaling need for compute will outpace the improvements in our hardware architecture, and that's if Moore's law even holds as we start to reach the bounds of physics rather than engineering.
However, as it stands today, essentially none of these providers are profitable, so it's really a question of whether that disconnect resolves within their current runway, or whether they'll be required to increase their price point to stay alive and/or raise more capital. It's pure conjecture either way.
by methodical