3/8/2026 at 6:42:47 PM
All this demonstrates how non-sticky all this tech really is. When your product is basically just an API call it’s trivial to just swap you out for someone else. As such it’s unclear what the prize at the end of the present race to the bottom is. We swapped OpenAI out for Claude and it required updating about 15 lines of code. All these guys are just commodity to us. If next week there’s a better supplier of commodity AI we’ll spend an hour and swap to something else again. There’s zero loyalty here.
by cmiles8
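The "15 lines of code" claim above is plausible when providers expose OpenAI-compatible endpoints: the swap mostly amounts to changing a base URL and a model name. A minimal sketch, where the URLs and model identifiers are illustrative assumptions rather than current vendor values:

```python
# Hedged sketch of a provider swap hidden behind one config function.
# The base URLs and model names below are illustrative assumptions;
# real endpoints and model identifiers change frequently.

PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1",    "model": "gpt-4o"},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "model": "claude-sonnet"},
}

def client_config(provider: str) -> dict:
    """Return the connection settings for the chosen supplier."""
    cfg = PROVIDERS[provider]
    return {"base_url": cfg["base_url"], "model": cfg["model"]}

# Swapping suppliers is then a one-line change at the call site:
cfg = client_config("anthropic")
```

With the rest of the codebase talking to `client_config()` instead of a hard-coded vendor SDK, "zero loyalty" is the natural outcome.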
3/8/2026 at 7:05:45 PM
It's an ironic situation; logically the moat should be the models, which cost hundreds of millions in investment to train and operate, so it would make sense to see different providers focusing in different directions. But right now we have 3-5 top contenders that are so evenly matched that the de facto source of stickiness is mostly the harness, i.e. the collection of proven plugins/commands/tools/agent features that are tuned to the user's personal workflow.
by HotHotLava
3/8/2026 at 7:27:46 PM
> are so evenly matched
It's because the real value of the models is in what we (humanity) fed them, and all of them have eaten the same thing for free.
by groestl
3/8/2026 at 7:33:33 PM
That's why the frontier LLM companies are now spending a lot more to license exclusive proprietary training data from private sources in order to gain a quality edge in certain business domains.
by nradov
3/8/2026 at 8:32:55 PM
But those holding said proprietary data have figured out they’re holding the cards now and have gotten a lot smarter recently. Companies are being very careful about what gets used for inference vs what they allow to be used for training. I don’t see the core models getting dramatically better from where they are now. We’ve clearly hit a plateau.
by cmiles8
3/8/2026 at 8:43:17 PM
Really? I mean, I regularly see as I'm coding how much better it could be simply by running obvious prompts for me. When I use planning mode and then code, the success rate is much higher. When I ask it to work on specific, isolated chunks of code with clear success/failure modes, the success rate is again much higher.
Now imagine a world where it recognizes that from my simple, throwaway, non-specific prompt. If it were able to fire off 20 different prompts in quick succession, it could easily cut my time spent in front of the screen by a third.
The patterns are obvious but they don't do that right now because it's a lot of compute.
We'll be looking back at this time, when there's a progress bar showing context space, the way we look at the Turbo button now.
Because the truth is, getting to the baseline I'm talking about takes only a finite amount of compute at a certain point.
by hparadiz
3/8/2026 at 8:26:46 PM
So can it be the one that gets ahead on having people go find things for them - https://news.ycombinator.com/item?id=47285283
by bryanrasmussen
3/8/2026 at 9:39:23 PM
Interesting
by indigodaddy
3/8/2026 at 7:43:30 PM
That sounds like spin to me. If there were a clear "quality edge" in "certain business domains" stemming from "exclusive proprietary data", someone would have been exploiting it already using meat computers. But no, businesses are dumb. They always have been. Existing businesses get disrupted by new ideas and new technology all the time. This very site is a temple to disruption!
Proprietary advantage is, 99.999% of the time, just structural advantage. You can't compete with Procter & Gamble because they already built their brands and factories and supply chains and you'd have to do all that from scratch while selling cheaper products as upstart value options. And there's not enough money in consumer junk to make that worth it.
But if you did have funding and wanted to beat them on first principles? Would you really start by training an LLM on what they're already doing? No, you'd throw money at a bunch of hackers from YC. Duh.
by ajross
3/8/2026 at 8:03:56 PM
Frontier labs are paying the same constellation of firms offering proprietary data and access to experts in their fields to train LLMs. They are neck-and-neck only because they are all participating in the arms race. The only other way to keep up is mass distillation, which could prove to be fragile (though so far it seems to be sustainable).
by linkregister
3/8/2026 at 8:11:54 PM
Meh. I think there's basically no benefit shown so far to careful curation. That's where we've been in machine learning for three decades, after all. Also recognize that the Great Leap Forward of LLMs was when they got big enough to abandon that strategy and just slurp in the Library of All The Junk. I think one needs to at least recognize the possibility that... there just isn't any more data for training. We've done it all. The models we have today have already distilled all of the output of human cleverness throughout history. If there's more data to be had, we need to make it the hard way.
by ajross
3/8/2026 at 8:40:01 PM
Ok, maybe pretraining is now complete and solved. Next up: post-training, reinforcement learning, engineering RL environments for realistic problem solving, recording data online during use, then offline simulation of how it could have gone better and faster, distilling that into the next model etc. etc. There's still decades worth of progress to be made this way.
by bonoboTP
3/8/2026 at 11:54:19 PM
> There's still decades worth of progress to be made this way.
That's not true. Moreover, the progress can slow to a crawl where it's barely noticeable. And in that world humans continue to stay ahead - that's the magic of humans: being aware of their surroundings and adapting sufficiently while taking advantage of tools and leveraging them.
by k32k
3/9/2026 at 3:25:15 PM
This is an interesting theoretical statement that does not survive a collision with reality. The long-tail expert RLHF training is effective. We have seen significant employment impact on call center employees. This does not mean that progress will be cheap or immediate.
by linkregister
3/8/2026 at 11:52:35 PM
I think this is where we are at, too. But if you say stuff like this on here you get downvoted. Why?
by k32k
3/8/2026 at 8:00:30 PM
The quality edge hasn't shown up yet. If this strategy actually works then the quality improvements will only become apparent in the next round of major LLM updates. There's a lot of valuable training data locked up behind corporate firewalls. But this is all somewhat speculative for now.
by nradov
3/8/2026 at 8:39:18 PM
To stop this, today I put most of my Amazon Redshift research website behind a basic auth username/password wall. It all remains free, but you need to email me for a username and password.
If I put in time and effort to make content and OpenAI et al copy it and sell it through their LLM such that no one comes to me any more, then plainly it makes no sense for me to create that content; and then it would not exist for OpenAI to take, or for anyone else. We all lose.
It seems parasitic.
by Max-Ganz-II
3/8/2026 at 10:01:37 PM
An AI is more likely than me to take the time to send you an email requesting access - I'm too lazy.
by bestouff
3/8/2026 at 11:47:07 PM
I think a better approach would be to have a login form and just say "the password is 1234" or whatever. Virtually no scraper has logic to handle that sort of situation, but it's trivial for humans. Way easier than an LLM
by maest
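The "password printed on the page" gate described above can be sketched in a few lines. The form markup and field name here are assumptions for illustration, and this is a speed bump for naive crawlers, not real security:

```python
# Minimal sketch of a login gate whose password is published on the page.
# A human reads the hint and types it; a scraper that never parses the
# hint (or never POSTs at all) is turned away.

PUBLISHED_PASSWORD = "1234"

LOGIN_PAGE = """
<form method="post" action="/login">
  <p>The password is 1234</p>
  <input name="password">
  <button>Enter</button>
</form>
"""

def check_login(form: dict) -> bool:
    """Accept the request only if the published password was submitted."""
    return form.get("password") == PUBLISHED_PASSWORD
```

As the reply below notes, this only holds until scrapers start reading the hint from context.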
3/9/2026 at 2:16:25 AM
Not true, even Windows Defender is capable of extracting "the password is 1234" from context like emails or webpages.
by simulator5g
3/8/2026 at 11:29:23 PM
Please add Internet Archive's bot to your auto-allows, at least. Their bot is presumably well behaved, and for public benefit.
by selfhoster11
3/9/2026 at 7:44:49 PM
I'm about to ask IA to remove my content! The reason is that I expect LLM bots to be crawling IA.
by Max-Ganz-II
3/9/2026 at 3:42:49 AM
[dead]
by tyzerdako
3/9/2026 at 7:15:37 PM
To be more precise, they all stole the same stuff. I have no empathy for these crooks.
by tempodox
3/8/2026 at 10:38:39 PM
Ironic indeed. The Great Replacers of white collar jobs are finding themselves easily replaceable. Delicious.
by ryandvm
3/8/2026 at 7:11:13 PM
Cost is never a good moat.
by wonnage
3/8/2026 at 7:14:31 PM
The companies migrating off VMware due to Broadcom shittiness would disagree with you: https://arstechnica.com/information-technology/2026/02/most-...
CloudBolt’s survey also examined how respondents are migrating workloads off of VMware. Currently, 36 percent of participants said they migrated 1–24 percent of their environment off of VMware. Another 32 percent said that they have migrated 25–49 percent; 10 percent said that they’ve migrated 50–74 percent of workloads; and 2 percent have migrated 75 percent or more of workloads. Five percent of respondents said that they have not migrated from VMware at all.
Among migrated workloads, 72 percent moved to public cloud infrastructure as a service, followed by Microsoft’s Hyper-V/Azure stack (43 percent of respondents).
Overall, 86 percent of respondents “are actively reducing their VMware footprint,” CloudBolt’s report said.
by dd82
3/8/2026 at 7:49:38 PM
It is easier to do in the cloud than it is to do with actual hardware though, because you'll need enough hardware to do the migration. There is a capital moat around that. I feel like the company that can figure out how to 100% safely live-migrate any VMware workload to another "cheaper" solution will do quite well.
by latchkey
3/8/2026 at 11:43:11 PM
The moat is compute. In my case, I always use Opus 4.6 in my work, but quite often I get a 504 error, and that's quite annoying. I get errors like that with Gemini too. I can't estimate if I'd get a similar number of errors with ChatGPT, since I use it very infrequently.
But imagine that at some point one of the big 3 (OpenAI, Anthropic, Google) gets very high availability, while the others have very poor availability. Then people would switch to them, even if their models were a bit worse.
Now, OpenAI has been building like crazy, and contracting for future builds like crazy too. Google has very deep pockets, so they'll probably have enough compute to stay in the game. But I fear that Anthropic will not be able to match OpenAI and Google in terms of datacenter build, so it's only a matter of time (and not a lot of time) until they'll be in a pretty tight spot.
by credit_guy
3/8/2026 at 10:08:10 PM
> All these guys are just commodity to us.
Just want to note something there:
Okay, take as a premise that AI really is 'intelligent', up to the point of making business decisions.
So, this all then implies that 'intelligence' is a commodity too?
Like, what I'm trying to drive at is that yours, mine, all of our 'intelligence' is now no longer a trait that I hold, but a thing to be used, at least as far as the economy is concerned.
We did this with memory and muscles previously. We invented writing, and so those with really good memories became just like everyone else. Then we did it with muscles in the industrial revolution, and so really strong people, or those with great endurance, became just like everyone else. Yes, there are many exceptions here, but they mostly prove the rule, I think.
Now it seems that we've done it with really smart people: we've made AI, and so they're going to be like everyone else?
by Balgair
3/8/2026 at 10:17:11 PM
Well, as of right now, mathematically and scientifically, the way an LLM works has nothing to do with how the human brain works.
by EQmWgw87pw
3/9/2026 at 12:39:50 PM
Neither does a pneumatic piston operate at all like a bicep, nor does an accounting book operate at all like a hippocampus. But both have taken enough of the load off those tissues that you'd be crazy to use the biological specimen for 99% of commercial applications.
by Balgair
3/9/2026 at 3:08:40 PM
A bicep and a piston both push and pull things, but an AI cannot do what a smart brain can, so I don’t think being smart will stop being an advantage. I mean, someone has to prompt the AI, after all. The mental ability to understand and direct them will be more important, if anything.
by EQmWgw87pw
3/9/2026 at 3:27:37 PM
Have you worked with the Claude agents a lot? They essentially prompt themselves! It's crazy. My meaning is not so much that intelligence will go away as a useful trait for individuals, but more that its utility to the economy will be a commodity, with grades and costs and functions. But again, I'm speculating out of my ass here.
In that, if you want cheap enough intelligence or expensive and good intelligence, you can just trade and sell and buy whatever you want. Really good stuff will be really expensive of course.
Like, you still need to learn to write and have that discipline to use writing in lieu of memory. And you still need to repair and build machines in lieu of muscles and have those skills. Similarly I think that you'll still need the skills to use AI and commoditized intelligence, whatever those are. Empathy maybe?
by Balgair
3/8/2026 at 11:15:32 PM
The way this thing "looks like a duck, swims like a duck, and quacks like a duck" has nothing to do with the way a real duck "looks like a duck, swims like a duck, and quacks like a duck". Who cares, as long as the end results are close (or close enough for the uses they are put to)?
Besides, "has nothing to do with how the human brain works" is an overstatement.
"The term “predictive brain” depicts one of the most relevant concepts in cognitive neuroscience which emphasizes the importance of “looking into the future”, namely prediction, preparation, anticipation, prospection or expectations in various cognitive domains. Analogously, it has been suggested that predictive processing represents one of the fundamental principles of neural computations and that errors of prediction may be crucial for driving neural and cognitive processes as well as behavior."
https://pmc.ncbi.nlm.nih.gov/articles/PMC2904053/
https://maxplanckneuroscience.org/our-brain-is-a-prediction-...
by coldtea
3/9/2026 at 2:07:35 AM
But the end results aren’t actually close. That is why frontier LLMs don’t know you need to drive your car to the car wash (until they are inevitably fine-tuned on this specific failure mode). I don’t think there is much true generalization happening with these models - more a game of whack-a-mole all the way down.
by anon373839
3/9/2026 at 12:18:40 AM
The human doesn't just predict. It predicts based upon simulations that it runs. These LLMs do not work like this.by bmitc
3/9/2026 at 7:25:58 AM
If you're able to predict, you're able to simulate.
by groestl
3/9/2026 at 12:18:07 AM
So? Does a submarine swim?
by _aavaa_
3/8/2026 at 11:10:37 PM
> So, this all then implies that 'intelligence' is then a commodity too? Like, I'm trying to drive at that yours, mine, all of our 'intelligence' is now no longer a trait that I hold, but a thing to be used, at least as far as the economy is concerned.
This is obviously already the case with the intelligence level required to produce blog posts and article slop, generate coding-agent-quality code, do mid-level translations, and things like that...
by coldtea
3/8/2026 at 7:24:34 PM
> someone else
We have basically 4 companies in the world one can seriously consider, and they all seem to heavily subsidise usage, so under normal market conditions not all of them are going to survive.
by oytis
3/8/2026 at 10:02:04 PM
OpenRouter shows that commodity API providers have figured out how to do this unsubsidized. The training runs aren’t priced in, but the cost of inference is clearly pretty cheap.
by dghlsakjg
3/8/2026 at 7:04:08 PM
Ya, agreed. This makes me think that (long term) the AI race won’t be won on the merits of individual models, but on pricing. I think Google has some strong advantages here because they know how to provide cheap compute, and they already have a ton of engineers doing similar things, so it’s a marginal cost for them instead of having to hire and maintain whole devoted teams.
by pinkmuffinere
3/8/2026 at 7:27:26 PM
AI consumes entire data centers of compute. You aren’t tucking a few racks into a corner of a data center, you are building entirely new ones. There will be whole devoted teams.
by rblatz
3/8/2026 at 7:43:11 PM
But Google already builds data centers. Will there really be devoted AI-datacenter teams? Or will they just expand the normal datacenter teams, and ask them to use GPUs/TPUs instead of CPUs?
by pinkmuffinere
3/8/2026 at 7:51:56 PM
> As such it’s unclear what the prize at the end of the present race to the bottom is.
It's a market worth many billions, so the prize is a slice of that market. Perhaps it is just a commodity, but you can build a big company if you can take a big slice of that commodity, e.g. by building a good product (Claude Code) on top of your commodity model.
by remus
3/8/2026 at 8:35:48 PM
The revenue slice is there; the problem is that in a race to the bottom like we’re in now, there isn’t much profit at the bottom. And these companies desperately need profit to justify the gigantic capital spend and the depreciation tidal wave that’s on the horizon. There’s no clear way now that things don’t get really ugly pretty quickly.
by cmiles8
3/8/2026 at 8:27:04 PM
The entire point of a race to the bottom is that your competitors keep reducing their prices until those billions disappear.
by adammarples
3/8/2026 at 7:19:36 PM
Unfortunately this is why Anthropic is so aggressive about preventing Claude subscriptions from being used with other tools.
by spiffytech
3/8/2026 at 7:51:23 PM
According to this article, they can't even service the amount of paying customers that they have.
by latchkey
3/8/2026 at 10:51:05 PM
They should put their prices up then.
by tonyedgecombe
3/8/2026 at 7:52:21 PM
Let me explain a possible moat with an example. I have curated my YouTube recommendations over the years. It knows my likes and dislikes very well. It knows a lot about me.
The same moat exists in interactions with Claude. Claude remembers so many of my preferences. It knows that I work in Python and Pandas and starts writing code for that combination. It knows what type of person I am and what kind of toys I want my nephews and nieces to play with. These "facts" about the person are the moat now. Stack Overflow was a repository of "facts" about what worked and what didn't. Those facts, or user chat sessions, are now Anthropic's moat.
by ghywertelling
3/8/2026 at 8:36:32 PM
It takes about 30 seconds to export all that into a file and take your history elsewhere. There’s no moat there.
by cmiles8
3/8/2026 at 8:00:43 PM
“Hey Claude, write out a markdown file of all of my preferences so any AI agent can pick up where you left off”
by dghlsakjg
3/8/2026 at 8:06:49 PM
In fact, here, I'll do it myself.
by politelemon
3/8/2026 at 8:37:16 PM
You are missing the correlations that Claude can derive across all these user sessions, across all users. In Google Analytics, when I visit a page and navigate around until I find what I was looking for (or don't), that session data is important for showing website owners how to optimize. Even in Google search results, when I click on the 6th link and not the first, it sends a signal about how to rearrange the results next time, or even personalize them. That same paradigm will be applicable here. This is network effects, personalization, and ranking coming together beautifully. Once Anthropic builds that moat, it will be irreplaceable. If you doubt it, ask all users to jump from WhatsApp to Telegram or Signal and see how difficult it is. When Anthropic gives you the best answer without asking too much, the experience is 100x better.
by ghywertelling
3/8/2026 at 9:59:03 PM
The underlying technology is a thin layer of queryable knowledge/“memories” in between you and the LLM, that in turn gets added to the context of your message to the LLM. Likely RAG. It can be as simple as an agents.md that you give it permission to modify as needed. I really don’t think that they are correlating your “memories” with other people’s conversations. There is no way for the LLM to know what is or isn’t appropriate to share between sessions, at the moment. That functionality may exist in the future, but if you just export your preferences, it still works. The moat - at this point in time - is really not as deep and wide as you are making it out to be. What you are imagining doesn’t exist yet. Indexing prior conversations is trivially easy at this point; you can do it locally using an API client right this moment.
Besides all that, you will be shocked at how quickly a new service can reconstruct your preferences. I started a new YouTube account, and it was basically the same feed within a few days.
In any case, my feeling is that we should have learned at this point not to keep our data in someone else’s walled garden.
by dghlsakjg
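The "thin layer of queryable memories" described above can be sketched as a trivial prompt-assembly step. The file name (`agents.md`) and prompt format here are assumptions; real implementations may add retrieval and ranking (RAG) on top:

```python
# Hedged sketch of the memory layer: stored preference notes are simply
# prepended to each outgoing message before it reaches the model.
import os

def build_prompt(user_message: str, memory_file: str = "agents.md") -> str:
    """Prepend saved 'memories' to the user's message, if any exist."""
    if os.path.exists(memory_file):
        with open(memory_file) as f:
            memories = f.read().strip()
        if memories:
            return f"Known user preferences:\n{memories}\n\n{user_message}"
    return user_message
```

Exporting that same file into another provider's harness is exactly the 30-second portability the parent comments describe.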
3/9/2026 at 7:37:37 AM
> Besides all that, you will be shocked at how quickly a new service can reconstruct your preferences. I started a new YouTube account, and it was basically the same feed within a few days.
Because your location data, Wi-Fi name, etc. hone in on the fact that this is the same person as before. You are actually supporting my point rather than denying it.
by ghywertelling
3/8/2026 at 7:55:13 PM
You can have Claude write all these out to a file. Then you can feed them into another service.
by RcouF1uZ4gsC
3/8/2026 at 7:36:15 PM
> As such it’s unclear what the prize at the end of the present race to the bottom is.
Is it ever clear? Pretty much everything seems to be a senseless race to the bottom.
by timcobb
3/8/2026 at 7:05:55 PM
This is the new web hosting. All the valuations are absurd.
by seydor
3/8/2026 at 7:28:03 PM
Doesn't "web hosting" print money for Amazon?
by gruez
3/8/2026 at 7:54:25 PM
Just having strict control over context management in a session is a nice differentiator. Shared tooling between desktop and CLI is nice too. They've differentiated enough.
by gdilla