4/24/2025 at 6:53:31 PM
I read the article, and while it doesn't say or imply this, it's my takeaway; correct me if I'm wrong:
Model innovation is effectively converging and slowing down considerably. The big companies in this space doing the research are not making leap after leap with each release, and the downstream open-source projects are coming close to the same quality, or can in fact produce the same quality (e.g. DeepSeek or Llama), which is why it's becoming a commodity.
Around the edges, model innovation, particularly speedups in returning accurate results, will help companies differentiate, but fundamentally all this tech is shovels in search of miners, i.e. you aren't really going to make money hand over fist simply by being an LLM model provider.
In other words, this latest innovation has hit commodity level within a few short years of going mainstream. The winners are going to be the companies that build products on top of this tech, and as the tech continues to commoditize, the value proposition for pure research companies drops considerably relative to application builders.
To me this leaves a central question: when does it hit a relative equilibrium, where the technology and the applications on top of it have largely reached their maximum ability to add utility to the situations they apply to? That's the next question, and I think the far more important one.
One other thing: at the end of the article they wrote:
>Ultimately, businesses won’t rearrange themselves around AI — the AI systems will have to meet businesses where they are.
This is demonstrably untrue. CEOs are champing at the bit to reorganize their businesses around AI, as in AI doing the things humans used to do with the same effective results or better, so they can reduce staff across the board while supposedly maintaining the same output.
Look at the leaked Shopify memo for an example, or the "I can vibe code with an LLM, making software engineers obsolete" trend that has taken off as of late, if LinkedIn is to be believed.
by no_wizard
4/24/2025 at 7:37:16 PM
I agree with this, but I think it's still an open question whether anyone can build a successful product on top of the tech. There will likely be some, but it feels eerily similar to the dot-com boom (and then bust), when the vast majority of new products built on top of that (internet) technology didn't deliver and didn't survive. Most AI products so far are fun toys or interesting proofs, and mediocre when evaluated against other options. They'll need to be applied to a much smaller set of problems (which doesn't support the current level of investment) or find some new miracle set of problems where they change the rules.
Businesses are definitely rearranging themselves structurally around AI, at least to try to get the AI valuation multiplier, and executives have levels of FOMO I've never seen before. I report to a CTO, and the combination of 100,000-foot hype with a down-in-the-weeds focus on the "protocol du jour" (with nothing in between that looks like a strategy) is astounding. I just find it exhausting.
by skeeter2020
4/24/2025 at 7:45:53 PM
The dot-com boom is an apt analogy: the internet took off, we understood it had potential, but the innovation didn't all come in the first wave. It took time for the internet to bake, and then we saw another boom with the advent of mobile phones, higher bandwidth, and more compute per user.
It is still simply too early to tell exactly what the new steady state is, but I can tell you that where we're at _today_ is already a massive paradigm shift from what my day-to-day looked like 3 years ago, at least as a SWE.
There will be lots of things thrown at the wall, and the things that stick will have a big impact.
by adpirz
4/24/2025 at 8:21:05 PM
Other than constantly feeling gaslit about the quality of these tools, I can tell you where we are _today_ is basically the same in my day-to-day as it was three years ago.
Oh, except sometimes someone tells me I could use the bot to generate a thing, and it doesn't work, and I waste some time, and then do it manually.
by dingnuts
4/24/2025 at 7:18:13 PM
I would agree with this, and also say that it's been clear this is true for at least a year. Innovations like DeepSeek may not have been around a year ago, but it was very clear that "AI" is actually information retrieval and transformation, that the chat UI had limited applicability (nobody wants to "chat with their documents"), and that those who could shape the tech to match use cases would be the ones capturing the value. Just as SaaS uses databases, but creates and captures value by shaping the database to the particular use case.
by epistasis
4/24/2025 at 7:23:27 PM
So when do we get to the point where AI apps are just CRUD apps, essentially? RAG kinda feels like a better version of those to me.
by nemomarx
4/24/2025 at 7:24:52 PM
Now!
by babelfish
4/24/2025 at 7:56:28 PM
> This is demonstrably untrue. CEOs are champing at the bit to reorganize their businesses around AI, as in AI doing the things humans used to do with the same effective results or better, so they can reduce staff across the board while supposedly maintaining the same output.
Nah. Maybe tech CEOs. Companies are blocking AI wholesale at the direction of their security teams and/or only allowing an instanced version of MS Copilot, if anything. Other than writing emails, it doesn't do much for the average office worker, and we all know it.
The value is going to be in the apps that build on AI, as you said.
by bongodongobob
4/24/2025 at 8:49:20 PM
It certainly isn't a maybe: look at the recent Shopify memo leak and the way lots of companies are talking about AI.
Any company with any sort of large customer-service presence is looking at AI to start replacing a lot of customer-service roles, for example. There is huge demand for this across many industries, not only tech. Whether it actually delivers is the question, but the demand is there.
by no_wizard
4/24/2025 at 8:55:44 PM
Claiming these AIs "don't do much" overlooks the very real productivity gains already happening: automating tedious tasks and accelerating content creation. This isn't trivial, and it will lead to deeper integrations and streamlined (read: downsized) workforces. The reorganization isn't a distant fantasy; it's already here.
by warkdarrior
4/25/2025 at 3:42:20 AM
I don't disagree, but your average Excel jockey isn't going to build out these automation workflows, and likely neither will IT. I'm not saying AI isn't useful. I'm saying the average person doesn't know what to do with it.
by bongodongobob
4/24/2025 at 7:58:46 PM
> Companies are blocking AI wholesale at the direction of their security teams
What companies?
by borski
4/24/2025 at 8:42:54 PM
I know many IP-heavy and health-centric companies are blocking AI use severely. For example, pharma depends on huge amounts of secrecy and does not want any data leaked to OpenAI, and often has barely competent IT and security staff who don't know what "threat model" means. Those who deal with controlled health data also block with a heavy hand.
by epistasis
4/24/2025 at 8:53:57 PM
I imagine it'll take time for any of this tech to permeate, and the lower barrier to entry will see adoption faster, as is usually the case with new tech, but it'll make its way eventually. On-premise AI will be a thing.
by no_wizard
4/24/2025 at 10:03:58 PM
Once upon a time, they blocked Docker too. Things change.
by borski
4/24/2025 at 8:07:52 PM
One of the possible alternative routes is this:
Model providers and model labs stop open-sourcing and publishing their innovations and papers, and start patenting instead.
by o1inventor
4/24/2025 at 7:37:23 PM
> Model innovation is effectively converging and slowing down considerably. The big companies in this space doing the research are not making leap after leap with each release, and the downstream open-source projects are coming close to the same quality, or can in fact produce the same quality (e.g. DeepSeek or Llama), which is why it's becoming a commodity.
You're just showing how disconnected from the progress of the field you are. o3/o4 aren't even in the same universe as anything from open source. DeepSeek R1, Llama 4? Are you joking?
by vonneumannstan
4/24/2025 at 7:47:02 PM
Depends on how they're applied. I've had success using Llama, and while we check to see if OpenAI or Google's Gemini would give us any noticeable improvement, they really don't for our use case.
While newer models are certainly more capable on the whole, it doesn't mean I need all that capability to accomplish the business goal.
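For what it's worth, the check itself is cheap once everything speaks the same API. A rough sketch of the kind of swap-and-compare I mean, assuming the local Llama is served through an OpenAI-compatible endpoint (the URLs, model names, and prompt below are made-up placeholders, not our actual setup):

```python
# Sketch: run one representative prompt against a local Llama endpoint and a
# hosted model, then compare the outputs by eye or with your own scoring.
# Endpoints and model names are illustrative placeholders.
from openai import OpenAI

PROMPT = "Summarize this support ticket in one sentence: ..."

providers = {
    # Any server exposing an OpenAI-compatible API works here (vLLM, Ollama, etc.).
    "local-llama": (OpenAI(base_url="http://localhost:11434/v1", api_key="unused"), "llama3"),
    "hosted": (OpenAI(), "gpt-4o-mini"),  # reads OPENAI_API_KEY from the environment
}

for name, (client, model) in providers.items():
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {name} ---\n{resp.choices[0].message.content}")
```

If the cheaper model's answers are good enough for the business goal, the comparison ends there.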
by no_wizard
4/24/2025 at 8:59:11 PM
This is kind of a useless statement. If your use case is so easy that "old" models work for it, then obviously you won't care about or be following the latest developments, but it's just not accurate to say that DeepSeek R1 is equivalent to o3 or Gemini 2.5.
by vonneumannstan
4/24/2025 at 9:13:24 PM
Producing quality results is not the same thing as saying DeepSeek R1 is the equivalent of o3 or Gemini 2.5.
Again, it's not about capabilities alone (on this, many models lag behind; I already said as much). I follow these developments quite closely, and I purposely said results so as not to say they're equivalent in capability. They aren't.
However, if a business is getting acceptable results from older or cheaper models, then capability doesn't matter; the results do. Gemini 2.5 can be best of breed, but why switch if it shows no meaningful improvement in results for the business?
If I need more capability, or results are substandard, I can always upgrade. But this is like saying there's no room for cheaper processors, and that you'd be out of your mind not to use only the latest at all times, no matter the results.
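To make "meaningful improvement" concrete, that switch decision can be reduced to a small check: score the current and candidate models on a fixed sample of real tasks and only upgrade past some threshold. A toy sketch, where the scoring function, sample data, and threshold are placeholders invented purely for illustration:

```python
# Toy sketch of a results-based upgrade gate. `run_current` / `run_candidate`
# stand in for whatever calls the existing and prospective models; `score`
# stands in for your own quality check (exact match, rubric, human rating, ...).
from typing import Callable, List, Tuple

def should_upgrade(
    samples: List[Tuple[str, str]],      # (input, reference output) pairs
    run_current: Callable[[str], str],
    run_candidate: Callable[[str], str],
    score: Callable[[str, str], float],  # higher is better, e.g. 0..1
    min_gain: float = 0.05,              # switch only for a meaningful gain
) -> bool:
    current = sum(score(run_current(x), ref) for x, ref in samples) / len(samples)
    candidate = sum(score(run_candidate(x), ref) for x, ref in samples) / len(samples)
    return candidate - current >= min_gain

# Stubbed example: both "models" answer identically, so there is no reason to switch.
if __name__ == "__main__":
    data = [("2+2?", "4"), ("capital of France?", "Paris")]
    answer = lambda prompt: {"2+2?": "4", "capital of France?": "Paris"}.get(prompt, "")
    exact = lambda out, ref: 1.0 if out.strip() == ref else 0.0
    print(should_upgrade(data, answer, answer, exact))  # False -> keep the cheaper model
```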
by no_wizard
4/24/2025 at 9:46:10 PM
That's not what GP was saying, though. To stay with that analogy, the assertion was that "all processors are kinda the same, there's no real qualitative difference," which sounds pretty strange. It's somewhat accurate if your use case is covered by the average processor and the faster one doesn't benefit you. They're not equal, but all of them surpass your needs.
> If I need more capability, or results are substandard, I can always upgrade
You wouldn't be able to upgrade (and see improved results) if the model you use today were close to equal to the top of the line.
by luckylion
4/24/2025 at 10:03:48 PM
That wasn't the assertion. I was talking about the results, not the models themselves, and not, strictly speaking, their overall capabilities. If the results show no meaningful improvement from moving to a newer model, why would I want to switch when I'm not getting any tangible improvement?
by no_wizard
4/25/2025 at 10:51:30 PM
> The big companies in this space doing the research are not making leap after leap with each release, and the downstream open-source projects are coming close to the same quality, or can in fact produce the same quality (e.g. DeepSeek or Llama), which is why it's becoming a commodity.
This was the assertion: "open source is close/equal in quality," not "open source is enough for plenty of use cases; not everyone needs the top of the line."
by luckylion
4/24/2025 at 9:26:38 PM
[flagged]
by Der_Einzige