12/31/2025 at 12:31:40 AM
AI is going to be a highly competitive, extremely capital-intensive commodity market that ends up in a race to the bottom, competing on cost and efficiency of delivering models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc. The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result: OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
by avalys
12/31/2025 at 6:37:36 AM
Something nobody's talking about: OpenAI's losses might actually be attractive to certain investors from a tax perspective. Microsoft and other corporate investors can potentially use their share of OpenAI's operating losses to offset their own taxable income through partnership tax treatment. It's basically a tax-advantaged way to fund R&D - you get the loss deductions now while retaining upside optionality later. This is why the "cash burn = value destruction" framing misses the mark.
For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation. The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?" If you're sitting on a pile of profitable cloud revenue like Microsoft, suddenly OpenAI's burn rate starts looking like a pretty efficient way to minimize your tax bill while getting a free option on the AI leader.
This also explains why big tech is so eager to invest at nosebleed valuations. They're not just betting on AI upside, they're getting immediate tax benefits that de-risk the whole thing.
by jotras
12/31/2025 at 7:25:16 AM
> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation
I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.
by Jare
12/31/2025 at 9:56:42 AM
It all depends on the actual numbers. Consider this simplified example: if you are offered a deal that requires you to lay down 10 billion today and it has a 5% chance to pay out 150 billion tomorrow, your accountants will tell you not to take this deal because your expected return is -2.5 billion. But if you can offset 3 billion in cost to the taxpayer, your expected return suddenly becomes $500 million, making it a good deal that you should take every time.
by sigmoid10
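For readers who want to check the arithmetic, here is a minimal sketch of that hypothetical example in Python (the figures are the commenter's illustrative numbers, not real OpenAI or Microsoft data):

    # Illustrative numbers from the comment above, not real data.
    investment = 10e9     # laid down today
    payout = 150e9        # payoff if the bet works
    p_success = 0.05      # 5% chance of success
    tax_shield = 3e9      # losses offset against other taxable income

    ev_plain = p_success * payout - investment
    ev_with_shield = p_success * payout - (investment - tax_shield)

    print(f"expected value, no shield:   {ev_plain / 1e9:+.2f}B")        # -2.50B
    print(f"expected value, with shield: {ev_with_shield / 1e9:+.2f}B")  # +0.50B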
12/31/2025 at 10:47:36 AM
I get that this example is simplified, but doesn’t the maths here change drastically when the 5% changes by even a few percentage points? The error bars on OpenAI's chance of success are obviously huge, so why would this be attractive to accountants?
by Fraterkes
12/31/2025 at 1:24:04 PM
That's why you have armies of accountants rating stuff like this all day long. I'm sure they could show you a highly detailed risk analysis. You also don't count on any specific deal working, you count on the overall statistics being in your favour. That's literally how venture capital works.
by sigmoid10
12/31/2025 at 1:52:00 PM
(I think) I get how venture capital works, my point is that the bullish story for OpenAI has them literally restructuring the global economy. It seems strange to me that people are making bets with relatively slim profit margins (an average of 500m on a 10b investment in your example) on such volatile and unpredictable events.
by Fraterkes
12/31/2025 at 2:07:37 PM
I think you’re right that the critical assumption in that example is the 5pct rather than the tax treatment.
by jbs789
12/31/2025 at 9:59:03 PM
AI has a lot lower bar to clear to upend the tech industry compared to the global economy. Not being in on AI is an existential risk for these companies.
by sdwr
12/31/2025 at 11:46:13 PM
False. The existential risk is in companies smoking the AI crackpipe that sama (begging your pardon) handed them, thinking it feels great and then projecting[1] that every investment will hit like the first, and continuing to buy the <EXPLETIVE> crack that they can't afford, and their investors can't afford, and their clients can't afford, their vendors can't afford, the grid can't afford, the planet can't afford, the American people can't afford, and sama[2] can't afford, _because it's <EXPLETIVE> crack_!
The wise will shut up and take the win on the slop com bubble.
[1]: https://en.wikipedia.org/wiki/Chasing_the_dragon
[2]: For those following along at home, sama is Sam Altman, he was a part of the Y Combinator community a while back: https://news.ycombinator.com/threads?id=sama
by Cheezmeister
12/31/2025 at 5:12:03 PM
What if your 10B investment encourages others to invest 50B and much of that makes it back to you indirectly via selling more of your core business?
I may be way off, but to me it seems like the AI bubble is largely a way to siphon money from institutional investors to the tech industry (and try to get away with it by proxying the investments) based on a volatile and unpredictable promise?
by sakjur
1/1/2026 at 12:16:22 AM
This reminds me of the scene in Margin Call [1] when the analyst discovers that their assumptions for the risk of highly leveraged positions are inaccurate.
by willturman
12/31/2025 at 2:52:38 PM
I'm pretty sure the armies of accountants would have rated it higher if the cash flow were positive rather than negative. Negative can't be good even while accounting for taxes.
by rat9988
12/31/2025 at 9:00:22 PM
Those 150 billion will be taxable at the same (hypothetically 30%) tax rate, reducing the expected return by 45bn * 5% chance. The expected return is still negative; all this bet does is shift tax liabilities in time, which admittedly would matter to some people who subscribe to short-termism.
by socialcommenter
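Carrying the same illustrative numbers through this objection - taxing both the loss and the eventual payout at 30% - the expected value goes back underwater. This is a sketch of the thread's hypothetical figures, not a model of any real tax structure:

    investment = 10e9
    payout = 150e9
    p_success = 0.05
    tax_rate = 0.30

    tax_shield = tax_rate * investment           # 3B offset on the loss
    after_tax_payout = payout * (1 - tax_rate)   # 105B kept if the bet pays off

    ev = p_success * after_tax_payout - (investment - tax_shield)
    print(f"expected value, both sides taxed: {ev / 1e9:+.2f}B")  # -1.75B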
1/1/2026 at 11:00:36 AM
I guess to truly calculate it you need to estimate how long it will take to get the ROI (i.e. reach the point where you need to pay taxes on the 150 billion), and add back what you can earn by investing the money you didn't have to pay taxes on. I'm not sure what the magnificent 7 can expect as an ROI on invested money though, given that they tend to have enough cash to invest anyway and just pay out dividends.
by jaggirs
12/31/2025 at 3:20:27 PM
Thank you, that made perfect sense and in a very simple (simplified but relevant) way. Besides the idea that such risks get aggregated over a portfolio, I can also imagine how the raw numbers flipping from - to + may be useful to paint a bet you want to take anyway for strategic reasons as acceptable to accounting.
by Jare
12/31/2025 at 10:21:43 AM
This applies to any spending Microsoft does. What does it have to do with OpenAI?
Also, classifying business expenses as "cost to the taxpayer" seems less than useful, unless you are a proponent of simply taxing gross receipts. Which has its merits, but then the discussion is about taxing gross receipts versus income with at least some deductible expenses, not anything to do with OpenAI.
by lotsofpulp
12/31/2025 at 11:17:23 AM
If your accountants suggest that you take a single 5% chance deal, they probably skipped maths and statistics and you should fire them.
It's the dumb as rocks MBAs that will go head first into the 5% chance deal.
by well_ackshually
12/31/2025 at 12:22:00 PM
I guess the reasoning assumes that you have multiple eggs in your basket. A 95% chance of failure is bad if you're pinning the whole business on it, but if you have a variety of 5% chance deals, then it can make sense to pursue them, which is basically what venture capitalists do.
by yossi_peti
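A quick sketch of that portfolio logic: with many roughly independent 5%-chance bets, the odds that at least one pays off climb quickly. Independence is the weak link, as a later reply points out:

    p_single = 0.05
    for n in (1, 5, 10, 20, 50):
        # probability that at least one of n independent bets succeeds
        p_any = 1 - (1 - p_single) ** n
        print(f"{n:>2} bets -> P(at least one hit) = {p_any:.1%}")
    # 1 bet ~5.0%, 10 bets ~40.1%, 50 bets ~92.3%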
12/31/2025 at 2:35:02 PM
The whole thing will crash in the next few years. The economy will have its Wile E. Coyote moment, all these "valuations" will evaporate overnight, the Shitcoins will instantly go to zero, the music will stop, and the bagholders will look down to find themselves in possession of their shiny new bag of pet rocks. (A few lumps of coal.) All of these mental flipflops people are using to attempt to justify current insane valuations will be revealed as the evidence of intellectual bankruptcy that they are.
by smeeagain2
12/31/2025 at 6:00:15 PM
It’s easy to predict that, but when has “the whole thing” ever crashed?
Sure, markets can crash, but this idea of “go to zero” never happens.
That’s the whole calculus of these investments. Many of them are expected to fail.
The fact that it’s painful to the average person means nothing to the people who run the system.
by dangus
12/31/2025 at 6:06:09 PM
> 'this idea of “go to zero” never happens.'
Apparently you've never lived through the crash of a 'currency' which is based on a flawed mathematical concept. You'll get to experience what that's like (for the first time in history!) when it's revealed there is some kind of subtle flaw in SHA-256 which renders it worthless as the basis of anything important, let alone a 'currency.'
How do you sell your 'coin' to anyone when the market disappears at the speed of news? Yes, it will go to ZERO, INSTANTLY.
Guess what happens when the crypto crash is combined with a) collapse of the real estate bubble, especially commercial real estate; b) the ongoing IT crash that is only just getting started; c) whatever damage the (current, red-flavored) orangutan in the White House manages to accomplish in his 3+ remaining years of hell; d) fear of looming war; e) economic fallout from COVID which is still ongoing and expanding (hint--many destroyed businesses and people out of work); f) a thousand other icebergs, minefields, and financial hazards confronting us in the near future? Buckle up!
by smeeagain2
12/31/2025 at 6:08:51 PM
I don’t really see what’s relevant about crypto nonsense in this context. We are really talking about the overall economy, especially in the tech sector.
Even if every cryptocurrency becomes worthless overnight, that doesn’t represent the market going to zero.
I see you’ve edited your comment with more doom and gloom. It’s easy to view everything as a bubble when you’re in a negative mental space.
> a) collapse of the real estate bubble, especially commercial real estate;
Any proof of this bubble? Housing construction continues to lag demand. Offices are largely RTO and Covid-era remote jobs are basically legacy and grandfathered. Every remote employee I know who was laid off in the past couple years had to get a hybrid/in-person job. You can’t just assume 2008 is going to happen again without some real data that shows real estate instability. Where are the poorly qualified borrowers?
> b) the ongoing IT crash that is only just getting started;
That’s one industry of many. One specific industry struggling doesn’t mean much.
> c) whatever damage the (current, red-flavored) orangutan in the White House manages to accomplish in his 3+ remaining years of hell;
Lame duck presidency, he can’t crash shit. Congress will be unfriendly next year and already isn’t even very supportive within his own party.
> d) fear of looming war;
In what universe is any impending war impacting the American economy? You mean the one where defense contractors are hiring and the US is selling weapons to the nations who are doing the fighting?
> e) economic fallout from COVID which is still ongoing and expanding (hint--many destroyed businesses and people out of work);
You are gonna need to explain this one and back this up with some numbers that make sense.
> f) a thousand other icebergs, minefields, and financial hazards confronting us in the near future?
Sounds like internal anxiety demons that are not tangible.
Look, I’m in full agreement that AI will face some kind of correction or crash, but predicting once in a century catastrophe is a losing game.
by dangus
12/31/2025 at 6:14:04 PM
OK.by smeeagain2
12/31/2025 at 6:42:12 PM
> A 95% chance of failure is bad if you're pinning the whole business on it, but if you have a variety of 5% chance deals, then it can make sense to pursue them
This is only true if the probability distributions for the values of the individual deals are rather uncorrelated (or even better: stochastically mostly independent).
by aleph_minus_one
12/31/2025 at 1:27:09 PM
Venture capitalists never take on a single deal. The same way you shouldn't put all your life savings into one stock, even if it has a 90% chance of working out. That's not how any of this stuff works.
by sigmoid10
12/31/2025 at 3:45:40 PM
Would you take a 5% chance of earning 100 dollars if it would cost 1 dollar?
by matkoniecz
12/31/2025 at 4:32:59 PM
You cannot just scale down the numbers and pretend like the world around you doesn't exist. There isn't much I'll do with 1 dollar. There's a shitload Microsoft could do with 10 000 000 000 dollars.
Opportunity costs are a thing.
by well_ackshually
12/31/2025 at 6:02:29 PM
There actually comes a point where there’s not that many things you can do with your money when you have so much of it.
This is especially true when your investors/owners expect you to generate better returns than the risk-free rate.
by dangus
12/31/2025 at 2:23:22 PM
The taxes on returning profits to investors via dividends are quite high. You’d be looking at the corporate tax rate (35%) + the dividend tax rates (between 15 and 35%). For any company which may need to raise equity finance later, this is an awful deal - but growing a cash balance doesn’t do the job either.
So MSFT is effectively getting 2x the equity by putting money into OpenAI, it also conveys some financial engineering capability as they can choose to invest more when profits are high to smooth out cash flow growth.
by lumost
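To make the double layer of tax concrete, here is the compounding using the rates quoted above (a sketch; the 35% corporate rate is the commenter's figure and the 20% dividend rate is an assumed point in the 15-35% range, not current statutory rates):

    corporate_rate = 0.35
    dividend_rate = 0.20   # assumed, from the 15-35% range mentioned above

    kept = (1 - corporate_rate) * (1 - dividend_rate)
    print(f"share of pre-tax profit reaching shareholders: {kept:.0%}")      # 52%
    print(f"combined effective tax on distributed profit:  {1 - kept:.0%}")  # 48%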
by lumost
12/31/2025 at 3:27:00 PM
>The taxes on returning profits to investors via dividends are quite high.
Isn't that what buybacks are for?
by gruez
12/31/2025 at 1:31:44 PM
Your intuition is exactly correct. An investor with tax to offset can essentially access the same future upside at a discount.
However, this discussion will be a perfect introduction to "finances at this level", where about 60% of the action is injecting more variables until you can fit a veneer of quantification onto any narrative.
by evrydayhustling
12/31/2025 at 10:24:04 AM
That just doesn't sound right. This kind of thought process only works if you think you are guaranteed more than that the next year. It only works in crony capitalism where your friends in government put money in your pockets. It's where we are right now, but definitely not something that is sustainable or something to aspire to.by rjzzleep
12/31/2025 at 3:23:10 PM
If that $7-8 billion is spent on Azure, then it's basically a way to invest in data center capacity while also getting a big piece of OpenAI ownership at the same time.
At the same time, MS revenues are looking real good, causing the stock price to go up. It's a win win win maybe win huge situation.
by cjblomqvist
12/31/2025 at 7:20:23 AM
>> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields
So just a loss for governments, or in other words, socializing the losses.
by ludicrousdispla
12/31/2025 at 9:53:44 AM
OpenAI's losses are someone else's (taxed) earnings.
by nkmnz
12/31/2025 at 7:39:09 AM
Hi, I'm here to hold the bag?
by booi
12/31/2025 at 7:42:02 AM
We really should have thought of this before becoming peasants.
by Groxx
12/31/2025 at 8:46:39 AM
Have you tried not being poor?
by chrishare
12/31/2025 at 9:36:11 AM
It gives you a new opportunity to pull yourself up by the bootstraps. Until mommy and daddy come along with another cash infusion.
by nineteen999
12/31/2025 at 9:42:17 AM
You guys are getting bags?
by joncrane
12/31/2025 at 3:41:07 PM
Under our eyes
by aembleton
12/31/2025 at 7:44:08 AM
Your pension fund, yes.
by chinathrow
12/31/2025 at 8:33:44 AM
This comment makes even less sense than jotras’ comment.
Pension funds buy shares in businesses such as Microsoft. The money going into the pension fund is not typically a function of the tax paid by companies such as Microsoft, but rather comes from a combination of actuaries’ recommendations, payroll tax receipts, and politicians’ priorities.
Therefore a pension fund’s equity holdings, such as Microsoft, doing well means taxes can be lower.
by lotsofpulp
12/31/2025 at 12:28:29 PM
If only my country's (Germany's) pension fund were capital/stock based.
by jdiez17
12/31/2025 at 1:13:40 PM
Most countries' broadest defined benefit pensions are just simple wealth redistribution schemes from workers to non-workers, as opposed to being paid from funds that were previously invested.
In the USA, Social Security defined benefit pensions are cash from workers today going to non-workers today, same as Germany's national scheme (gesetzliche Rentenversicherung?).
The other defined benefit pension schemes are what are usually invested in equities, and the investment restrictions section in this document indicates Germany's "occupational pensions" can also invest in equities. (page 12)
https://www.aba-online.de/application/files/2816/2945/5946/2...
by lotsofpulp
12/31/2025 at 3:29:37 PM
>So just a loss for governments, or in other words, socializing the losses.
How's that different from any other sort of R&D incentive? Would you rather that companies return as much money as possible to shareholders, future growth be damned? What about other sorts of tax incentives, which are by definition also "just a loss for governments"? Are tax breaks for people with kids also "socializing the losses", given that most households don't have kids?
by gruez
12/31/2025 at 3:45:57 PM
I call it theft when the government steals money from people without kids to enrich parents.
by marbro
12/31/2025 at 7:34:48 PM
No, because income equals expenditure. They are one and the same.
by missedthecue
12/31/2025 at 9:44:24 AM
Private industry loses 10B. Governments are mostly affected as they have less free money to extract.
by philipallstar
12/31/2025 at 8:26:22 AM
Amazon already has not been paying any sort of income tax to the EU. There was a lawsuit in Belgium, but Amazon won that in late 2024 since they had a separate agreement in/with Luxembourg.
Speaking for the EU, all big tech is already not paying taxes one way or another, either using Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity in terms of "IP rights".
by pvtmert
12/31/2025 at 12:20:26 PM
The EU doesn't collect income/corporate tax, the individual countries do.
These big corps use holdings in low-tax jurisdictions like Ireland and Luxembourg, funnel all their EU subsidiaries’ revenues there, and end up paying 0 tax in the individual EU countries.
This system is actually legal; EU lawmakers should pass laws to prevent this.
by tacker2000
12/31/2025 at 2:00:52 PM
And let us not forget the millions and billions the global IT corporations pay in the EU in the form of social security taxes, income taxes, the jobs they create, and the further millions and billions in the form of purchases from local delivery companies, consultants, DC vendors, office suppliers, taxi companies, food and catering, and all the other local EU-based companies who benefit from having these giants walking among us.
by abc123abc123
12/31/2025 at 2:42:59 PM
If that's a substitute for corporate taxes, why even have them at all, instead of there needing to be schemes so specific that they have their own Wikipedia articles to describe them [1]? Either you think that corporate taxes should exist, and therefore companies shouldn't just get to opt out of them based on whether they can make a claim to benefiting the economy via trickling down, or you don't, in which case you might as well just state that directly.[1]: https://en.wikipedia.org/wiki/Dutch_Sandwich?wprov=sfla1
by saghm
12/31/2025 at 8:40:26 PM
Those taxes are paid by the individuals, not by the companies.
And the decision of how to distribute these (corporate tax) should be made by the government. Essentially, companies evading [corporate] tax decide themselves where to distribute that money. Obviously, they make decisions that drive more profits and income, not the public good. Even if it improves living conditions (ie. a delivery service would help the elderly), it still requires that person to be a user of the product. A layman/citizen cannot effectively utilize the benefits.
by pvtmert
12/31/2025 at 11:45:43 PM
A small price to pay for having your democracy subverted by hostile propaganda distributed through social media, your politicians influenced by lobbying, and your smaller businesses killed by giant corporate oligopolies.
by TheOtherHobbes
12/31/2025 at 4:25:11 PM
> EU lawmakers should pass laws to prevent this.
So EU lawmakers should determine the amount member countries collect in tax?
by rizzom5000
12/31/2025 at 8:49:32 PM
No, but the EU should somehow mandate that products and services built within the EU, and used within the EU or elsewhere, receive the benefit(s) in terms of taxation.
To give an (absurd) example: you work in country X, but the parent company is in country Y. Imagine your income tax is not going to where you reside and work but - the opposite of the usual arrangement - to country Y (~20-40% of the gross salary).
One day, your basic needs (electricity, water, etc) stop working. You call the relevant government department asking what's the problem. They reply saying they do not know and cannot afford to figure out or fix it because they do not have the money to do so.
But you've been paying at least 20% (and up to 46%) of your salary as income tax. Where did the money go? Why do you work here while someone else on the other side of the world gets that slice for free?
by pvtmert
1/1/2026 at 9:04:24 AM
The VAT part does work like that, taxable where the goods are being sold.
by jononor
12/31/2025 at 6:06:27 PM
No, actually the EU can't force this, you are of course right.
The countries like Ireland and Luxembourg need to stop granting these loopholes.
by tacker2000
12/31/2025 at 2:33:58 PM
I don't really understand this perspective.
What should be taxed?
Amazon, as an example, has servers in country X. Country X taxes the transaction or the income from the server company.
Amazon pays delivery drivers in country X to deliver goods, and the driver is taxed through various means (vehicle, fuel, payroll, etc).
What is Amazon doing in country X that should be taxed?
by parineum
12/31/2025 at 8:33:54 PM
Let me explain in depth.
> What should be taxed?
Profits of the company, like all other (local) companies pay.
> Amazon, as an example, has servers in country X. Country X taxes the transaction or the income from the server company.
Amazon has servers in Germany, but Germany is unable to tax the transaction or income from Amazon, because:
1. The user completes a transaction either on Amazon (buying a product) or in AWS (running an EC2 instance)
2. If the user is a business, there is no VAT. Because VAT is applied only to the end user. (To prevent compounding effect also). If it was the end-user, then end-user already pays for their VAT, usually around 18-20% in the case of an EC2 instance. But that has nothing to do with Amazon. User technically pays the VAT directly to the government depending on where he/she is located and where the server is.
3. Obviously Amazon does not sell the products or servers for free, they have a markup or profit margin, let's say 40% for a 100eur EC2 instance. So, 40eur lands into Amazon's bank account. While other 60eur goes to the operating expenses. (ie. Electricity, maintenance, employees' salaries, etc...)
4. In this case, Amazon should be taxed from that 40%, (ie. from the 40eur profit). Luxembourg corporate tax is about ~16-17%. Mind that US federal corporate tax is 21% itself. For the sake of simplicity, let's take 20% as the corporate tax. This would make 8eur going into the government's pocket. while 32eur stays as the profit with amazon.
5. All other companies provide the same service and has no magical entity outside pays that ~8eur to the government. Which in turn used to provide services to the citizens. (For example, Luxembourg has completely free public transportation that works 24/7, subsidized by the taxes)
6. However, Amazon having a magical entity, they declare all that 40eur profit belongs to the US entity due to the IP rights. They essentially say 100% of the things that are produced in Luxembourg, by employees in Luxembourg even owned by the US entity. Therefore, they do not pay any income tax as in fact there is no income on paper.
7. Instead, since they were able to save that 8eur, they can reduce the prices of the services up to that amount. But instead, Amazon usually reduces prices about half, reducing 4eur for customers and other 4eur going into Amazon's profit pocket.
8. It all seems nice so far, since users also benefit from reduced prices, right? Unfortunately, no. In the longer term, it hurts competition as other companies must pay taxes and losing the customers [to Amazon].
9. When there is no competition left, then Amazon can just start syphoning all the 8eur profit back to themselves. Even setting the prices as there is no longer any alternative to go.
10. Not only it hurts customers, it also hurts the random person on the street; as they receive services from government which were subsidized by the taxes. You may say that Amazon can or may invest even better products or services, but that's again not helping the layman unless he/she is an Amazon customer. And mind that a citizen does not need to be Amazon customer to get their electricity and water running.
> Amazon pays delivery drivers in country X to deliver goods, and the driver is taxed through various means (vehicle, fuel, payroll, etc).
Similar to the above, Amazon does not pay VAT or any other service taxes for any of the services they provide. But the driver does! It is even worse for the driver when he/she uses Amazon because on the net balance sheet, the driver pays income tax from his/her salary. Pays VAT for the services they receive. If he/she receives 1000eur salary at the end of the month, they can use at most about ~60% of their salary to receive goods and services. (~20% income tax + ~20% VAT). Hence, there is a corporate tax that balance these scenarios. But evading it causes more harm than good in the long run. > What is Amazon doing in country X that should be taxed?
All the profits (earnings, surpluses) they receive should be taxed.
by pvtmert
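The IP-royalty mechanism described in step 6 above can be sketched in a few lines. The figures are the commenter's illustrative 100 EUR example plus an assumed royalty equal to the local margin, not Amazon's actual accounts:

    revenue = 100.0          # price of the hypothetical EC2 instance, in EUR
    operating_costs = 60.0   # electricity, maintenance, salaries, ...
    corporate_tax = 0.20     # simplified rate from the example above

    # Without profit shifting: the 40 EUR margin is taxed locally.
    profit_local = revenue - operating_costs
    tax_local = corporate_tax * profit_local                   # 8 EUR to the government

    # With an "IP rights" royalty charged by the foreign parent that happens
    # to equal the local margin, taxable local profit drops to zero.
    ip_royalty = profit_local
    taxable_after_royalty = profit_local - ip_royalty          # 0 EUR
    tax_after_royalty = corporate_tax * taxable_after_royalty  # 0 EUR

    print(tax_local, tax_after_royalty)  # 8.0 0.0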
1/1/2026 at 1:59:06 AM
You can say Amazon doesn't pay VAT, but it's just as honest as saying Americans don't pay for tariffs.
The consumer pays a certain price for a product and a portion of that money goes to taxes and costs to Amazon (including things that are taxed, like driver salaries and fuel). Those taxes are collected on the way from Amazon to the end user in every country they pass through, more or less.
Amazon is creating commerce that is taxed. They aren't skating by for free.
by parineum
12/31/2025 at 3:47:51 PM
> What is Amazon doing in country X that should be taxed?
Its profits within that country (income minus actual expenses).
In practice, expenses are artificially inflated to transfer taxable profits to a jurisdiction with near-zero taxes.
by matkoniecz
12/31/2025 at 4:06:21 PM
How can profit be calculated if a significant component of expenses is intellectual property and brand awareness which was paid for somewhere else?
Amazon can add up the costs to install and operate a datacenter or warehouse in country X, but most of the demand for services from that datacenter or warehouse will be due to expenses incurred in country Y.
by lotsofpulp
12/31/2025 at 9:03:56 AM
> Amazon already has not been paying any sort of income tax to the EU.
That should be expected, because
https://european-union.europa.eu/priorities-and-actions/acti...
> The EU does not have a direct role in collecting taxes or setting tax rates.
> There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.
Dec 2023.
> Speaking for EU, all big tech already not paying taxes one way or another, either using Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) and Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. Simply possible because all the earnings go back to the U.S. entity in terms of "IP rights".
Ireland (due to pressure from EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figure 5 and 6 in link below.
https://budgetmodel.wharton.upenn.edu/issues/2024/10/14/the-...
by lotsofpulp
12/31/2025 at 12:17:12 PM
> any sort of income tax to the EU.
It's clear that OP means "in the EU".
> Ireland (due to pressure from EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figure 5 and 6 in link below.
And Ireland fought against this tooth and nail. Yes, a country was fighting to have less income. All out of fear that the companies would leave the little tax haven. Did they leave? No ...
> See Figure 5 and 6 in link below.
Figure 7 is also interesting if we look at the tax income increase and the outbound.
by benjiro
12/31/2025 at 9:34:15 PM
I find it funny because:
1. Amazon reports 250bn$+ revenue for the entire EU in 2025 (of course, revenue != profit) while all 250bn$+ evaporates to somewhere. Their own page [1] reports 225k employees across the EU, meaning that each employee returns a whopping 1 million plus dollars! While being compensated less than 10% of their value!
2. In their own article [1], they boast how they invested (translated; smuggled money out) and enabled SMEs 20bn$+ revenue. (Like seriously, less than 10%?! actually goes back into the economy...)
3. Amazon says that they have invested 250bn$ in the EU since 2010. It is completely unknown what or where that was invested. I do not see my street lighting being improved by Amazon's investment or garbage being collected better.
4. Luxembourg's GDP is ~95bn$ in 2025. Amazon has contributed to that with the 0$ corporate tax. Obviously they employed about 4.5k people which they've decided to let go about 10% of them. Where the median/average yearly gross salary stands somewhere around 80k eur, it is hardly anywhere near 1mm+$ total income. I am guessing that they heat up the offices with burning the remaining cash...
[1]: https://www.aboutamazon.eu/news/job-creation-and-investment/...
For the date of the verdict for Amazon vs EU, apologies. The article date was November 2024 in the source [2].
[2]: https://www.techtimes.com/articles/308509/20241129/amazons-2...
For Ireland, I only knew of the similarities with Luxembourg and the specific laws allowing such loopholes in the pre-Brexit period. The source is certainly interesting and I need to dive deeper to understand it better.
by pvtmert
1/1/2026 at 3:44:19 PM
>1. Amazon reports 250bn$+ revenue for the entire EU in 2025 (of course, revenue != profit) while all 250bn$+ evaporates to somewhere. Their own page [1] reports 225k employees across the EU, meaning that each employee returns a whopping 1 million plus dollars! While being compensated less than 10% of their value!
An employee's "value" is not revenue / number of employees. There are many businesses where employees are reduced, but revenue does not decrease in direct proportion.
An employee's compensation is only related to what the employer thinks they can pay someone else and what the employee thinks another employer will pay them. Just like how the price of an apple is related to what the grocery store thinks a customer will pay for it and what the customer thinks a different grocery store will sell it for. (Obviously bounded on both sides by the minimum cost to produce and transport the apple, and by the maximum price a customer is able and willing to pay).
by lotsofpulp
12/31/2025 at 8:47:56 AM
> OpenAI's losses might actually be attractive to certain investors from a tax perspective.
OpenAI is anyway seeking a Govt Bailout for "National Security" reasons. Wow, I earlier scoffed at "Privatize Profits, Socialize Losses", but this appears to now be Standard Operating Procedure in the U.S.
https://www.citizen.org/news/openais-request-for-massive-gov...
So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
by lenkite
12/31/2025 at 2:38:39 PM
> Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
How do you square this thought with the actual rate of poverty being on a steady downward trend while billionaires do their things?
by parineum
12/31/2025 at 5:36:32 PM
Kindly use the Supplemental Poverty Measure (SPM) - which accounts for government benefits (e.g., tax credits, SNAP), taxes, and expenses like medical costs.
This does not show your "steady downward trend", but has fluctuated considerably over the last few years. It increased to 12.9% in 2024, compared to 7.1% in 2020-21. We will need to wait till the end of 2026 for the 2025 computation.
by lenkite
12/31/2025 at 8:40:15 PM
For the world?
by parineum
12/31/2025 at 8:43:49 PM
Global poverty reduction has slowed to a near standstill, with 2020–2030 set to be a lost decade - World Bank.
https://thedocs.worldbank.org/en/doc/ec3d46c25a822d6d248e86d...
Please note that if you exclude China, the trend of poverty reduction is laughable.
by lenkite
1/1/2026 at 1:50:17 AM
> Global poverty reduction has slowed to a near standstill, with 2020–2030 set to be a lost decade - World Bank.
"Slowed to a near standstill" means it's still moving in the right direction.
There may have been some global event in the 2020s that maybe had a bit of an impact on the global economy.
> Please note that if you exclude China, the trend of poverty reduction is laughable.
If you exclude the area of the world that used to be extremely poor but has benefitted massively from the wealth generated by creating products for the billionaires abroad, why would you exclude that?
by parineum
12/31/2025 at 9:30:41 AM
There's already a lot that the US taxpayer is on the hook for that's a lot less valuable than a bet on the next big thing in software, productivity, and warfare.
It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test, and it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
by alex43578
12/31/2025 at 10:50:44 AM
Hello, I'm British by birth.
That's pretty close to the story other Brits give themselves for why losing the empire was actually a good thing for the UK.
by ben_w
12/31/2025 at 4:26:31 PM
[dead]
by marbro
12/31/2025 at 11:30:10 AM
> It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test
This would make sense if every person was given similar opportunities, like providing quality education to all of our youngest and making higher education a mission rather than a business as a starter.
As a society we move at the speed of the weakest among us, we only move forward when we start lifting and helping the weakest and most vulnerable.
You also need to realize that not doing that work is also cause for other taxpayer money to be spent elsewhere, such as spending an average of 37k $ per incarcerated person, and that ignores all the damage that criminal might've caused, all the additional police staffing and personal security that is needed to be spent outside prisons, etc.
Those are complex systems, are you sure it wouldn't be better to spend the same gargantuan amount of money that's spent on millions of inmates and fighting crime into fighting the causes that make many fall into that?
Again, those are complex, but closed systems and the argument of "we shouldn't spend on X" often ignores the cost of not spending on X.
by epolanski
12/31/2025 at 2:25:32 PM
The US already spends 38% more than the OECD average on education per student, lagging only Luxembourg, Austria, Norway, etc - if you’re a student in America, you have access to plenty of resources.
You’re right that these are complex systems, and just pouring more tax dollars and more debt into them isn’t working. Portions of our society need to value education, value contributing to society instead of taking, and reject criminality - but those changes require more than blind spending.
by alex43578
12/31/2025 at 4:30:10 PM
Let me phrase it this way for you. The best universities are in the US for a lot of things. But they don't scale.
In another way, the top talent gets Ferraris for their tuition, the rest gets a bike. In a lot of European countries everyone can get the Toyota Camry of education, decent but not world class. That does scale though.
Spending isn't everything, it's how you apply that spending.
by hvb2
12/31/2025 at 8:47:14 PM
I very much disagree with your statement. A lot.
As a European I can assure you even public second-tier universities have excellent education.
Where they lag the rankings is where money matters: politics to be highly ranked and money for high impact research.
But when it comes to testing proficiency in e.g. science and math, the second university of Rome ranks higher than most Ivy League schools in the US ;)
by epolanski
12/31/2025 at 8:42:19 PM
That's a meaningless stat in absolute terms; the US lags other developed nations in spending as a % of GDP, and the level of primary and secondary education shows it. US adults lag in cognitive or even reading capabilities.
by epolanski
12/31/2025 at 4:58:08 PM
It is true that throwing money at problems is a lazy and ineffective way to address them. American education is very well funded in general, but very poorly executed. There is absolutely no room for arguments about the lack of money where the US is concerned. It is shameful for Americans to make such arguments.Much of the problem comes from a poor grasp of what education is and is for, and because of that, money and effort are not allocated properly. One source of the problem are various educational fads. I personally remember when computers were artificially jammed into school curricula for no good reason. There was absolutely no merit to what was being done. But how much do you think the companies selling that garbage made out?
Or consider the publishing industry that fleeces schools and students with 12978th editions of the same poorly-presented material packaged in overpriced books. Financially, education is quite cheap, but there are sectors of the economy devoted to convincing pedagogues and politicians that it isn't, and that what you need to do is buy in order to "change with the times". Sorry, but basic education isn't fast fashion. Materially, basic education is stable and cheap.
Another problem is that American culture is pragmatic to a fault. Americans have a long history of viewing education, particularly the university, with distaste, as some kind of "European", un-American, and aristocratic thing. This explains the appeal of the pragmatic turn of the university: you now go to university to "get a job". Of course, that isn't the core mission of the university, and most professions don't require anything the university might provide, especially not at these absurd costs (hence why GenZ is seeing something like a 1500% increase in pursuing trades).
We have a cultural momentum that must fizzle out or must be reshaped. Where the modern university specifically is concerned, its days may very well be numbered. It may very well be forced to undergo very painful changes, or crumble, with a new crop of smaller colleges taking their place. Where primary education is concerned, parents are increasingly taking their children out of the savage factory known as public education. This, too, may force public education to finally deal with its dysfunction, or collapse.
by lo_zamoyski
12/31/2025 at 3:29:11 PM
> … into fighting the causes that make many fall into that?
A morbid thought that would probably address the bulk of this: male birth control.
The backlash would be profound, it’ll never happen. But if there were a way to make a “perfect pill/shot/procedure” boys had maybe at birth to prevent unplanned pregnancies… just think about it.
I’m not even sure I’m advocating for it. Everyone says “education will fix all the things!” I think raising kids where the parents wanted to be parents would fix a whole lot, at least on the incarceration side.
by irishcoffee
12/31/2025 at 7:16:09 PM
And that wouldn’t be abused? We already means-test access to basic necessities; you don’t think “access to producing offspring” would be similarly gated?
by Ar-Curunir
1/1/2026 at 1:01:16 AM
Wouldn’t it be better for society if it were gated, at least compared to our current system which encourages those least able/suited to have children to have the greatest number of them? If we as a society are uncomfortable with society dictating how/when you have kids, society also shouldn’t be on the hook for providing for them - “no say, no pay”.by alex43578
12/31/2025 at 12:23:32 PM
No, that’s the system working as intended: there’s good private money to be made on incarcerating the poor and uneducated!
by c1sc0
12/31/2025 at 4:14:57 PM
As intrinsically social animals, we have general obligations toward other people that precede our consent. How these play out in practice will be determined by the limitations and conditions of the situation. But in general, such obligations radiate outward based on proximity of relation.Our first obligations are toward our immediate families. As the human race is essentially a large extended family, the obligations dissipate the further out we go. We do have a general obligation to help those in need, but this obligation is prioritized. In classical texts, this is called the ordo amoris or "order of love" (in the older, more technically accurate terminology, order of charity, where "charity" - from caritas - means willing the good of the other).
Now, to address your comment specifically...
> There's already a lot that the US taxpayer is on the hook for that's a lot less valuable than a best on the next big thing in software, productivity, and warfare.
For example? Whatever the benefits of LLMs, I find this relative exuberance unreasonable.
> It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work,
If someone able-bodied and of sound mind refuses to work, then we don't have an obligation to support someone like that. This is true. In fact, it would be uncharitable to enable their laziness, because it harms the character and virtue of that person. Of course, in practice, if someone you have determined is able to work is found starving and in danger of death, for example, then it is unlikely they are merely lazy. Would a man of sound mind allow himself to starve?
The manner in which we deal with such cases is a prudential matter, not a matter of principle. We need to determine how best to satisfy the principle in the given circumstances, and there is room for debate here.
> it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
If there is a humanitarian crisis somewhere in the world, for example, then there is a general obligation of the entire world to help those affected. How that happens, how that is coordinated, is a matter of prudence and implementation detail, as it were. Naturally, several factors enter the equation (proximity, wealth, etc).
by lo_zamoyski
12/31/2025 at 10:21:45 AM
> It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test
What about someone who works and still can’t afford enough housing/food?
> shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
I mean where’s the profit in that, am i right?
by nielsbot
12/31/2025 at 2:30:54 PM
> What about someone who works and still can’t afford enough housing/food?
Food stamps? The original comment is addressing those not working or studying, or staying off drugs.
>I mean where’s the profit in that, am i right?
We’re $38 trillion in debt. Digging a deeper hole isn’t a sound decision.
by alex43578
12/31/2025 at 9:52:21 AM
The modern welfare state is the compromise reached by capitalist democracies to stave off communist revolutions. If you’re going to kill off the welfare part, be ready for the uprising part.
by Ar-Curunir
12/31/2025 at 10:22:38 AM
That's where the surveillance and the militarized police force(s) come in. Especially the former now has reached extraordinary levels, given that almost all communication now is easily trackable.Compare that to when we still had revolutions, where it was very hard for government to know what is going on, and to find individuals without a huge effort.
I think revolutions have become next to impossible, unless it is lead by significant parts of the elite that controls at least part of the apparatus.
That's not even counting the far more sophisticated propaganda methods, so that many of the affected people won't even begin to target the actual culprits but are lead to chase shadows, or one another.
by nosianu
12/31/2025 at 12:03:54 PM
We still have revolutions because if enough people go out on the street it doesn’t matter how good your surveillance state is. You can’t kill/arrest 25% of your population. That is why Russia/China/etc are so scared to let any protests begin even with 5 people because if they grow there comes a point it can’t be stopped with violence.by victorbjorklund
12/31/2025 at 2:12:58 PM
You forgot gun control. Is it really a coincidence that the highest concentrations of rich people seem to be the places where citizens have the fewest rights to own guns?by gosub100
12/31/2025 at 2:40:25 PM
If you’re comparing Dubai and Abu Dhabi with New York City, there’s larger variation than, say, China and Brazil.by sigwinch
12/31/2025 at 3:31:36 PM
This is not accurate. Microsoft recognizes OpenAI losses on their income statement, proportionate to their ownership stake. This has created a huge drag on EPS, along with a lot more EPS volatility than in the past. It's gotten so bad that Microsoft now points people to adjusted net income, which is notable as they had always avoided those games. None of this has been welcomed.
by louiereederson
12/31/2025 at 8:25:22 AM
Can you explain it in another way? What you are saying is that instead of losing 100% they lose 70%, and losing 70% is somehow good? Or are you saying the risk-adjusted returns are then 30% better on the downside than previously thought? Because if you are, I think people here are saying the risk is so high that it is a given they will fail.
by danielscrubs
12/31/2025 at 2:09:33 PM
Let's say they are paying for "research". The research is very expensive and has a high likelihood of being worthless, but a small likelihood of having value later. So by claiming the financial loss, they can offset the cost of the expensive research by 30%, making it an even more attractive gamble.by gosub100
12/31/2025 at 10:53:35 AM
Whilst that is an option, it won't cover the share price hit from the fallout, which would wipe out more than the debt: when the big domino falls, others will follow as the market panic shifts.
So we're kinda looking at a bank-level run on tech companies if they go broke.
by Zenst
12/31/2025 at 11:59:16 AM
> The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?"
Was that an organic "it's not A, it's B" or synthetic?
by visarga
12/31/2025 at 7:03:35 PM
> Something nobody's talking about
Nobody is talking about this because it's not a thing.
People here will shit on LLMs all day for being confidently incorrect, then upvote aggressively financially illiterate comments like this.
by rrrrrrrrrrrryan
12/31/2025 at 6:41:38 AM
It’s hardly a free option; by your numbers it’d be a 20-30% discount.
by rebuilder
12/31/2025 at 7:19:11 AM
Sure, but if there's no moat would you rather pay 100% or 80% until the credits run out? You reap the 100% spend in the meantime. Not everyone even has the no-moat discount.
by thrwaway55
12/31/2025 at 11:41:34 AM
Lmao this is ridiculous. If MSFT really wanted the tax benefits they should’ve just wholly acquired OAI long ago to capture the financial synergy you speak of.
by sod22
12/31/2025 at 11:14:30 PM
OpenAI is a corporation, so their losses do not flow up to their owners.
Their investors, if publicly traded like Microsoft, do have to take write-downs on their financial statements, but those aren't realized losses for tax purposes. The only tax "benefit" Microsoft might get from the OpenAI investment is writing off the amount it invested if/when OpenAI goes bankrupt.
by gamblor956
12/31/2025 at 12:28:02 PM
None of this is how taxes work.
by mbesto
12/31/2025 at 11:17:48 PM
Correct, for tax purposes corporate losses remain with the corporation. Microsoft and the other owners don't get the benefit of OpenAI's losses. At best, they get to write off their investment in OpenAI if the company dissolves, at which point their maximum tax write-off is their capital investment.Note: other people seem to be confused because companies can write off investments in corporate subsidiaries before the subsidiary is dissolved or sold...for book purposes. This creates what is known in the accounting world as a book-tax difference. If you have a few weeks to spare, look up tax provisions...
by gamblor956
12/31/2025 at 1:01:50 AM
There is a pretty big moat for Google: extreme amounts of video data on their existing services and absolutely no dependence on Nvidia and its 90% margin.
by fooblaster
12/31/2025 at 2:41:10 AM
Google has several enviable redoubts, if not moats: TPUs, mass infrastructure, their own cloud services, and delivery mechanisms on mobile (Android) and on every device (Chrome). And Google and YouTube are still the #1 and #2 most visited websites in the world.
by simonsarris
12/31/2025 at 3:08:00 AM
Not to mention security. I'd trust Google more not to have a data breach than OpenAI / whomever. Email accounts are hugely valuable but I haven't seen a Google data breach in the 20+ years I've been using them. This matters because I don't want my chats out there in public.
Also integration with other services. I just had Gemini summarize the contents of a Google Drive folder and it was effortless & effective.
by xivzgrev
12/31/2025 at 4:17:46 AM
While I don’t disagree with you, for historical purposes I think it’s important to highlight why Google started its push for 100% wire encryption everywhere all the time:
The NSA and GCHQ and basically every TLA with the ability to tap a fibre cable had figured out the gap in Google’s armour: Google’s datacenter backhaul links were unencrypted. Tap into them, and you get _everything_.
I’ve no idea whether Snowden’s leaks were a revelation or a confirmation for Google themselves; either way, it’s arguably a total breach.
12/31/2025 at 7:23:39 AM
When I worked at PayPal back in 2003/4, one of the things we did (and I think we were the first) was encrypt the datacenter backhaul connections. This was on top of encrypting all the traffic between machines. It added a lot of expense and overhead, but security was important enough to justify it.by jedberg
12/31/2025 at 9:47:32 AM
And yet Venmo, a PayPal company, publishes transaction data publicly by default, no need to decrypt anything ¯\_(ツ)_/¯
by guelo
12/31/2025 at 12:13:41 PM
Venmo publishes raw unencrypted transaction data? Or are you referring to their social network features?by hrimfaxi
12/31/2025 at 3:54:05 PM
where?
by matkoniecz
12/31/2025 at 6:41:01 AM
Not that I disagree with your assessment but in the spirit of HN pedantry - Google had a very significant breach where Gmail was a primary target, and that was “only” 16 years ago in mid 2009. So bad that it has its own Wikipedia page: https://en.wikipedia.org/wiki/Operation_Aurora
by dilyevsky
12/31/2025 at 8:59:40 AM
>very significant breach
That page says it was only 2 accounts and none of the messages within the mail was accessed. I wouldn't call that very significant.
by charcircuit
12/31/2025 at 5:01:55 AM
Is Google even required to inform you of a data breach?
by why-o-why
12/31/2025 at 5:29:32 AM
They're subject to California law, so yeah.
by bjt
12/31/2025 at 3:00:40 AM
Don't forget the other moat.
While their competitors have to deal with actively hostile attempts to stop scraping training data, in Google's case almost everyone bends over backwards to give them easy access.
by devsda
12/31/2025 at 6:13:26 AM
‘Actively hostile’ as in objecting to your content getting ripped off without permission?
by catoc
12/31/2025 at 7:11:53 AM
It's a matter of perspective. In this scenario both sides see the other as hostile, just as one would look at a war happening as an outside observer.
by satvikpendem
12/31/2025 at 10:08:42 AM
The biggest moat is the amount of money. Google has infinite amounts of money they print out of thin air (ads). They don't need complex entangled schemes with circular debts to prop up their operations.
by troupo
12/31/2025 at 4:59:10 AM
They also have one of the biggest negatives in that they abandon almost everything they build, so it’s hard to get invested in their products.
by DoesntMatter22
12/31/2025 at 7:12:52 AM
They don't abandon their money makers. That's the thing people don't get about the Google graveyard meme: they only cut things that obviously aren't working to make them more money.
by satvikpendem
1/1/2026 at 12:28:03 AM
Half of the things they build don't even have a chance to make money. But then people end up depending on their products and they shut them down or sell them.
by DoesntMatter22
12/31/2025 at 1:11:36 AM
I have yet to be convinced the broader population has an appetite for AI-produced cinematography or videos. Dependence on Nvidia is no more of a liability than dependence on electricity rates; it's not as if it's in Nvidia's interest to see one of its large customers fail. And pretty much any of the other Mag7 companies are capable of developing in-house TPUs + are already independently profitable, so Google isn't alone here.
by nateb2022
12/31/2025 at 1:55:52 AM
The value of YouTube for AI isn't making AI videos; it's that it's an incredibly rich source for humanity's current knowledge in one place. All of the tutorials, lectures, news reports, etc. are great for training models.
by ralph84
12/31/2025 at 2:04:31 AM
Is that actually a moat? Seems like all model providers managed to scrape the entire textual internet just fine. If video is the next big thing I don’t see why they won’t scrape that too.by Nextgrid
12/31/2025 at 5:57:04 AM
Scraping text across the entire internet is orders of magnitude easier than scraping YouTube. Even ignoring the sheer volume of data (exabytes), you simply will get blocked at an IP and account level before you make a reasonable dent. Even if you controlled the entire IPv4 space I’m not sure you could scrape all of YouTube without getting every single address banned. IPv6 makes address bans harder, true, but then you’re still left with the problem of actually transferring and then storing that much data.
by jmb99
12/31/2025 at 7:35:34 AM
For now, you actually get pretty far with Tor. Just reset your connection when you hit an IP ban by sending SIGHUP to the Tor daemon.
I did that when I was retraining Stable Audio for fun and it really turned out to be trivial enough to pull off as a little evening side project.
by earthnail
12/31/2025 at 9:54:44 AM
IPv6 doesn't make it "harder," as they would typically ban whole /48 prefixes.
by tucnak
12/31/2025 at 2:09:18 AM
And we're probably already starting to see that, given the semi-recent escalations in the game of cat-and-also-cat between YouTube and the likes of youtube-dl.
by monocasa
12/31/2025 at 4:05:14 AM
And reddit turned around and sold it all for a mess of pottage…by jakeydus
12/31/2025 at 7:13:54 AM
Sold being the operative word, rather than giving it away for free.by satvikpendem
12/31/2025 at 6:19:41 PM
Well, it is available for free either way. They pissed off their user base all for a horse that had already left the stable.
https://academictorrents.com/details/2d056b22743718ac81915f2...
by monocasa
12/31/2025 at 6:27:09 PM
Look at their stock price. They are doing very well since IPO, and much of it was revenue from selling their data.by satvikpendem
12/31/2025 at 6:35:06 PM
Google's $60m/yr is the only thing keeping them profitable. Mozilla's business model isn't really something to emulate, even if the stock market doesn't really see it that way.
by monocasa
12/31/2025 at 6:41:04 PM
Not really. Lots of companies have valuable data they sell and have been in business for decades just fine. It's even better for reddit because it's user generated so they don't even have to do anything. The users who left during the API debacle are not the vast majority of users which are generally casual and do not give a single shit about what happened, much as tech people like to think otherwise.by satvikpendem
12/31/2025 at 10:40:10 PM
The casual users (to say nothing of the massive uptick in bot traffic) generate some of the more useless data from an AI training perspective.by monocasa
12/31/2025 at 10:43:23 PM
Again, this is a techie take. Lots of people for example use ChatGPT for personal therapy and guess which subs their training data comes from, r/relationships etc. Those trying to use them for other means are comparatively less frequent.by satvikpendem
12/31/2025 at 3:16:22 AM
> Seems like all model providers managed to scrape the entire textual internet just fine
Google, though, has been doing it for literal decades. That could mean that they have something nobody else (except archive.org) has - a history of how the internet/knowledge has evolved.
by awesome_dude
12/31/2025 at 1:13:57 AM
If you think they are going to catch up with Google's software and hardware ecosystem on their first chip, you may be underestimating how hard this is. Google is on TPU v7. Meta has already tried with MTIA v1 and v2; those haven't been deployed at scale for inference.by fooblaster
12/31/2025 at 1:20:44 AM
I don't think many of them will want to, though. I think as long as Nvidia/AMD/other hardware providers offer inference hardware at prices decent enough to not justify building a chip in-house, most companies won't. Some of them will probably experiment, although that will look more like a small team of researchers + a moderate budget rather than a burn-the-ships we're going to use only our own hardware approach.by nateb2022
12/31/2025 at 1:26:44 AM
Well, Anthropic just purchased a million TPUs from Google because even with a healthy margin for Google, it's far more cost effective given Nvidia's insane markup. That speaks for itself. Nvidia will not drop their margin because it will tank their stock price. It's half of the reason for all this circular financing - lowering their effective margin without lowering it on paper.by fooblaster
12/31/2025 at 3:46:49 AM
And, don't forget everyone's buying from TSMC in every case!by fragmede
12/31/2025 at 1:33:07 AM
It's in Nvidia's interest to charge the absolute maximum they can without their customers failing. Every dollar of Nvidia's margin is your own lost margin. Utilities don't do that. Nvidia is objectively a way bigger liability than electricity rates.by margalabargala
12/31/2025 at 1:57:44 AM
it is in every business’s best interest to charge the maximum…by bdangubic
12/31/2025 at 3:39:16 AM
Utilities and insurance companies are two examples of businesses regulated to not charge the maximum, for public policy reasons.by wrs
12/31/2025 at 4:15:26 AM
Are we suggesting that Nvidia/Google/.. be regulated like utilities?by bdangubic
12/31/2025 at 4:13:19 PM
Just expanding on the above sentence “utilities don’t do that”. Which is why depending on Nvidia isn’t like depending on electricity.by wrs
12/31/2025 at 4:48:02 AM
[flagged]by margalabargala
12/31/2025 at 8:14:16 AM
Not GP and haven’t participated in this thread. I’m clueless on what the point in your earlier comment is. Can you elaborate, please?by AnonHP
12/31/2025 at 4:04:12 PM
I can try. Which part is confusing to you, the "nvidia will charge as much as it can" part or the "utilities won't" part?by margalabargala
12/31/2025 at 5:15:46 AM
I think it will be accepted by the broader population. But if generation is easy and cheap I wonder if there is demand. And I mean total demand in the segment. Will there be enough impressions to go around to actually profit from the content? Especially if storage is also considered.by Ekaros
12/31/2025 at 1:20:56 AM
The video data is probably good for training models, including text models.by Seattle3503
12/31/2025 at 5:06:26 AM
Given the fact that Apple and Coke both rushed to produce AI slop, and the agreements with Disney, we are going to see a metric fuck-ton of AI-generated cinema in the next decade. The broader population's tastes are absolute garbage when it comes to cinema, so I don't see why you need convincing. 40+ superhero films should be enough.by why-o-why
12/31/2025 at 4:19:17 AM
On paper, Google should never have allowed the ChatGPT moment to happen; how did a then non-profit create what was basically a better search engine than Google?
Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What is worse is that the search users are not the customers. The customers of Google Search are the advertisers, and they will always prioritise the needs of the customers and squander their moats as soon as the threat is gone.
by cdf
12/31/2025 at 7:48:41 AM
Google allowed this to happen because they listened to their compliance department and were afraid of a backlash if the LLM said something that could anger people.
Sergey Brin interview: https://x.com/slow_developer/status/1999876970562166968?s=20
This attitude also partially explains the black vikings incident.
by miohtama
12/31/2025 at 4:29:13 AM
Exactly, Google's business isn't search, it's ads. Is ChatGPT a more profitable system for delivering ads? That doesn't appear so, which means there's really no reason for Google to have created it first.by hattmall
12/31/2025 at 4:37:41 AM
There was a very negative "immune" response from the users when they perceived suggestions from ChatGPT as ads. This will be hard for them to integrate in a way that won't annoy users / will be better implemented than any other competitor in the same space.
Or perhaps we just deal with all AI across the board serving us ads.... this makes more sense unfortunately.
by razodactyl
12/31/2025 at 5:35:06 AM
There’s a very negative immune response to the idea of Netflix running ads. And yet they’re there, in the form of prominent product placement in all of their original series along with strategic placement in the frame to make sure they appear in cropped clips posted to social media and made into gifs.
Stranger Things alone has had 100-200 brands show up under the warm guise of nostalgia, with Coke alone putting up millions for all the less-than-subtle screen time their products get.
I’m certain AI providers will figure out how to slyly put the highest bidder into a certain proportion of output without necessarily acting out that scene in Wayne’s World.
by transcriptase
12/31/2025 at 5:37:46 AM
I suspect Google can last much longer with an AI chat engine that competes with OpenAI and other companies, without needing a profit from that particular product in a timely manner. I can't say the same for the others. Google is using its own money to fund this without much pressure for immediate profit on any particular deadline. They can rely on their other services for revenue and profit in the meantime.by mahirsaid
12/31/2025 at 1:59:26 PM
Google had an in-house chatbot that was never allowed to launch. I used to think that they were wrong but now I'm pretty sure they were right to not launch it. Users are very forgiving with a newcomer but not with an established company.by gniv
12/31/2025 at 4:35:45 AM
Think about it in terms of the research they put out into the ether though. The research grows into something viable, they sit back and watch the response and move when it makes sense. It's like that old concept of saying something wrong in a forum on purpose to have everyone flame you for being wrong and needing to prove themselves better by each writing more elaborate answers.
You catch more fish with bait.
by razodactyl
12/31/2025 at 1:09:09 AM
And yes, all their competitors are making custom chips. Google is on TPU v7. Absolutely nobody among their competitors is going to get this right on the first try - Google didn't.by fooblaster
12/31/2025 at 1:27:29 AM
Bigger problem for late starts now is that it will be hard to match the performance and cost of Google/Nvidia. It's an investment that had to have started years ago to be competitive now.by CharlieDigital
12/31/2025 at 2:17:03 AM
In this case the difference between its and it’s does alter the meaning of the sentence.by loloquwowndueo
12/31/2025 at 3:41:36 AM
Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.by stevenjgarner
12/31/2025 at 10:18:22 AM
> Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.
Tesla does not have a live video feed from (every) Tesla car.
by chroma205
12/31/2025 at 6:20:57 PM
That live data from X is mostly gonna consist of brainrot, egotistical founders, and rising vibe coders; that data has to be worth so much less than something like Reddit.by johnnyfived
12/31/2025 at 5:16:41 AM
The TAM for video generation isn't as big as the other use cases.by choudharism
12/31/2025 at 5:52:58 AM
I agree, but isn't the TAM for video generation all of movies, TV, and possibly video games, or all entertainment? That's a pretty big market.by xnx
12/31/2025 at 6:48:29 AM
What you’re competing for is people’s attention, and the TAM for that is the biggest there is.by dilyevsky
12/31/2025 at 5:18:08 AM
YT is also a giant corpus of English via the transcriptionby lokar
12/31/2025 at 6:25:00 AM
Hasn't it all been scraped by other ai companies already?by IncreasePosts
12/31/2025 at 1:58:58 AM
Your premise is wrong in a very important way. The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.
Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.
The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.
Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.
There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.
by adventured
12/31/2025 at 2:06:49 AM
> Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.
by zozbot234
12/31/2025 at 2:25:10 AM
DeepSeek certainly managed that on the training side but in terms of inference, the actual product was unusably slow and unreliable at launch and for several months after. I have not bothered revisiting it.by thom
12/31/2025 at 4:44:42 AM
Are you talking about the model or their service? There's plenty of options for using their models other than the official DeepSeek API.by snuxoll
12/31/2025 at 1:07:28 AM
> AI is going to be a highly-competitive, extremely capital-intensive commodity market
It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI's o1.
The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at more scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and research related to that, or increasingly powerful GPUs and TPUs.
But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.
> that ends up in a race to the bottom competing on cost and efficiency of delivering
One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications become practical.
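As a rough sanity check on that 30B figure (a back-of-envelope sketch; the bytes-per-weight values are assumed quantization levels, not a claim about any particular runtime):

    # Approximate weight footprint of a 30B-parameter model at different precisions.
    params = 30e9
    bytes_per_weight = {"fp16": 2.0, "int8": 1.0, "4-bit": 0.5}

    for name, b in bytes_per_weight.items():
        print(f"{name}: ~{params * b / 1e9:.0f} GB of weights")
    # fp16: ~60 GB, int8: ~30 GB, 4-bit: ~15 GB

At 4-bit quantization the weights alone land around 15 GB, so a 30B model plus KV cache is plausible on a 24-32 GB machine, which is roughly where "entry-level" laptops could be in a few years.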
---
Regarding OpenAI, I think it definitely stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing less than expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups need to establish profitability still. I think although initial expectations were for B2C models for these AI companies, most of the ones that survive will do so by pivoting to a B2B structure. I think it's fair to say that most businesses are more inclined to spend money chasing AI than individuals, and that'll lead to an increase in AI consulting type firms.
by nateb2022
12/31/2025 at 1:38:57 AM
> in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model
I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years' time.
by mark_l_watson
12/31/2025 at 2:41:24 AM
Very interested in this! I'm mainly a ChatGPT user; for me, o3 was the first sign of true "intelligence" (not 'sentience' or anything like that, just actual, genuine usefulness). Are these models at that level yet? Or are they o1? Still GPT4 level?by sigbottle
12/31/2025 at 3:13:27 AM
Not nearly o3 level. Much better than GPT4, though! For instance Qwen 3 30b-a3b 2507 Reasoning gets 46 vs GPT 4's 21 and o3's 60-something on Artificial Analysis's benchmark aggregation score. Small local models ~30b params and below tend to benchmark far better than they actually work, too.by logicprog
12/31/2025 at 3:13:51 PM
I think having massive amounts of high-bandwidth memory on consumer-grade hardware could become a reality via flash. How flash in SSDs works is that you have tens to hundreds of dies stacked on top of each other in the same package, and their outputs are multiplexed so that only one of them can talk at a time.
We do it like this because we can still get 1-2 GB/s out of a chip this way, and the ability to read hundreds of times faster is not justified for storage use.
But if we connected these chips to high-speed transceivers, we could get hundreds of GB/s of aggregate bandwidth out at the same time.
I'm probably oversimplifying things, and it's not that simple IRL, but I'm sure people are already working on this (I didn't come up with the idea), and it might end up working out and turn into a commercial product.
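A quick back-of-envelope on the aggregate-bandwidth idea (die count and per-die rate are assumptions picked from the ranges above):

    # If every stacked die could talk at once instead of being multiplexed:
    dies_per_package = 128          # "tens to hundreds" of stacked dies
    per_die_bandwidth_gb_s = 1.5    # "1-2 GB/s out of a chip"

    print(f"~{dies_per_package * per_die_bandwidth_gb_s:.0f} GB/s aggregate")  # ~192 GB/s

That is orders of magnitude above what a single multiplexed flash channel provides today, which is what makes the idea interesting despite the packaging and controller challenges.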
by torginus
12/31/2025 at 1:43:44 AM
> One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.
I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.
by airstrike
12/31/2025 at 1:59:06 AM
I don't think anyone knows for sure how much mileage/scalability LLMs have. Given what we do know, I suspect if you can afford to spend more compute on even longer training runs, you can still get much better results compared to SOTA, even for "simple" domains like text/language.by zozbot234
12/31/2025 at 3:10:57 AM
I think we're pretty much out of "spend more compute on even longer training runs" at this point.by airstrike
12/31/2025 at 7:18:23 PM
I haven't read much about it to understand what's going on, but the development of multi-modal models has also felt like a major step. Being able to paste an image into a chat and have it "understand" the image to a comparable extent to language is very powerful.by snet0
12/31/2025 at 12:26:36 PM
> I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI's o1
It was model improvements, followed by inference-time improvements, and now it's RLVR dataset generation driving the wheel.
by visarga
12/31/2025 at 5:46:44 AM
> But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off
Citation needed!
by skort
12/31/2025 at 3:25:57 AM
Google’s moat: Try “@gmail” in Gemini
Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?
by jfrbfbreudh
12/31/2025 at 3:43:24 AM
If the “moat” is not AI technology itself but merely sufficient other lines of business to deploy it well, then that’s further evidence that venture investments in AI startups will yield very poor returns.by avalys
12/31/2025 at 8:11:36 AM
It's funny that a decade ago the exit strategy of many of these startups would have been to get acquired by MSFT / META / GOOG. Now, the regulators have made a lot of these acquisitions effectively impossible for antitrust reasons. Is it better for society for promising startups to die on the open market, or get acquired by a monopoly? The third option -- taking down the established players -- appears increasingly unlikely.
by tjwebbnorfolk
12/31/2025 at 9:44:04 AM
> Now, the regulators have made a lot of these acquisitions effectively impossible for antitrust reasons.
Is there any evidence that this is the case? For a very big merger (like Nvidia and Arm tried), sure, but I can't think of a single time regulators stopped a big player from buying a startup.
by maeln
1/1/2026 at 9:06:11 AM
I'm sure you realize you're asking me to prove a negative? I don't have the ability to prove to you that something didn't happen or why. What I know is that a lot of deals aren't even being considered that once were, and antitrust is a huge factor in that consideration.
by tjwebbnorfolk
12/31/2025 at 6:33:26 AM
Try “@gmail” in Gemini
I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).
What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.
by onion2k
12/31/2025 at 10:11:58 AM
> I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible.
Google tells you this in about a hundred different popups and inline hints when you use any of its products
by troupo
12/31/2025 at 10:49:58 AM
I've used the Gemini app on my phone a fair bit recently and I've not seen it. That said, I don't think I've seen any popups either. Maybe I've blocked them...by onion2k
12/31/2025 at 10:58:12 AM
Users are trained to close those without reading.by mikkupikku
12/31/2025 at 10:13:45 PM
Gemini can't even create a Google Doc if you prompt it to.by dbbk
12/31/2025 at 4:04:21 AM
That kind of makes it sound like AI is a feature and not a product, which supports avalys' point.by edaemon
12/31/2025 at 4:10:50 AM
Also, Google doesn't have to finance Gemini using venture capital or debt, it can use its own money.by dartharva
12/31/2025 at 4:41:02 AM
DeepMind also solved the protein folding problem, so they have that going for them.by latentsea
12/31/2025 at 2:36:42 PM
Did it really? That is actually huge if so.by johnisgood
12/31/2025 at 4:47:43 AM
I tried it, but nothing happened. It said that it sent an email but didn't. What is supposed to happen?by venusenvy47
12/31/2025 at 4:56:54 PM
> AI is a world-changing technology
As stated in TFA, this simply has not been demonstrated, nor are there any artifacts of proof. It's reasonable to suspect that there is no special apparatus behind the curtain in this Oz.
From TFA: "One vc [sic] says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030."
by heresie-dabord
12/31/2025 at 5:39:41 AM
The railroads provided something of enduring value. They did something materially better than previous competitors (horsecarts and canals) could. Even today, nothing beats freight rail for efficient, cheap modest-speed movement of goods. If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.
We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.
It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.
by hakfoo
12/31/2025 at 8:41:22 AM
You're absolutely correct! ( ;) ) The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.
Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)
Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of production of tangible items. They won't solve housing.
My best bet is in medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicine controlled by a medical agent.
by choeger
12/31/2025 at 4:48:52 PM
Yes, exactly -- AI would only be analogous to railroads if passenger trains took you to the wrong location roughly 50% of the time.by re_chief
12/31/2025 at 8:35:35 AM
I was really hoping, and with a different administration I think there was a real shot, for a huge influx of cash into clean energy infrastructure. Imagine a trillion dollars (frankly it might be more, we'll see) shoved into clean energy generation and huge upgrades to our distribution.
With a bubble burst all we'd be left with is a modern grid and so much clean energy we could accelerate our move off fossil fuels.
Plus a lot of extra compute, that's less clear of a long term value.
Alas.
by cco
12/31/2025 at 8:03:45 AM
Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills and more. Once you fully integrate you will not switch. Also, being capital intensive is a form of moat.
I think we will end up with a market similar to cloud computing: a few big players with great margins creating a cartel.
by Chyzwar
12/31/2025 at 8:08:31 AM
> Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills and more.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. Other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
by mhuffman
12/31/2025 at 11:32:20 AM
I thought that, too, but lately I've been using OpenCode with Claude Opus, rather than Claude Code, and have been loving it. OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.
So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).
by atombender
1/1/2026 at 3:41:53 AM
LibreOffice didn't replace MS Office, and Octave didn't replace Matlab. It seems to me that there is even less of a moat with these products than there is with Claude Code, yet neither was commoditized.by yellowcake0
1/1/2026 at 12:58:12 PM
Google Workspace replaced Microsoft Office. It has around 70% market share. Microsoft Office is still dominant in much of the traditional enterprise, but the moat is shrinking.
I can use Claude in JetBrains IntelliJ and in Zed, I can use it with OpenCode, and there are lots of other agent tools. Everyone can build these tools around an LLM, and they're already being commodified.
The moat right now is the quality of the model, not the client. Opus is just so much better than the competitors, at least for now.
by atombender
1/1/2026 at 8:29:58 PM
Enterprise is what determines commoditization as that's where the lion's share of the revenue comes from. Maybe it will eventually happen to MS Office, maybe it won't, but until it happens it hasn't happened. Access to the full MS ecosystem, technical support and seamless integration, these things matter a lot to businesses, and I'm not even saying you're wrong, but I'm not convinced yet that something similar won't play out with coding agents.by yellowcake0
12/31/2025 at 10:15:17 PM
Does OpenCode work with their subscription Max plan or is it API pricing only?by dbbk
1/1/2026 at 10:44:21 PM
It can, but the auth & communication to Anthropic's APIs is basically reverse engineered from Claude Code. While it works, and it seems Anthropic is choosing to look the other way, it _may_ result in your account getting banned, as I'm pretty sure it's against their TOS. I haven't experienced this myself, but RooCode does something similar to OpenCode's approach and the maintainer has reported some bans [1].
Google on the other hand, is being very strict about keeping you locked in to their tools, unless you use API keys, of course.
[1] https://github.com/RooCodeInc/Roo-Code/pull/10077#issuecomme...
by gck1
12/31/2025 at 10:17:00 PM
Pretty sure it does. It just uses whatever you log in as.by atombender
12/31/2025 at 2:38:23 PM
> coming to Claude Code, but not there yet
Wasn't this released a couple of weeks ago?
by jghn
1/1/2026 at 12:59:23 PM
Also: Claude has already asked me several times the last few days if I want to install an LSP for various things. I've not seen any signs of LSP use yet.by atombender
12/31/2025 at 3:16:39 PM
It was added in 2.0.74 according to the changelog, but it's not functional: https://news.ycombinator.com/item?id=46357036. I've certainly not been able to use it.by atombender
12/31/2025 at 9:20:42 AM
A GPT wrapper isn't a moat.by iLoveOncall
12/31/2025 at 12:22:16 PM
A generic wrapper is not a moat, but the context is. Both the LLM provider and the wrapper provider depend on local context for task activities. The value flows to the context; the LLMs and wrappers are commodities. Whoever sets the prompts stands to benefit, not whoever serves the AI services.
12/31/2025 at 11:01:05 AM
Most things are wrappers around RDBMSs.by bogdan
12/31/2025 at 2:50:48 PM
Most true and interesting comment I've read on HN in a while!by blinding-streak
12/31/2025 at 9:35:12 PM
Except most of their product line is oriented towards software development which has historically been dominated by free software. I don't see developers moving away from this tendency and IMO Anthropic will find themselves in a similar position to JetBrains soon enough (profitable, but niche)... assuming things pan out as you describe.by eikenberry
12/31/2025 at 1:03:54 PM
If AI is capable of doing what they claim, then these aren't moats because they are just one prompt away from being replicated.by croes
12/31/2025 at 10:59:09 AM
All those features are basically prompts in various formats, not much of a moat.by mupuff1234
12/31/2025 at 11:19:02 AM
Like the railroad, internet, electricity, aviation or car industries before: they've all indeed been the future, and they all peaked (in relative terms) at the very early stages of those industries' development.
And among them the overwhelming majority of companies in those sectors died. Out of the 2000ish car-related companies that existed in 1925, only 3 survived to today. And none of those 3 ended up a particularly good long-term investment.
by epolanski
12/31/2025 at 1:32:53 AM
I, personally, use ChatGPT for search more than I do Google these days. It, more often than not, gives me more exact results based on what I'm looking for and it produces links I can visit to get more information. I think this is where their competitive advantage lies if they can figure out how to monetize that.by phyzix5761
12/31/2025 at 1:53:35 AM
We don’t need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn’t seen any decrease in search traffic. Apple also hinted at the fact that it didn’t see any decreased revenues from the Google Search deal.
AI answers are good enough, and there is a long history of companies who couldn’t monetize traffic via ads. The canonical example is Yahoo. Yahoo was one of the most trafficked sites for 20 years and couldn’t monetize.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs whether users are using Safari or Chrome. It’s hard to get people to switch
3rd issue: any money that OpenAI makes off search ads, I’m sure Microsoft is going to want their cut. ChatGPT uses Bing.
4th issue: OpenAI’s costs are a lot higher than Google’s and they probably won’t be able to command a premium in ads. Google has its own search engine, its own servers, its own “GPUs” [sic].
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
by raw_anon_1111
1/1/2026 at 3:56:19 PM
Given the audience here vs the general population, I can't help but wonder if it's just the alternate search engines like DuckDuckGo/Kagi/Bing that are losing search traffic. From the population sizes even the alternate ones that are Google-based might just not be enough to be visible in Google's numbers.by Izkata
12/31/2025 at 3:56:06 PM
> Google has been announcing quarter after quarter of record revenues and profits and hasn’t seen any decrease in search traffic.
This kind of thing may take some time to spread across the population.
by matkoniecz
12/31/2025 at 1:54:53 AM
I personally know people that used ChatGPT a lot but have recently moved to using Gemini. There's a couple of things going on but put simply - when there is no real lock in, humans enjoy variety. Until one firm creates a superior product with lock in, only those who are generating cash flows will survive.
OAI does not fit that description as of today.
by sod22
12/31/2025 at 1:47:22 AM
I'm genuinely curious. Why do you do this instead of Google Searches which also have an AI Overview / answer at the top, that's basically exactly the same as putting your search query into a chat bot, but it ALSO has all the links from a regular Google search so you can quickly corroborate the info even using sources not from the original AI result (so you also see discordant sources from what the AI answer had)?by aprilthird2021
12/31/2025 at 2:19:34 AM
The regular google search AI doesn’t do thinky thinky mode. For most buying decisions these days I ask ChatGPT to go off and search and think for a while given certain constraints, while taking particular note of Reddit and YouTube comments, and come back with some recommendations. I’ve been delighted with the results.by thom
12/31/2025 at 3:06:49 PM
I chuckled at "thinky thinky" mode. If some ai company used this branding it would win a lot of hearts and minds methinksby blinding-streak
12/31/2025 at 3:57:16 PM
It could backfire badly."Some messages are tough to write, let's thinky-think it through together."
"Golly, your codey-code could use some help. Ask me what I thinky-think!"
"I'm here to help you thinky-think while you worky-work!"
by heresie-dabord
12/31/2025 at 6:46:14 AM
I wouldn’t be surprised if ChatGPT was Pareto optimal for buying decisions… but I suspect there are a whole pile of Pareto optimal ways to make buying decisions, including “buy one of the Wirecutter picks” or “buy whatever Costco is selling”.by Marsymars
12/31/2025 at 3:21:47 PM
Even in the case where you have a good shortlist of items, the ability to then ask follow up questions in a conversational format is very useful for me. Anyway, just explaining why one might use ChatGPT for this rather than the Google search box, obviously your mileage is welcome to vary.by thom
1/1/2026 at 3:11:10 PM
I thought the same.
Have you considered that there was a massive physical infrastructure left behind by the original railroad builders, all compatible with future vehicles? Other companies were able to buy the railroads for low prices and put them to use.
Large Language Models change their power consumption requirements monthly, the hardware required to run them is replaced at a rapid rate too. If it were to stop tomorrow, what would you be left with? Out of date hardware, massively wasted power, and a gigantic hole in your wallet.
You could argue you have the blueprints for LLM building, known solutions, and it could all be rebuilt. The thing is, would you want to rebuild, and invest so much again for arguably little actual, tangible output? There isn't anything you can reuse, like others that came after could reuse the railroads.
by b3nji
1/1/2026 at 6:38:57 PM
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
Practically, what I'm finding is that whenever I ask Claude to search stuff on Reddit, it can't, but Gemini can. So I think the practical advantages are where certain organizations have unfair data advantages. What I found out is that LLMs work a lot better when they have quality data.
by mettamage
12/31/2025 at 1:49:37 AM
This will remain the case until we have another transformer-level leap in ML technology. I don’t expect such an advancement to be openly published when it is discovered.by variadix
12/31/2025 at 2:04:20 AM
> That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
I don't know why people always imply that "the bubble will burst" means that "literally all AI will die out and nothing will remain that is of use". The Dotcom bubble didn't kill the internet. But it was a bubble and it burst nonetheless, with ramifications that spanned decades.
All it really means when you believe a bubble will pop is "this asset is over-valued and it will soon, rapidly deflate in value to something more sustainable" . And that's a good thing long term, despite the rampant destruction such a crash will cause for the next few years.
by johnnyanmac
12/31/2025 at 6:39:50 AM
But some people do believe that AI is all hype and it will all go away. It’s hard to find two people who actually mean the same thing when they talk about a “bubble” right now.by mr_toad
12/31/2025 at 2:39:57 PM
I don't think anyone seriously believes AI will disappear without a trace. At the very least, LLMs will remain as the state of the art in high-level language processing (editing, translation, chat interfaces, etc.) The real problem is the massive over-promises of transforming every industry, replacing most human labor, and eventually reaching super-intelligence based on current models.
I hope we can agree that these are all wholly unattainable, even from a purely technological perspective. However, we are investing as if there were no tomorrow without these outcomes, building massive data-centers filled with "GPUs" that, contrary to investor copium, will quickly become obsolete and are increasingly useless for general-purpose datacenter applications (Blackwell Ultra has NO FP64 hardware, for crying out loud...).
We can agree that the bubble deflating, one way or another, is the best outcome long term. That said, the longer we fuel these delusions, the worse the fallout will be when it does. And what I fear is that one day, a bubble (perhaps this one, perhaps another) will grow so large that it wipes out globalized free-market trade as we know it.
by latchup
12/31/2025 at 2:50:37 PM
> Blackwell Ultra has NO FP64 hardware, for crying out loud...But can it run Crysis?
by zozbot234
12/31/2025 at 9:05:29 PM
Games tend to avoid FP64 compute as Nvidia has always gimped it in consumer GPUs, so you are somewhat lucky there. "Lucky" as in, you get to enjoy the broken-ass, glitchy FP32 physics that we've all grown to love so much.However, if you actually need the much higher precision of FP64 for scientific computing (like most non-AI data center users do) and extremely slow emulation is not an option, consider yourself fucked.
by latchup
12/31/2025 at 8:57:18 PM
Bubbles bursting aren't bad unless you were overinvested in the bubble. Consider that you'll be wiping your ass with DIMMs once this one bursts; I can always put more memory to good use.by juped
12/31/2025 at 9:40:02 PM
> Bubbles bursting aren't bad unless you were overinvested in the bubble.
That's what I am trying to say: every big technology player, every industry, every government is all in on AI. That means you and I are along for the ride, whether we like it or not.
> Consider that you'll be wiping your ass with DIMMs once this one bursts; I can always put more memory to good use.
Except you can't, because DRAM makers have almost entirely pivoted from making (G)DDR chips to making HBM instead. HBM must be co-integrated at the interposer level and 3D-stacked, resulting in terrible yield. This makes it extremely pricy and impossible to package separately (no DIMMs).
So when I say the world is all in on this, I mean it. With every passing minute, there is less and less we can salvage once this is over; for consumer DRAM, it's already too late.
by latchup
1/1/2026 at 8:43:13 AM
Also, what will happen if/when a different lab (or a current lab) develops a new architecture that can actually achieve AGI?
The other, highly invested companies (like OpenAI and Anthropic) may be in for a free fall.
You never want to be left in the wake of "the next big thing".
by scrollop
1/1/2026 at 7:24:10 AM
> The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did.
Why "soon"? All your arguments may be correct, but none of them imply when the pending implosion will happen.
by runeks
12/31/2025 at 8:16:19 AM
> AI is a world-changing technology, just like the railroads were
This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now - after just three years - around 900 million users on a weekly basis. Even if you think that railroads and AI are equally impactful (I don't, I think AI will be far more impactful), the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.
by throw310822
12/31/2025 at 10:34:11 AM
Railroads carried the goods that everybody used. That’s like almost 100% in a given country. The pace was slower indeed. It takes time to build the railroads. But at that time advancements also lasted longer. Now it is often cash grabs until the next thing. Not comparable indeed but for other reasons.
by freehorse
12/31/2025 at 9:38:15 AM
> just three years - around 900 million users on a weekly basis.
Well, I rotate about a dozen free accounts because I don't want to send 1 cent their way; I imagine I'm not the only one. I do the same for gemini, claude and deepseek, so all in all I account for like 50 "unique" weekly users.
Apparently they have about 5% paying customers, so the total user count is meaningless; it just tells you how much money they burn and isn't an indication of anything else.
by lm28469
12/31/2025 at 9:57:01 AM
> I rotate about a dozen free accounts .. I do the same for gemini, claude and deepseek
For someone who doesn't like the product and doesn't care about it, you surely make a lot of effort to use it.
by throw310822
12/31/2025 at 10:46:31 AM
Sometimes you have to force the trickle-down economy a bit. These people are destroying my industry; I might as well cost them as much as possible before I have no choice but to move on.
It's also literally 0 effort: click > sign out > click > sign in. It saves me $200 a month, which is not too far from half of my rent.
by lm28469
12/31/2025 at 10:58:40 AM
I can understand the spirit, though this reinforces my impression that the product is so good that people jump through hoops to use it, even if they hate it in principle. If they suddenly cut off any free access to it, how much would you be willing to pay per month to keep using it? One dollar? Ten? Twenty?
Also, maybe I'm missing something, but no amount of free accounts on ChatGPT gives you what you get with a paid subscription, especially with a $200 one; and there are paid plans from just $8/month.
by throw310822
12/31/2025 at 5:27:07 PM
I like movies and I still torrent them. If tomorrow a police officer is on my back 24/7 I will just stop torrenting, but I still won't pay $20 a pop to go to the cinema.
> Also, maybe I'm missing something, but no amount of free accounts on ChatGPT gives you what you get with a paid subscription
These days I'm mostly running Opus 4.5 through "antigravity" and I'd rather become a potato farmer than give $8 to Altman.
by lm28469
12/31/2025 at 7:19:40 PM
It's a really tiresome conversation, and I should just stop replying, but...
If you have to stop torrenting it doesn't mean that you have to pay $20 per movie. There is a price >0 that you're willing to pay to do something you love. On youtube there's a lot of movies for 4 or 5 dollars.
I'm also using Claude, both through Cursor (paid by my company) and privately (paid by me, $20/ month).
by throw310822
12/31/2025 at 10:55:14 AM
I'm going to go out on a limb here and say that users who put that much effort into using this stuff for free, using a dozen different accounts, are very rare.by mikkupikku
12/31/2025 at 1:55:55 PM
> user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now - after just three years - around 900 million users on a weekly basis.
That doesn't have anything to do with AI itself. Consider Instagram, then TikTok before this, WhatsApp before that, etc. There is a clear adoption curve timeline: it's going worldwide faster. AI is not special in that sense. It doesn't mean AI itself isn't special (arguable; in fact Narayanan precisely argues it's "normal"), but rather that the adoption pace is precisely on track with everything else.
by utopiah
12/31/2025 at 10:32:22 AM
It is beside the point, but> I think AI will be far more impactful
is not correct IMO. Those are two very different areas. The impact of railroads on transport and everything transport-related cannot be understated. By now roads and cars have taken over much of it, and ships and airplanes are doing much more, but you have to look at the context at the time.
by nosianu
12/31/2025 at 8:52:59 AM
Paid user base or free user base? Because free user base on a very expensive product is next to meaningless.by shaky-carrousel
12/31/2025 at 9:17:36 AM
It's meaningful because it shows that people like the product a lot, and for a lot of different reasons. There are only few products that can reach such market penetration, not to mention in only three years. As the quality of AI increases, people will quickly realise that they are willing to pay for it as much as they pay for electricity. And the same goes for businesses.by throw310822
12/31/2025 at 1:20:34 PM
They like the AI chat. Not ChatGPT. AI chats are interchangeable.by shaky-carrousel
12/31/2025 at 3:41:04 PM
Indeed, but in the end they all have to cover their costs. People are already getting real, measurable value out of them and they will be willing to pay for it like they pay for utilities. Though I'm not excluding that the AI companies will manage to create some kind of moat to keep their customers (such as personalisation, memory, etc.).by throw310822
12/31/2025 at 2:24:21 PM
Isn't that akin to a 1990s tech model like CompuServe or AOL? "Let's create this awesome environment where people will want to pay us for this wonderful service, we'll send them a CD in the mail to get them started with a free month, then charge $0.30/minute. We'll make a fortune!"by gosub100
12/31/2025 at 8:30:01 AM
Railroads enabled people and goods to move from one place to another much more easily and quickly.
AI enables people to... produce even more useless slop than before?
by steve1977
12/31/2025 at 8:33:08 AM
At this point I'm taking the word "slop" as a sign meaning "I really didn't think this through and I'm just autocompleting based on a gut feeling and the first word that comes to mind".by throw310822
12/31/2025 at 2:13:32 PM
Well unfortunately for you, it has a precisely defined and well-understood meaning for all those not covering their eyes and ears in denial. Quoting Merriam-Webster:
> Digital content of low quality that is produced usually in quantity by means of artificial intelligence.
Chosen by the editors as word of the year, by the way.
by latchup
12/31/2025 at 3:27:41 PM
"Slop" in English means "liquid junk, rubbish, tripe". No need to call for Merriam Webster's help.The point is that AI can produce slop (as people do, too), but it's just silly to imply that everything it can produce is slop. That's just lazy, sloppy thinking.
by throw310822
12/31/2025 at 3:57:31 PM
Sure. I'm fully aware that AI can be useful, especially once we move past LLMs. However, I do think that the majority (or mainstream) use of GenAI today is indeed not very useful or even harmful. And I do think that something like railroads is more useful by orders of magnitude.
by steve1977
12/31/2025 at 4:13:21 PM
> I do think that the majority (or mainstream) use of GenAI today is indeed not very useful or even harmful.What are you basing this opinion on?
by bdangubic
1/1/2026 at 12:08:00 AM
The results that I'm seeing of what people are doing with GenAI. I don't really see the promised productivity gains.by steve1977
12/31/2025 at 4:10:49 PM
> but it's just silly to imply that everything it can produce is slopthat is why people use slop qualifier, rather than not using qualifier
by matkoniecz
12/31/2025 at 8:48:11 AM
That's an easy way out, isn't it?by steve1977
12/31/2025 at 10:57:16 AM
Using thought terminating clichés in general is, and that can include "slop".by mikkupikku
12/31/2025 at 1:19:22 PM
It wasn't meant to be thought terminating. If anything, it was a bit provocative.by steve1977
12/31/2025 at 11:16:29 AM
The irony of attacking thought-terminating clichés while defending slop.by adammarples
12/31/2025 at 2:32:00 PM
I have no love for slop. But I also don't write off the whole technology as only being good for slop.by mikkupikku
12/31/2025 at 5:23:30 AM
This is different because now the cat's out of the bag: AI is big money!
I don't expect AGI or superintelligence to take that long, but I do think it'll happen in private labs now. There's an AI business model (pay per token) that folks can use also.
by parentheses
12/31/2025 at 10:12:00 AM
> don't expect AGI or superintelligence to take that long
I appreciate the optimism for what would be the biggest achievement (and possibly disaster) in human history. I wish other technologies like curing cancer, Alzheimer's, solving world hunger and peace would have similar timelines.
by oblio
12/31/2025 at 2:45:48 PM
We are making decent strides on the first two; the latter two are like wanting cats to stop scratching. What's the point of being a cat then?by barrenko
12/31/2025 at 5:33:24 AM
I think we'll find that that asymptote only holds for cases where the end user is not really an active participant in creating the next model:
- take your data
- make a model
- sell it back to you
Eventually all of the available data will have been squeezed for all it's worth, and the only way to differentiate yourself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.
I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.
by __MatrixMan__
12/31/2025 at 5:45:56 AM
Yeah except that models don't propel communities towards new heights. They drive towards the averages. They take from the best to give to the worst, so that as much value is destroyed as created. There's no virtuous cycle there...by conartist6
12/31/2025 at 6:04:02 AM
Is that constraint fundamental to what they are? Or are they just reflecting the behavior of markets when there's low-hanging fruit around?
When you look at models that were built for a specific purpose, closely intertwined with experts who care about that purpose, they absolutely propel communities to new heights. Consider the impact of AlphaFold: it won a Nobel prize, and proteomics is forever changed.
The issue is that that's not currently the business model that's aimed at most of us. We have to have a race to the bottom first. We can have nice things later, if we're lucky, once a certain sort of investor goes broke and a different sort takes the helm. It's stupid, but it's a stupidity that predates AI by a long shot.
by __MatrixMan__
1/1/2026 at 3:27:18 AM
I don't know anything about the field but apparently AlphaFold having "solved the problem of protein folding" is overhyped? https://old.reddit.com/r/bioinformatics/comments/1e0s55e/did...by snigsnog
12/31/2025 at 1:46:33 PM
Experts making a specialized model isn't an example of an AI contributing value to society. All the value a model can offer comes from one of exactly two places: the person building the model, or the people the model trained on.We know that the model training on the model training on the model leads to model collapse...
by conartist6
12/31/2025 at 5:55:57 PM
Your word choice implies a very zero-sum perspective on value.Value is determined by what we value, it's a choice. If a bunch of scientists value good approximations for how a protein will fold, and then a model generates more such things in a year than those scientists could make in a century, that's a lot of value. Not extracted from anyone, created.
by __MatrixMan__
12/31/2025 at 6:56:38 PM
Yes. Value created by the people who made the data the model trained on, and the people who ensured that the training created a good model. I'm just saying it's not magic, just another kind of high-level work that people do.by conartist6
12/31/2025 at 9:00:20 AM
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result.
I think this conflates a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.
It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).
by nr378
12/31/2025 at 9:04:28 AM
Whilst those other layers are useful, none of them are particularly hard to build or rebuild when you have many millions of dollars on hand. One doesn't need tens of billions for them.
by londons_explore
12/31/2025 at 9:59:45 AM
Yeah, because making good chips (TPU) and compilers (XLA) is notoriously easy, right?by tucnak
12/31/2025 at 8:50:05 PM
All of that is below the modelby londons_explore
12/31/2025 at 3:49:05 PM
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
I think this analysis is too surface level. We are seeing Google Gemini pull away in terms of image generation, and their access to billions of organic user images gives them a huge moat. And in terms of training data, Google also has a huge advantage there.
The moat is the training data, capital investment, and simply having a better AI that others cannot recreate.
I don't see how Google doesn't win this thing.
by jklinger410
12/31/2025 at 5:09:48 AM
The "Railway Bubble" analogy is spot on.As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern. In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
In 1989, we also bet that land prices would outrun gravity forever. But usually, Physics (and Debt) wins in the end. When the railway bubble bursts, only those with "Oxygen" will survive.
by 578_Observer
12/31/2025 at 6:55:54 AM
I'm aware this means leaving the original topic of this thread, but would you mind giving us a rundown of this whole Japan 1989 thing? I would love to read a first-person account.
by ManuelKiessling
12/31/2025 at 8:10:50 AM
I am honored to receive a question from a fellow "Craftsman" (I assume from your name).
To be honest, in 1989, I was just a child. I didn't drink the champagne. But as a banker today, I am the one cleaning up the broken glass. So I can tell you about 1989 from the perspective of a "Survivor's Loan Officer."
I see two realities every day.
One is the "Zombie" companies. Many SMEs here still list Golf Club Memberships on their books at 1989 prices. Today, they are worth maybe 1/20th of that value. Technically, these companies are insolvent, but they keep the "Ghost of 1989" on the books, hoping to one day write it off as a tax loss. It is a lie that has lasted 30 years.
But the real estate is even worse. I often visit apartment buildings built during the bubble. They are decaying, and tenants have fled to newer, modern buildings. The owner cannot sell the land because demolition costs hundreds of thousands of dollars—more than the land is worth.
The owner is now 70 years old. His family has drifted apart. He lives alone in one of the empty units, acting as the caretaker of his own ruin.
The bubble isn't just a graph in a history book. It is an old man trapped in a concrete box he built with "easy money." That is why I fear the "Cash Burn" of AI. When the fuel runs out, the wreckage doesn't just disappear. Someone has to live in it.
by 578_Observer
1/1/2026 at 8:33:37 AM
> I am honored to receive a question from a fellow "Craftsman" (I assume from your name).
Mh, not sure what you mean: "Manuel" and "Kießling" are literally my first and last name.
by ManuelKiessling
12/31/2025 at 10:39:08 AM
I nominate Sam Altman to be that Someone.
by octoberfranklin
12/31/2025 at 12:21:53 PM
That is an interesting nomination.
But in my experience as a banker, the ones left in the wreckage are rarely the ones who drank the champagne. It is usually the ones who were hired to clean the glasses.
I hope history proves me wrong this time.
by 578_Observer
12/31/2025 at 4:55:31 PM
Beautiful writing, poetic.
I've always had a morbid fascination with financial bubbles, and the Japanese one of the late 1980s might be the most epic in history (definitely in modern times at least).
by ifwinterco
12/31/2025 at 5:21:18 PM
"Spectacular" is an interesting word choice. To be honest, for us on the ground, it just feels like cleaning up a very long party that ended 30 years ago.But I appreciate your perspective. It is refreshing to know that someone finds a poetic texture in what I simply call "bad loans."
by 578_Observer
12/31/2025 at 10:40:55 AM
> Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
For OpenAI, cash is oxygen too; they're burning it all to reach escape velocity. They could use it to weather the upcoming storm, but I don't think they will.
by lelanthran
12/31/2025 at 12:23:49 PM
Exactly. They have chosen to burn the lifeboats to power the engine.
It is a magnificent gamble. If they reach escape velocity (AGI), they own the future. But if they run out of fuel mid-air, gravity is unforgiving.
As a loan officer, I prefer businesses that don't need to leave the atmosphere to survive.
by 578_Observer
12/31/2025 at 3:51:21 PM
I like to tell people that all the AI stuff happening right now is capitalism actually working as intended for once. People competing on features and price where we aren't yet in a monopoly/duopoly situation. Will it eventually go rotten? Probably, but it's nice that right now, for the first time in a while, it feels like companies are actually competing for my dollar.
by kkukshtel
12/31/2025 at 6:01:01 PM
Aaahh, the beautiful free market, where energy prices keep increasing and, if it all fails, they will be saved by the government that they bribed beforehand. Don't forget the tax subsidies. AKA your money. Pure honest capitalism....
by the_overseer
12/31/2025 at 2:51:25 AM
People seem to have the assumption that OpenAI and Anthropic dying would be synonymous with AI dying, and that's not the case. OpenAI and Anthropic spent a lot of capital on important research, and if the shareholders and equity markets cannot learn to value and respect that and instead let these companies die, new companies will be formed with the same tech, possibly by the same general group of people, thrive, and conveniently leave out said shareholders.
Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applauding them for their contributions to society.
by dheera
12/31/2025 at 3:50:22 AM
You weren't around pre-Google, were you? The only thing Google learned from other search engines is what not to do - like rank based on the number of times a keyword appeared, and not to use expensive bespoke servers.
by raw_anon_1111
12/31/2025 at 4:52:41 AM
I was around pre-Google.
Ranking was Google's 5% contribution to it. They stood on the shoulders of people who invented physical server and datacenter infrastructure, Unix/Linux, file systems, databases, error correction, distributed computing, the entire internet infrastructure, modern Ethernet, all kinds of stuff.
by dheera
12/31/2025 at 4:56:59 AM
And none of that had to do with learning from other search engines…
by raw_anon_1111
12/31/2025 at 7:05:16 AM
Eh ... I question that ranking was only 5% of Google's contribution, even if it was important.
Everyone stood on the shoulders of file systems and databases, Ethernet (and firewalls and netscreens, ...). Well, maybe a few stood on the shoulders of PHP.
Google did in fact pretty much figure out how to scale large numbers of servers (their racking, datacenters, clustering, global file systems, etc.) before most others did. I believe it was their ability to run the search engine cheaply enough that enabled them to grow while largely retaining profitability early on.
by golem14
1/1/2026 at 4:36:06 PM
More specifically on that last point, I remember reading something like Google's biggest contribution hardware-wise was using lots of cheap, easily replaced distributed storage with redundancy instead of expensive large singular storage with error correction? Or maybe it was memory and not storage. Whatever it was, I remember them not caring as much about error correction as others, and being able to use relatively cheap hardware because of it.
by Izkata
12/31/2025 at 8:50:42 AM
Yeah, I remember the moment search engines invented computing, I cannot look at sand the same way anymore /s
by ashirviskas
12/31/2025 at 3:02:24 AM
Isn't it really the other way around? Not to say OpenAI and Anthropic haven't done important work, but the genesis of this entire market was the attention paper that came out of Google. We have the private messages inside OpenAI saying they needed to get to market ASAP or Google would kill them.
by tootie
12/31/2025 at 10:59:36 AM
Have you thought about what happens if we get a new improvement in model architecture like transformers that grows the compute needs even further?
by sunchit
12/31/2025 at 2:17:55 AM
If performance indeed asymptotes, and if we are not at the end of silicon scaling or decreasing cost of compute, then it will eventually be possible to run the very best models at home on reasonably priced hardware.
Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.
The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.
This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
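A minimal back-of-envelope sketch of that "curves cross" argument, with entirely made-up numbers (the frontier hardware cost, the home budget, and the cost-halving period below are illustrative assumptions, not figures from this thread):

# Sketch: when does a fixed home budget catch up with frontier inference hardware,
# assuming model needs stay flat (performance asymptotes) while compute per dollar
# keeps doubling on a fixed cadence? All three inputs are assumptions.
frontier_cost_today = 200_000   # assumed $ of hardware needed to run the best model locally today
home_budget = 2_000             # assumed $ for a "reasonably priced" home machine
halving_period_years = 2.5      # assumed time for compute per dollar to double (cost to halve)

years = 0.0
cost = float(frontier_cost_today)
while cost > home_budget:
    cost /= 2                   # hardware cost for the same compute halves each period
    years += halving_period_years

print(f"Curves cross after roughly {years:.0f} years")  # ~18 years with these inputs

Slower hardware progress or growing frontier model sizes push the crossover out by decades; the only point of the sketch is that, under these assumptions, it eventually happens.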
by api
12/31/2025 at 7:49:39 AM
Or the airlines. Airlines have created a huge amount of economic value that has mostly been captured by other entities.
by matwood
12/31/2025 at 2:09:48 PM
Very little software has commoditized; I doubt it will be the fate of the AI tech stack.
by trgn
12/31/2025 at 3:05:36 PM
Perhaps it would be useful to define what we mean by "commoditization" in terms of software. I would say a software product that is not commoditized is one where the brand can still command a premium, which in the world of software generally means people are willing to pay non-zero dollars for it. Once software is commoditized, it generally becomes free or ad-supported, or is bundled with another non-software product or service. By this standard I would say there are very few non-commoditized consumer software products. People pay for services that are delivered via software (e.g. Spotify, Netflix), but in this case the software is just the delivery mechanism, not the product.
So perhaps one viable path for chatbots to avoid commoditization would be to license exclusive content, but in this scenario the AI tech itself becomes a delivery mechanism, albeit a sophisticated one. Otherwise it seems selling ads is the only viable strategy, and precedents show that the economics of that only work when there is a near monopoly (e.g. Meta or Google). So it seems unlikely that a lot of the current AI companies will survive.
by o_nate
12/31/2025 at 3:20:50 AM
Um, Meta didn't achieve the same results yet. And does it matter if they can all achieve the same results if they all manage high enough payoffs? I think subscription-based income is only the beginning. The next stage is AI-based subcompanies encroaching on other industries (e.g. DeepMind's drug company).
by Davidzheng
12/31/2025 at 2:32:43 AM
Also that open source models are just months behind.
by guluarte
12/31/2025 at 12:38:21 AM
Just in time for a government-guaranteed backstop.
by ares623
12/31/2025 at 5:08:03 AM
I'm waiting to get an RTX 5090 on the cheap.
by BenFranklin100
12/31/2025 at 5:19:25 AM
A penny saved is a penny earned.
by 2OEH8eoCRo0
12/31/2025 at 12:41:46 AM
Massive upfront costs, and second place is just first loser. It's like building fabs, but your product is infinitely copyable. Seems pretty rough.
by bee_rider
12/31/2025 at 1:14:11 AM
What exactly is "second" place? No-one really knows what first place looks like. Everyone is certain that it will cost an arm, a leg and most of your organs.
For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and fail.
The rest of us self-hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient and have all those free offerings to play with for now to keep us going; even the subs are so far somewhat reasonable, but we will flee in droves as soon as you try to ratchet up the price.
It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!
by gerdesj
12/31/2025 at 3:18:45 AM
You and the other hobbyists aren't what's driving valuations. Enterprise subscriptions are.
by vkou
12/31/2025 at 4:55:07 AM
OpenAI is 80% consumer subs.
by ljlolel
12/31/2025 at 5:34:30 AM
And do those subs justify their valuation?
They don't; the only thing that can justify it is if they get themselves into every business workflow. That's what the investors are counting on.
by vkou
12/31/2025 at 1:54:18 AM
First place looks a lot like Google…
by raw_anon_1111
12/31/2025 at 6:54:16 AM
I still don't understand how it's world-changing apart from considerably degrading the internet. It's laughable to compare it to railroads.
by zeofig
12/31/2025 at 2:17:21 PM
Translation is a big thing, maybe not the same scale as railroads, but still important. The rest is of dubious economic utility (as in you can do it with an LLM more easily than without, but if you think a little you could just as well not do it at all without losing anything). On the other hand, disrupting signalling will have pretty long-lasting consequences. People used to assume that a long formal-sounding text is a signal of seriousness, certainly so if it's personally addressed. Now it's just a sign of sloppiness. School essays are probably dead as a genre (good riddance). Hell, maybe even some edgy censorable language will enter the mainstream as a definite proof of non-LLMness - and stay.
by tliltocatl
12/31/2025 at 7:37:30 AM
Did you try asking ChatGPT to explain?
by kolinko
12/31/2025 at 7:18:48 PM
When it gets a bit better, two robots can make four robots, and so on to infinity.
by tim333
12/31/2025 at 2:13:39 AM
AI is capital intensive because autodiff kinda sucks.
by adamnemecek
12/31/2025 at 10:50:55 AM
Deepseek has invested the same amount as OpenAI?
by cma
12/31/2025 at 3:22:41 PM
This is so obviously right.
I may add that investors are mostly US-centric, and so will be the bubble-bursting chaos that ensues.
by louiskottmann
12/31/2025 at 9:32:42 AM
Eh, I wouldn't be so sure. Chips with brain matter and/or light are on their way, and/or quantum chips; one of those, or even a combination, will give AI a gigantic boost in performance, finally replacing a lot more humans. Whoever implements it first will rule the world.
by Bombthecat
12/31/2025 at 10:13:05 AM
> chips with brain matter and/or light
The... what now?
by oblio
1/1/2026 at 1:16:39 PM
https://www.technologyreview.com/2023/12/11/1084926/human-br...
They are getting better, faster, etc.
And I get downvoted again for the truth people don't want to hear, lol
by Bombthecat
1/1/2026 at 8:48:26 PM
You're not technically wrong, but that looks like tech that might be available decades from now. Even quantum computers might be decades away from practical applications, even though we've been trying to produce prototypes for decades now. Light-based (optical) computers are in the same place or probably even further behind.
They'll probably all help in the future, but the current LLM craze doesn't seem to be helped by them at this moment in time, and an economic cycle like this has a boom phase of at most 10 years. So any changes in basic computing hardware will probably help with the next LLM++ tech.
by oblio
12/31/2025 at 9:34:39 AM
You seem to have forgotten that the ruling class requires taxpayers to fund their incomes. If we're all out of work, there's nobody to buy their products and keep them rich.
by nineteen999
12/31/2025 at 11:43:04 AM
Not sure this equation works out. If demand for labor goes towards zero, it really means there is no demand. In other words, when AI and robots fulfil every desire of their owners, there really is no need for "taxpayers".
by CuriousSkeptic
12/31/2025 at 10:46:09 PM
If you really think 8 billion people are going to not tear the arms and legs off their overlords and their robot minions before that, you're completely daft.
by nineteen999
12/31/2025 at 9:14:17 AM
For me it's clear OpenAI and Anthropic have a lead. I don't buy Gemini 3 being good. It isn't, whatever the benchmarks said. Same for Meta and Deepseek.
by retinaros
12/31/2025 at 4:31:25 AM
This is why I think China will win the AI race: once AI becomes a commodity, no other country is capable of bringing down manufacturing and energy costs the way China is today. I am also rooting for them to reach parity on chip node size for the same reason, as they can crash the prices of PC hardware.
by xbmcuser
12/31/2025 at 5:51:32 AM
Did railroads change the world though?
They only lasted a couple of decades as the main transportation method. I'd say the internal combustion engine was a lot more transformative.
by pier25
12/31/2025 at 6:03:12 AM
Pretty much every major historical trend of Western societies in the second half of the nineteenth century, from the development of the modern corporation to the advent of total war, was intimately tied to railroad transportation.
12/31/2025 at 6:48:23 AM
Transportation of people, yeah, but it still carries a majority of inter-city freight in North America.by Marsymars
12/31/2025 at 2:02:55 PM
Besides the fact that freight is still universally carried by rail when possible, railroads changed the world just like vacuum valves did. If not for them, nobody would have invested in developing road transport or transistors.
by tliltocatl
12/31/2025 at 9:04:48 AM
Railroads built America and won multiple large wars.
by oblio
12/31/2025 at 7:10:01 AM
Umm, yes? The metro, even if not a big deal in the States, is a small but quiet way rail has changed public transport; add moving freight, and people, over large distances, plus the bullet train that brought luxury, speed, and efficiency to trains. All of these are quietly disruptive transformations that I think we all take for granted.
by anshumankmr
12/31/2025 at 1:46:28 AM
"AI is going to be a highly-competitive" - In what way?It is not a railroad and the railroads did not explode in a bubble (OK a few early engines did explode but that is engineering). I think LLM driven investments in massive DCs is ill advised.
by gerdesj
12/31/2025 at 1:50:45 AM
Yes they did, at least twice in the 19th century. It was the largest financial crisis before 1929.
by fcantournet
12/31/2025 at 2:11:29 AM
It did. I question the issue of "what problem am I trying to solve" with AI, though. Transportation across a huge swath of land had a clear problem space, and trains offered a very clear solution: create dedicated rails and you can transport 100x the resources at 10x the speed of a horseman (and I'm probably underselling these gains). In times where trekking across a continent took months, the efficiencies in communication and supply lines are immediately clear.
AI feels like a solution looking for a problem, especially with 90% of consumer-facing products. Were people asking for better chatbots, or to quickly deepfake some video scene? I think the bubble popping will re-reveal some incredible backend tools in tech, medical, and (eventually) robotics. But I don't think this is otherwise solving the problems they marketed on.
by johnnyanmac
12/31/2025 at 3:09:02 AM
> AI feels like a solution looking for a problem.
The problem is increasing profits by replacing paid labor with something "good enough".
by heavyset_go
12/31/2025 at 3:21:00 AM
This is a use case that hasn't yet been proven out, though. "Good enough" for an executive may not be "good enough" to keep the company solvent, and there's no shortage of private equity morons who have no understanding of their own assets.
by MangoToupe
12/31/2025 at 4:07:41 AM
I agree, but it's the bet they're making. You don't end up with trillions in investment and valuations with chatbots and meme video generators.
by heavyset_go
12/31/2025 at 3:54:00 AM
Doesn't sound like a very profitable problem to solve. At least, not in the long term (which no one orchestrating this is thinking about).
by johnnyanmac
12/31/2025 at 4:08:55 AM
The long term is feudalism; the short term is how we get there.
by heavyset_go
12/31/2025 at 4:37:10 AM
Well, I wish them the worst of luck. Those doing this need to go back to the 1880s and see how that ended long term.
by johnnyanmac
12/31/2025 at 7:41:01 AM
Isn't this what industrialisation was always about?
by kolinko
12/31/2025 at 11:50:08 PM
Yes, and everything around us was replaced by something "good enough" from a factory unless you paid for real craftsmanship.
by heavyset_go
12/31/2025 at 2:24:09 AM
Your view is ahistorical.
by aaronblohowiak
12/31/2025 at 1:57:53 AM
> There's no evidence of a technological moat or a competitive advantage in any of these companies.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month.
I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.
The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.
by energy123
12/31/2025 at 2:12:22 AM
I still find it so fascinating how experiences with these models are so varied.
I find Codex & 5.2 Pro next to useless, and nothing holds a candle to Opus 4.5 in terms of utility or quality.
There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.
No one ever seems to talk about that, though; instead we get these black-and-white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.
by wild_egg
12/31/2025 at 2:26:00 AM
There is always a comment like this in these threads. It's just 50-50 whether it's Claude or OpenAI.
by yoyohello13
12/31/2025 at 4:53:46 AM
We never hear what the actual questions are. I reckon it's Claude being great at coding in general and GPT being good at niche cases. "Spikey intelligence."
by razodactyl
12/31/2025 at 2:20:43 AM
I'm not saying that no company will ever have an advantage. But with the pace of advances slowing, even if others are 6-12 months behind OpenAI, the conclusion is the same.
Personally, I find GPT 5.2 to be nearly useless for my use case (which is not coding).
by avalys
12/31/2025 at 8:53:19 AM
Can you give specific examples? I'm super interested to see where it fails.
by ashirviskas
12/31/2025 at 4:34:32 AM
For me, OpenAI is the worst of all. Claude Code and Gemini Deep Research are much, much better in terms of quality, while ChatGPT keeps hallucinating and saying "sorry, you're right".
by import
12/31/2025 at 2:33:40 AM
Codex is sooo slow, but it is good at planning; Opus is good at coding but not as good at seeing the big picture.
by guluarte
12/31/2025 at 2:08:49 AM
I use both and ChatGPT will absolutely glaze me. I will intentionally say some BS and ChatGPT will say "you're so right." It will hilariously try to make me feel good.
But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don't trust ChatGPT's affirmations.
Truthfully I just use both.
by harrall
12/31/2025 at 2:12:56 AM
I told ChatGPT via my settings that I often make mistakes and to call out my assumptions. So now it:
1. Glazes me
2. Lists a variety of assumptions (some can be useful / interesting)
3. Answers the question
At least this way I don't spend a day pursuing an idea the wrong way because ChatGPT never pointed out something obvious.
by gridspy
12/31/2025 at 3:29:01 AM
Care to share the system prompt?
by nubg