4/17/2026 at 4:50:23 PM
This tweet shows it as a percentage of US GDP: https://x.com/paulg/status/2045120274551423142
Makes it a little less dramatic. But it also shows what a big f**'n deal the railroads were!
by timmg
4/17/2026 at 10:45:37 PM
GDP adjustments are warranted, but it is more stark than both estimates suggest. The megaprojects of previous generations all had decades-long depreciation schedules. Many 50-100+ year-old railways, bridges, tunnels, dams, and other utilities are still in active use with only minimal maintenance.
Amortized year over year, the current spend would dwarf everything, given the reported depreciation schedule of 6(!) years for the GPUs - the largest line item.
by manquer
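A back-of-the-envelope sketch of the amortization point above. The capex figure and useful lives here are hypothetical round numbers for illustration, not actual reported spend:

```python
# Annualized (straight-line) cost of capex under different depreciation schedules.
# Figures are illustrative placeholders, not real project numbers.

def annualized_cost(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: capex spread evenly over the useful life."""
    return capex / useful_life_years

# The same $1B of capex: a dam/railway amortized over 75 years vs. GPUs over 6.
dam = annualized_cost(capex=1_000_000_000, useful_life_years=75)
gpus = annualized_cost(capex=1_000_000_000, useful_life_years=6)

print(f"dam:   ${dam:,.0f}/year")    # ~$13M/year
print(f"gpus:  ${gpus:,.0f}/year")   # ~$167M/year
print(f"ratio: {gpus / dam:.1f}x")   # 12.5x
```

The point being made: identical headline capex produces a year-over-year cost roughly an order of magnitude higher when the asset depreciates in 6 years instead of 75.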
4/18/2026 at 2:04:10 AM
The side effects of spending funds on these megaprojects are also something to consider. NASA spending has created a huge pile of technologies that we use day to day: https://en.wikipedia.org/wiki/NASA_spin-off_technologies
by gravypod
4/19/2026 at 2:52:19 PM
Maybe if we get rack-sized fusion reactors out of it, I will consider the AI/datacenter spending craze in the same light as NASA projects. Until then, they are rich kids' vanity projects and nothing more.
by tremon
4/18/2026 at 7:37:32 AM
> NASA spending has created a huge pile of technologies that we use day to day

We're a little too early to know if that's the case here too. I do foresee a chance of a reality where AI is a dead end, but after it we have a ton of cheap GPU compute lying about, which we all rush to somehow convert into useful compute (by emulating CPUs or translating traditional algorithms into GPU-oriented ones or whatever).
by delusional
4/20/2026 at 1:05:36 PM
What will happen is that new buzzwords will be invented, and a new fad will take its place. And we will be stuck with the short end of the stick again. You can hope, but shit doesn't really get cheaper for us common folk, ever. :/
by vrighter
4/18/2026 at 4:10:38 PM
If all AI progress somehow immediately halted, the models that have currently been built will still have more economic impact than the Internet. Not least because the slower the frontier advances, the cheaper ASICs get on a relative basis, and therefore the cheaper tokens at the frontier get.
We have a massive scaffolding capability overhang, give it ten years to diffuse and most industries will be radically different.
Again, all of this is obvious if you spend 1k hours with the current crop, this isn’t making any capability gain forecasts.
Just for a dumb example, there is a great ChatGPT agent for Instacart: you can share a photo of your handwritten shopping list and it will add everything to your cart. Following through the obvious product conclusions of this capability (for every grocery vendor's app, integrating with your fridge, learning your personal preferences for brands, recipe recommendation systems, logistics integrations with your forecasted/scheduled demand, etc.) is, I contend, going to be equivalent in engineering effort and impact to the move from brick-and-mortar to online stores.
by theptip
4/18/2026 at 4:45:51 PM
i feel a lot of people in tech have this incuriously deterministic attitude about llms right now… previous <expensive capital project> revolutionized the world, therefore llms will! despite there really being nothing to show for it so far other than writing rote code being a bit easier, and it still requires active babysitting by someone who knows what they are doing
by datatrashfire
4/19/2026 at 3:04:53 AM
They’re already far more useful than that, and I suspect harness engineering alone could add another OOM of productivity, without any underlying change in the models available today.
by senordevnyc
4/18/2026 at 4:21:25 PM
You have to agree that it's totally possible that none of those things you are envisioning getting built out actually end up working as products, right? AI (LLM) progress would stop, and then everything people try to do with those last and most capable models would end up uninteresting or at least temporary. That's the world I'm calling a "dead end".
No matter how unlikely you think that is, you have to agree that it's at least possible, right?
by delusional
4/18/2026 at 4:43:00 PM
> then everything people try to do with those last and most capable models would end up uninteresting

I believe that some of my made-up examples won’t end up getting built, but my point is that there is _so much_ low hanging fruit like this.
Of course, anything is _possible_, but let’s talk likelihood.
In my forecast, the possible worlds where progress stops and the existing models then fail to make anything interesting are almost exclusively scenarios like “Taiwan was invaded, TSMC fabs were destroyed, and somehow we deleted existing datacenters’ installed capacity too” or “neo-Luddites take over globally and ban GPUs”; all of this gets sub-1% likelihood.
You can imagine 5-10% likelihood worlds where the growth rate of new chips dramatically decreases for a decade due to a single black-swan event like Taiwan getting glassed, but that’s a temporary setback not a permanent blocker.
Again, I’m just looking at all the things that can obviously be built now, and just haven’t made it to the top of the list yet. I’m extremely confident that this todo list is already long enough that “this all fizzles to nothing” is basically excluded.
I think if model progress stops then everyone investing in ASI takes a big haircut, but the long-term stock market progression will look a lot like the internet after the dot-com boom, i.e. the bloodbath ends up looking like a small blip in the rear view mirror.
I guess, a question for you - how do you think about coding agents? Don’t they already show AI is going to do more than “end up uninteresting”?
by theptip
4/20/2026 at 8:08:20 AM
Coding agents are interesting, but in my opinion also many worlds away from what they're being sold as. They can be helpful and a moderate efficiency gain, if you know where to use them and you're careful not to fall into one of their many traps, where they end up being a massive cost and efficiency loss down the line. They're helpful tools, but they're slow, expensive, and unreliable -- in order of decreasing likelihood that that's going to change in a big way.

I find it interesting that you chose the shopping list and fridge examples, because my view on the whole LLM hype is that 99% of it is a solution looking for a problem, and shopping and the fridge are historically such a commonly advertised area for technologies desperately looking for an actual use case. I don't think fridge content management and shopping plans are actual pain points in most people's lives. It's not something people would see a benefit in if they didn't have to do it manually. And it's an area with a very low tolerance for systemic unreliability. The guy needed eggs to bake his cake, but the AI got him Eggos instead -- et voilà, another person who thinks this whole "smart" technology is shit and won't deal with it anymore.
And so it goes with most AI use cases I've seen so far. In my view the only thing they're good at is fuzzy search. Coding agents are helpful, but in the end, their secret sauce is just that: fuzzy search.
Can fuzzy search be helpful? Yes, even very helpful! "Bigger than the Internet" helpful? I think not.
by Anamon
4/18/2026 at 4:53:05 PM
> Of course, anything is _possible_, but let’s talk likelihood.

The problem with talking likelihood is that it's an interpretation game. I understand you think it's wholly unlikely that it all fizzles out; I could read that from your first post. I hope it's also clear that I do think it's likely.
That's the point where we have to just agree to disagree. We have no rapport. I have no reason to trust your judgment, and neither do you mine.
by delusional
4/18/2026 at 5:16:31 PM
I agree to disagree. However, I do feel a lot of this comes down to facts about the world now, e.g. whether Claude Opus is doing anything interesting, which are in principle places where you could provide some evidence or ideas, along the lines of the detail that I gave you.
My read so far is you are just saying “maybe it fizzles out” which is not going to persuade anyone who disagrees. Sure, “maybe”, especially if you don’t put probabilities on anything; that statement is not falsifiable.
> The problem with talking likelihood is that it's an interpretation game
I am open to updating my model in response to a causal argument, if you care to give more detail. I view likelihoods as the only way to make these sorts of conversations concrete enough that anyone could hope to update each other’s model.
by theptip
4/18/2026 at 5:29:16 PM
Even if chatbot LLMs stop at their current capability, there's a whole ecosystem of scientific language models (in drug discovery, chemistry, materials design, etc.) and engineering language models (software, chip design, etc.) that are very valuable in their fields. And even if chatbot LLMs seem to be a dead end, they and other machine-learning algorithms will be happy to use the data centers to create/discover a lot of stuff.
by petra
4/18/2026 at 8:52:55 AM
E.g., the climate models that could be run on some of these systems would dwarf anything we’ve been able to do so far.
by m_mueller
4/18/2026 at 9:31:10 AM
AI progress may fizzle out, but everything it produced so far would still be there. Models are just big bags of floats - once trained, they're around forever (well, at least until someone deletes them), and the same is true of the harnesses they run in (they're just programs).

But AI proliferation is not stopping soon, because we've not picked up even the low-hanging fruit yet. Again, even if no new SOTA models were to be trained after today, there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.
And if that runs out or there is an Anti AI Revolution, we can still run those weather models and route planners on the chips once occupied by LLMs - just don't tell the proles that those too are AI, or it's guillotine o'clock again.
by TeMPOraL
4/18/2026 at 4:17:28 PM
> there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.

I think my sense of "dead end" would entail none of those directions panning out into anything interesting. You would "explore the latent spaces" only to find nothing of value. Embedding the LLM models wouldn't end up doing anything useful for whatever reason, and philosophy would continue on without any change.
by delusional
4/18/2026 at 8:48:07 AM
I think there is little chance it is a "dead end"; it's here to stay. But LLMs seem to have hit the diminishing-returns curve already, despite what investors might think, and so far none of the big providers actually makes money on all that investment.
by PunchyHamster
4/18/2026 at 2:10:25 PM
I think for many, if LLMs and AI only improve marginally in the next 5-10 years, it is effectively a dead end. The capital expenditure necessitates AI doing something exponentially more valuable than what it does now. I think we are saying the same thing. I just think the pull-back on AI will be dramatic unless something amazing happens very soon.
by etempleton
4/18/2026 at 2:31:09 PM
I just don’t see it. Both professionally and personally I’m producing so much more now. Back-burner projects that weren’t worth months of my time are easily worth a few hours and $20 or whatever. Why would I pull back?
by brookst
4/18/2026 at 2:40:54 PM
You're forgetting that $20 is not a sustainable price point. Would your back-burner personal app thingy be worth $200?
by grandchild
4/18/2026 at 4:12:29 PM
This is thoroughly debunked at this point. The frontier labs are profitable on the tokens they serve. They are negative when you bake in the training costs for the next generation.
by theptip
4/18/2026 at 3:34:03 PM
Yes, I have often paid more than that to have someone else develop a personal side project.
by brookst
4/18/2026 at 4:03:47 PM
Compute capacity for the same workload always gets cheaper over time.
by nradov
4/18/2026 at 4:09:41 PM
No piece of compute capacity (in the form of equipment) I have bought this year has been in any way cheaper than last year.
by jclulow
4/18/2026 at 5:45:35 PM
So what. Fluctuations over a year or two are meaningless. Do you really believe that the constant-dollar price of an LLM token will be higher in 20 years?
by nradov
4/19/2026 at 3:46:39 AM
I can see a world where energy costs rise at a rate faster than overall inflation, or are a leading indicator. In that scenario then yes, I could see LLM token costs going up.
by edmundsauto
4/18/2026 at 3:25:05 PM
I’ve wondered about this too. Perhaps if we used something exotic like solid gold cookware, there might be some amazing benefits that people would love.
But it would be far from practical without being wildly subsidized…
With AI, it feels too much like the “grownups” are acting worse than the kids…
by BobbyTables2
4/18/2026 at 3:44:15 PM
You’re probably already experienced at your job and using AI to enhance that, or at least using that experience to keep the AI results clean. That’s something you or a company would want to pay for, but it has to be at a lot more than today’s prices to be profitable. Companies want to get more out of you, or get a better price/performance ratio (an AI that delivers cheaper than the equivalent human).

But current-gen AIs are like eternal juniors: never quite ready to operate independently, never learning to become the expert that you are, practically frozen in time at the capabilities gained during training. Yet these LLMs replaced the first few rungs of the ladder, so human juniors have a canyon to jump if they want the same progression you had. I’m seeing inexperienced people just using AI like a magic 8-ball: “The AI said whatever”. [0] LLMs are smart and cheap enough to undercut human juniors, especially in the hands of a senior. But they’re too dumb to ever become a senior. Where’s the big money in that? What company wants to pay for the “eternal juniors” workforce when whatever they save on payroll goes to procuring external seniors, which they’re no longer producing internally?
So I’m not too sure a generation of people who have to compete against the LLMs from day 1 will really be producing “so much more” of value later on. Maybe a select few will. Without a big jump in model quality we might see “always junior” LLMs without seniors to enhance. This is not sustainable.
And you enhancing your carpentry skills for your free time isn’t what pays for the datacenters and some CEO’s fat paycheck.
[0] I hire trainees/interns every year, and pore through hundreds of CVs and interviews for this. The quality of a significant portion of them has gone way down in the past years, coinciding with LLMs gaining popularity.
by close04
4/18/2026 at 2:45:57 PM
Lol, are people like you going to be enough to support the large revenues? Nope.

A firm that sees rising operating expenses but not enough increase in revenue will start to cut back on spending on LLMs and become very frugal (e.g. rationing).
by wr2
4/18/2026 at 3:34:29 PM
Before they cut back on human programmers?
by brookst
4/18/2026 at 10:52:15 AM
when ai is dead we can use all those gpus for zucc's metaverse xD
by lukewarm707
4/18/2026 at 3:04:35 AM
The shovels and labour used to make those things were not depreciated. The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments, which are built upon technologies that the previous generation developed.
by Lerc
4/18/2026 at 5:53:03 AM
Calling the GPUs the shovels is bonkers because a) shovels are cheap, GPUs are not; and b) when you build a bridge, the bridge doesn’t need shovels to be passable. Without GPUs, the datacenter is useless, the model is useless, etc.

If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassable. A bridge with a 6-year lifespan for each beam is insane.
by kennywinker
4/18/2026 at 6:55:56 AM
You’re taking the metaphor way too literally. The people who made the most profit weren’t literally selling shovels; they were the ones providing logistics and support services to the gold miners, like hauling tons of equipment over tens of miles of mountain or providing the sales channel for the gold. They siphoned off most of the profit from the ventures that depended on them (like LLMs depend on GPUs) because the miners had no other choice, to the point where even the most productive mines often weren’t profitable at all.

A less literal example is the conquistadors: their shovels were ships, horses, gunpowder, and steel. You can look at Spanish records from the Council of the Indies archive: any time treasures were discovered, prices skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures. E.g., the cost of a ship capable of a cross-Atlantic voyage went from 100k pieces of eight to over a million in the span of only a few years (predating the treasure-fleet inflation!).
Gold rushes create demand shocks, and anyone who is a supplier to that demand makes bank, regardless of whether it’s GPUs or “shovels”.
by throwup238
4/18/2026 at 9:35:19 AM
> You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures.

Today this is real estate. And it's something people keep forgetting when arguing that ${whatever breakthrough or just more competition} will make ${some good or service} cheaper for consumers: prices of other things elsewhere will rise to compensate and consume any average surplus. Money left on the table doesn't stay there for long.
by TeMPOraL
4/18/2026 at 7:02:20 AM
GPUs don't really have six-year lifespans, though. The hardware itself lasts far longer than that; even hardware that's been used for cryptomining in terrible makeshift setups is absolutely fine for reuse.
by zozbot234
4/20/2026 at 1:06:58 PM
GPUs in your average home PC have a longer lifespan. Datacenters run them at full load for very long periods of time. Some datacenters literally burn through hundreds of GPUs a day.
by vrighter
4/18/2026 at 1:47:08 PM
Each of these GPUs pulls up to a kilowatt of power. The average commercial power cost is 13.4 ¢/kWh. That means running a single H100 full tilt 24/7 is an operating power cost of about $1,100 per card per year.

In three years the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster. For the same energy costs.
If you're running a GPU data center on six year old GPUs, your cost to operate per sellable unit of work is double the cost of a competitor.
by malfist
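The arithmetic in the comment above can be checked directly. The 1 kW draw and 13.4 ¢/kWh rate are the commenter's figures, taken at face value:

```python
# Annual electricity cost of running one GPU flat out, using the figures above.
power_kw = 1.0             # claimed draw per card, kW
price_per_kwh = 0.134      # claimed average commercial rate, $/kWh
hours_per_year = 24 * 365  # 8760 hours

annual_cost = power_kw * price_per_kwh * hours_per_year
print(f"${annual_cost:,.0f}/year per card")  # ~$1,174/year, roughly the $1,100 quoted
```

The quoted $1,100 is consistent with these inputs (the exact product is about $1,174, or a bit less if the card averages slightly under 1 kW).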
4/18/2026 at 3:02:41 PM
One thing I am not entirely sure about is whether there will be huge efficiency gains. Just looking at TDP, i.e. the power consumption, of say a 3090 and a 5090, the increase is substantial; compare it to performance and the performance lift stops looking that great...
by Ekaros
4/18/2026 at 3:50:03 PM
A 3x increase in compute for a 1.5x increase in TDP is pretty good considering the underlying process has barely changed. In any case, consumer GPUs aren't a good metric as they operate under different economic constraints. H100 to GB200 saw a 50x increase in efficiency, for example.
by xyhopguy
4/18/2026 at 3:52:11 PM
https://www.nvidia.com/en-us/data-center/gb200-nvl72/

Nvidia only advertises 25x efficiency. And that is their word...
by Ekaros
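A quick sketch of the perf-per-watt framing this subthread is arguing over. The 3x compute / 1.5x TDP figures are the commenter's claims, not measured numbers:

```python
# Efficiency gain between two GPU generations, given relative perf and power figures.
def perf_per_watt_gain(compute_ratio: float, tdp_ratio: float) -> float:
    """(new perf / old perf) divided by (new power / old power)."""
    return compute_ratio / tdp_ratio

# Claimed 3x compute at 1.5x TDP works out to 2x perf per watt.
print(perf_per_watt_gain(compute_ratio=3.0, tdp_ratio=1.5))  # 2.0
```

This is why the raw TDP increase alone is misleading: what matters for operating cost is the ratio of the two ratios.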
4/18/2026 at 2:33:18 PM
Sure. But if a fully depreciated, $1,100/year GPU produces $20k of economic benefit, would you decommission it as long as there is demand?
by brookst
4/18/2026 at 3:40:19 PM
If my data center sells a pflop at $5 because of our electricity use, and the data center a state over with newer GPUs sells it at $2.50/pflop, it doesn't matter how much economic benefit it generates; my customers are all going to the data center a state over.
by malfist
4/18/2026 at 2:49:03 PM
I want to see the math on how a single GPU will pull down that much revenue, because that seems like a dubious outcome.
by voakbasda
4/18/2026 at 3:32:48 PM
Fair, I was hand-waving to make a point. “If it generates more than $1,100 + (resale price * WACC) + opportunity cost from physical space/etc.” would have been more accurate.

But the point is -- you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
by brookst
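The decommissioning rule stated above can be sketched as a simple inequality. All the figures and the `wacc` parameter are illustrative assumptions, not real datacenter economics:

```python
# Keep running an old GPU while its revenue exceeds its total carrying cost:
# operating cost + capital tied up in resale value + opportunity cost of the slot.
def keep_running(annual_revenue: float,
                 operating_cost: float,
                 resale_value: float,
                 wacc: float,
                 slot_opportunity_cost: float) -> bool:
    carrying_cost = operating_cost + resale_value * wacc + slot_opportunity_cost
    return annual_revenue > carrying_cost

# Hypothetical: $5,000/yr revenue, $1,100 power, $4,000 resale at 10% WACC, $500 slot cost.
print(keep_running(5_000, 1_100, 4_000, 0.10, 500))  # True: 5000 > 1100 + 400 + 500
```

Note the competitor's cost structure appears nowhere in the inequality; it only matters indirectly, by driving down the achievable `annual_revenue`.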
4/18/2026 at 3:41:43 PM
That all depends on whether you're running your own hardware (unlikely) or renting.
by malfist
4/18/2026 at 8:49:11 AM
In the context of datacenters running AI workloads, it's cheaper to replace them after a few years with faster, more energy-efficient ones, because the power cost is a major factor.
by PunchyHamster
4/18/2026 at 12:54:26 PM
> A bridge with a 6 year lifespan for each beam is insane.

Not necessarily. Depends entirely on the value of the transport that the bridge enables.
by naasking
4/18/2026 at 3:11:48 AM
> retain that capability forever

Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
by jiggawatts
4/18/2026 at 4:09:59 PM
Karpathy - and others - consider the pre-training knowledge as much a liability as an asset. If we could just retain the emergent reasoning and language capability without the hazy recollections, the models would likely be stronger.
by jeremyjh
4/18/2026 at 6:58:12 AM
That's GPT-4-era thinking. New models use tools to look at current events or latest versions, and rely very little on weight knowledge.
by m00x
4/18/2026 at 7:06:46 AM
You can pull new information into the context via RAG, but that is expensive and only gives very shallow understanding compared to retraining.
by zozbot234
4/18/2026 at 4:51:16 AM
Not really. For coding I care mostly about reasoning ability, which is uncorrelated with the cutoff.
by nl
4/18/2026 at 3:32:12 AM
You need to separate training and inference usage of GPUs for this analysis.
by loandbehold
4/18/2026 at 5:11:19 AM
"Inference consumes 60–90% of total AI lifecycle costs." So the shovel is not the right analogy; it's more like GPU = coal-burning engine. And yes, coal was a big railroad expense, more so than financing construction debt.
by Mathnerd314
4/18/2026 at 10:37:18 AM
Only half of the rail capacity that existed during the railroad boom times was still in use by the 1970s. Lots of it was never really used at all after various railroads went bankrupt. But your point still stands.

That said, I'm pretty sure in a compute-hungry AI world you aren't going to retire GPUs every 6 years anymore. Even if compute capacity jumps such that current H100s only represent 10% of total compute available in 6 years, you're still running those H100s until they turn to dust.
I just think it's hard to compare localized railroad infrastructure to globalized AI capacity and say one was more rational than the other on a % of GDP basis until the history actually plays out.
If you compare global investment in nuclear weapons, it would dwarf the Manhattan Project and AI thus far, and yet 99.99999% of nuclear weapons investment is just "wasted" capacity in that it has never been "used." But the value it has created in other ways (MAD-enabled peace) has surely been profitable on net. Nobody would have predicted this at the time.
Playing armchair internet pessimist about the "new thing" always makes you feel smart but is usually not a good idea since you always mis-price what you don't know about the future (which is almost everything).
by pembrook
4/18/2026 at 5:27:17 PM
That's definitely true for some of them, but for others it's not so clear, like the Apollo or Manhattan projects. Those of course also have lasting impact, but it's more in terms of knowledge, which at least arguably we are also accruing with these data centers.
by phreeza
4/18/2026 at 6:37:43 PM
Not just knowledge.

RS-25 - designed as the HG-3 during the 60s for Saturn V, manufactured for the Space Shuttle, refurbished for SLS, and just launched last month.
Vehicle Assembly Building - built for Saturn V launches, it has been in active use ever since and continues today.
Crawler-transporters - Hans and Franz were built in 1966 for Apollo and are still used for launches.
There are plenty of other examples from Apollo program of actual hardware being repurposed and used for later missions.
In other mega space projects, Hubble is still doing active research, 35 years after launch, voyager is sending data close to 50 years later.
It is a whole other topic whether they should be used, how NASA is funded, why programs like SLS or the Shuttle are so expensive, and so forth.
The point is these megaprojects had a long lifetime of value, albeit with higher maintenance costs for the tech-heavy ones like Apollo than, say, a bridge or a dam.
by manquer
4/18/2026 at 3:05:37 PM
I think there's more nuance to it. The real asset is the models that are being created.

Imagine this world: the bubble "pops" in a couple years. The GPUs stick around for a few more years after that. At the end, we pretty much don't train new foundation models anymore - no one wants to spend the money on the hardware needed to make a real advance.
People continue to refine, distill, and optimize the existing foundation models for the next century or two, just like people keep laying new track over old railway right of ways.
by elil17
4/18/2026 at 2:27:52 PM
I’m not sure tax depreciation rates are the best measure here. Those GPUs will be used for much longer than 6 years, and the returns to the businesses will last an order of magnitude longer.
by brookst
4/18/2026 at 2:53:32 PM
The jury is still out on this. Those tax-based depreciation schedules are largely a relic of traditional data centers, where workloads are fairly moderate compared to AI use cases. Additionally, power and rack-space constraints can complicate things quite a bit. If next-gen chips are significantly more efficient and you are currently constrained by power availability, you might pull your old servers and replace them with the newer ones regardless of how much useful life you have left.
by vmbm
4/18/2026 at 5:52:47 PM
Azure ran K-80/P-100 fleets a bit longer, for 8-9 years. Google does 9 years for TPUs.

In the current generation there are plenty of questions around:
- viability of training-to-inference cascades (the key to extended life), given custom ASICs hitting production like Cerebras did early this year.
- energy efficiency of older chips in tight energy environments; new grid capacity constraints favor running newer, more efficient chips, ignoring perhaps a short-term (< 1 year) price shock due to war.
- lower MTBF; compared to older GPUs, modern nodes are 8-GPU clusters built on 2/3 nm processes with HBM memory, and the tolerances are much tighter, especially for training.
- new DCs being spun up under less-than-ideal conditions due to permitting, part supply, and other constraints, which will impact the operating environment.
Notwithstanding all these issues, and even taking a generous 10-year useful life, the expenses dwarf every megaproject before them.
by manquer
4/18/2026 at 8:39:40 PM
> Those GPUs will be used for much longer than 6 years

Will it be worth the cost of electricity to run them if the flops/watt of newer chips is higher?
by xnx
4/20/2026 at 2:45:48 AM
If demand is less than supply, definitely not. If every latest-gen is booked solid and there is still unmet demand, why would you decommission?
by brookst
4/18/2026 at 2:34:13 PM
Actually, the physical lifetime (not financial depreciation) for AI data center GPUs is even lower (3 to 4 years).
by mfuzzey
4/18/2026 at 3:36:35 PM
Like, they break? Or it just becomes more profitable for the data center to replace them?
by brookst
4/18/2026 at 5:44:01 PM
It will become more expensive to fix than to replace. Older GPUs are also more energy-intensive to operate than newer generations. MTBF is significant: the older the fleet gets, the higher the failure rates. A typical node today is an 8-GPU node; you have to keep replacing failed GPUs, at ever higher frequency, by cannibalizing parts from other GPUs, as nobody is selling new GPUs of that model anymore.

In addition to outright failure, there are higher error rates in computation; in graphics this tends to show up as flickers, screen artifacts, and so on.

Azure operated K-80s and P-100s for 9 and 7 years respectively, but those were running as 2-GPU nodes and of course were much simpler compared to today’s HBM behemoths on 2/5 nm process nodes. Google operates their custom ASIC TPUs for about 8-9 years.

With custom inference ASICs like Cerebras hitting production, cascading NVIDIA chips from training to inference to get the 5-6 year useful life is also not clearly viable.
by manquer
4/18/2026 at 12:13:15 AM
Also, railways always had alternative uses at the time - e.g. logistics in warfare. What other uses do GPUs have that are critical...? lol
In addition to your points, this is why I always laugh when people do backward comparisons. What characteristics do they share in common? Very little.
by wr2
4/18/2026 at 1:22:53 AM
GPUs do have a use in warfare though. I mean, LLMs are basically offensive weapons disguised as software engineers.

Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn’t need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary well defined condition is met, and it appears to be the case that “you gain entry to system X” works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
by jamesknelson
4/18/2026 at 2:32:39 PM
Imagine comparing something that has a useful life of 100+ years vs a thing that is worn out, much less durable, needs replacing much more often, and can become obsolete from innovation within its own product category. Comical.

China can continue innovating on GPUs, and all this existing spend to stock up on compute is a waste. Again, comical. Moreover, China has energy capacity that the US does not. Meaning all those GPUs that deliver less performance per watt? Yep, going in the bin.
So yeah... carry on telling me how this is going to yield some supreme advantage lmao.
by wr2
4/18/2026 at 2:26:31 AM
> GPUs do have a use in warfare though.

Unclassified public cloud GPUs are completely useless when your warfighting workloads are at the SECRET level or above.
by bigfatkitten
by bigfatkitten
4/18/2026 at 2:42:21 AM
They’re unclassified public cloud GPUs today, much the same as the massive industrial base of the United States was churning out harmless consumer widgets in 1939. Those widget makers happened to be reconfigurable into weapon makers, and so wartime production exploded from 2% to 40% of GDP in 5 years [1]. But the total industrial output of course didn’t expand by nearly that much.

I think it’s maybe plausible that private compute feels similar in the next do-or-die global war.
[1] https://eh.net/encyclopedia/the-american-economy-during-worl...
by jhide
4/18/2026 at 6:21:14 AM
The United States has almost no domestic capability to produce advanced semiconductors. There is no abundance of industrial capacity cranking out GPUs that can be quickly diverted from AI companies into weapon systems.

Even if private compute were at a level of maturity where you could use it for classified workloads, knowing that the infrastructure is being managed by someone in India or China, securely getting data into and out of that infrastructure is still a mostly unsolvable problem.
by bigfatkitten
4/18/2026 at 5:33:12 PM
My point is the existing private DCs can be reconfigured for a different use. Building new GPUs is not required to on-shore compute. We already have it. Obviously if the military started contracting out compute onto the hyperscaler clusters it would involve a host of changes. I wasn't aware that they were letting India and China manage their infrastructure... That seems exceedingly unlikely? That relationship would obviously be severed if the compute was reconfigured for the military.
by jhide
4/18/2026 at 12:03:57 PM
The US is one of the very few countries with the ability to produce advanced semiconductors.
by ungreased0675
4/18/2026 at 2:14:34 PM
The US is probably second only to Taiwan in terms of capacity to build advanced semiconductors, and the gap is now closing as Intel gets back on track.
by etempleton
4/18/2026 at 11:01:30 AM
wut? Intel with 18A can do it
by tester756
4/18/2026 at 11:15:33 AM
Its low yields and tiny volumes are part of what gets the US from "no capacity" to "almost no capacity."
by bigfatkitten
4/18/2026 at 1:00:35 PM
Yields are constantly improving on a monthly basis, around 7% per month according to executives, so the capability is definitely there, but yields still need some time.
by tester756
4/18/2026 at 12:38:46 PM
[dead]
by falsemyrmidon
4/18/2026 at 2:55:53 AM
On the topic of warfare, wars are fought differently now. Compute will be mentioned in the same breath as total manufacturing output if a global war between superpowers erupts. In highly competitive industries this is already the case. Compute will be part of industrial mobilization in the same way that physical manufacturing or transportation capacity were mobilized in WWII. I'm not an expert on military computing but my intuition is that FLOPS are probably even more easily fungible into wartime compute than widget makers, and the US was able to go widgets->weapons on an unbelievable scale last time.
by jhide
4/18/2026 at 8:12:33 AM
There are plenty of military uses for computing, but I also find it hard to believe anything but a handful of datacenters are or could be a major factor in anything but a completely one-sided war. They are very vulnerable targets that are easy to locate and require large amounts of power and cooling. I also just don't see the application: encryption capabilities far exceed the compute available for decryption, and computing precision and speed with even 20-year-old tech far exceed the precision of anything you would want to control. Even with tangible benefits, say 10% more or fewer casualties than there would be otherwise, in an exchange with anything resembling a peer military force I'm not sure it matters, because everybody already loses.
by AngryData
4/18/2026 at 11:48:11 AM
Is that in terms of data centres or chips on the battlefield? Surely the latter is most important. Or will war always have perfect connectivity?
by 7952
4/18/2026 at 5:18:40 AM
You could argue that compute was a decisive factor even in World War II (used in code breaking and designing nuclear weapons).
by andrewljohnson
4/18/2026 at 12:55:32 PM
> What other uses do GPU's have that are critical...? lol

GPUs are essential to every kind of scientific and engineering simulation you can think of. AI-accelerated simulations are a huge deal now.
by naasking
4/18/2026 at 2:35:17 PM
GPUs that have lives of...? Now compare that with the life of a railroad. Amusing.
by wr2
4/18/2026 at 2:58:36 PM
Some of those railroad bridges might never have been constructed without those simulations.
by naasking
4/18/2026 at 2:18:52 AM
Great point!
by rayiner
4/17/2026 at 6:13:42 PM
This seems to show the railroads peaking around 9% of GDP. While that's lower than some of the other unsourced numbers I've seen, it's much higher than the numbers I was able to find support for myself at https://news.ycombinator.com/item?id=44805979
The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.
by tripletao
4/17/2026 at 5:09:16 PM
But doesn't that overstate it in the other direction? Talking about investments in proportion to GDP back when any estimate of GDP probably wasn't a good measure of total economic output?

We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of the F-35? (That's another line in Paul's post.)
by chromacity
4/17/2026 at 6:19:38 PM
The F-35 case is interesting. Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours, as they fill orders for US allies arming themselves with F-35s. US pilot training facilities are brimming with foreign pilots. It's the most successful export fighter since the F-16 and F-4, and presently the only means US allies have to obtain operational stealth combat technology.

What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, the US actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.
If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.
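The rate claim is easy to sanity-check with a quick sketch; the ~10-jet squadron size is an assumption here, not a figure from the comment:

```python
# Sanity-check the production-rate claim: one F-35 every 36 hours.
hours_per_jet = 36
jets_per_year = 365 * 24 / hours_per_jet       # ~243 jets/year at peak rate
squadron_size = 10                             # assumption: ~10-12 jets per US Navy squadron
days_per_squadron = squadron_size * hours_per_jet / 24

print(round(jets_per_year), days_per_squadron)  # 243 15.0
```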
by topspin
4/17/2026 at 6:51:45 PM
> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours ... it's an operating line at full rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete logistics and training system, all on the front burner.

That's amazing. I had no idea the US was still capable of things like that.
I wonder if there's a way to get close to that, for things that aren't new and don't have a lot of active orders. Like have all the equipment set up but idle at some facility, keep assembly teams ready and trained, then cycle through the weapons and activate a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.
Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.
by palmotea
4/17/2026 at 7:20:12 PM
"I had no idea the US was still capable of things like that."

It's more than just the US though. It's the demand from foreign customers that makes it possible. It's the careful balance between cost and capability that was achieved by the US and allies when it was designed.
Without those things, the program would peter out after the US filled its own demand, and allies went looking for cheaper solutions. The F-35 isn't exactly cheap, but allies can see the capability justifies the cost. Now, there are so many of them in operation that, even after the bulk of orders are filled in the years to come, attrition and upgrades will keep the line operating and healthy at some level, which fulfills the goal you have in mind.
Meanwhile, the F-35 equipped militaries of the Western world are trained to similar standards, operating similar and compatible equipment, and sharing the logistics burden. In actual conflict, those features are invaluable.
There are few peacetime US-developed weapons programs with such a record. It seems the interval between them is 20-30 years.
by topspin
4/17/2026 at 8:12:52 PM
Now let's talk about the 155mm artillery shells.
by rickydroll
4/18/2026 at 12:25:19 AM
I think people were surprised to suddenly have a lot of demand for those.
by tim333
4/17/2026 at 9:31:28 PM
Sure. Heavy industry. It's important. Maybe don't send it all to Asia because it's dirtier than software and finance.
by topspin
4/18/2026 at 3:34:37 PM
It took a while to reach full production rate for the F-35. Partly because the supply chain (mostly US-based because of the Buy American Act) had to come up to speed[0]. But also because there were running changes being made to the plane, necessitating changes to the production line to accommodate them.

The F-22 production tooling is supposedly in storage at Sierra Army Depot. Why there and not at the boneyard at Davis-Monthan is an interesting question[1]. Spooling production of the F-22 back up will take less time than originally, but still won't be quick (a secure factory floor large enough has to be found, workforce knowledge has been lost, adding upgrades, etc.)
[0] Scattered across as many congressional districts as possible.
[1] I was at Sierra in the 80's on TDY and it was all Army and Army civilians. A USAF guy like me really stood out.
by chiph
4/17/2026 at 10:18:45 PM
We do—our automotive assembly lines. F-22 is more of a deterrent. If we need more, it's failed.
by peyton
4/18/2026 at 1:01:16 AM
> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours

Until we run out of materials
https://mwi.westpoint.edu/minerals-magnets-and-military-capa...
by bluedino
4/17/2026 at 5:13:25 PM
That's the problem with going too far using "money" or "GDP" - you can roughly compare the WWII 45% of GDP spent with today - https://www.davemanuel.com/us-defense-spending-history-milit... because even by WWII much was "financialized" in such a way that it appears in GDP (though things like victory gardens, barter, etc. would explicitly NOT be included without effort - maybe they do this?).

As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?
by bombcar
4/17/2026 at 5:46:29 PM
You don't even need to go that far back to run into issues. When I read Pride and Prejudice, I think Mr. Darcy was one of the richest people in England at around £10,000/year, but if you try to calculate his wealth in today's terms it wasn't some outrageous sum (Wikipedia is telling me ~£800,000/year). The thing is that the economy was totally different back then -- labor cost practically nothing, but goods like furniture for instance were really expensive and would be handed down for generations.

With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.
Thinking about it, an income of £800K is something like the interest on £10m.
by helterskelter
4/18/2026 at 3:56:19 AM
£10,000 per year for Mr Darcy is 10,000 gold sovereigns per year. A gold sovereign at spot price today is about $1,100. So that's over 10 million dollars per year in gold-equivalent wealth. Plenty to maintain his estate with.

Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings per pound) for him. A sterling shilling today is about $13.50 at spot price. So that's $2.7 million per year in silver-equivalent wealth. Still plenty!
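The bullion arithmetic checks out; a quick script (the $1,100/sovereign and $13.50/shilling spot prices are the commenter's figures, not live quotes):

```python
# Convert Mr Darcy's £10,000/year into modern bullion-equivalent dollars.
pounds_per_year = 10_000

gold_equivalent = pounds_per_year * 1_100         # 1 gold sovereign per pound
silver_equivalent = pounds_per_year * 20 * 13.50  # 20 sterling shillings per pound

print(f"gold:   ${gold_equivalent:,}")       # gold:   $11,000,000
print(f"silver: ${silver_equivalent:,.0f}")  # silver: $2,700,000
```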
by benl
4/17/2026 at 10:09:06 PM
Newsflash, old antique furniture from around that time is still really expensive even today. It was a hand-crafted specialty product, not run-of-the-mill IKEA stuff. If you compare the prices of single consumer goods while adjusting for inflation, they generally check out, at least wrt the overall ballpark. The difference is that living standards (and real incomes) back then for the average person were a lot lower.
by zozbot234
4/18/2026 at 7:11:27 AM
Inflation is by definition the change in prices of a general basket of goods. Some things will outrun the basket and some things will underrun it. In general, consumer durables have underrun; things like TVs and, yes, sofas are way way cheaper now than ever before. I'm not really sure why you would exclude IKEA-type furniture; in most cases it's probably as good or better than a really old hand-crafted one. If back then you needed to get an ultra-luxury sofa but now you can get an IKEA one for the same general quality, then that's a massive win for affordability, even if the ultra-luxury category still exists.
by Anon1096
4/17/2026 at 7:10:45 PM
~£800,000/year when compared to the median in the current UK? Outrageous is relative, sure, but for most people out there it should be no surprise they would feel that as an outrageously odd distribution of wealth.
by psychoslave
4/17/2026 at 8:22:05 PM
The point is that ~£800,000/year is high, even possibly "very high", but it is not "most wealthy man in Britain" high, and certainly nowhere near "hire as many people as worked for Darcy".
by bombcar
4/17/2026 at 10:38:03 PM
It's more like making 800k per year today in India, where a lot of people make much less, so you can have servants.
by cm2012
4/17/2026 at 6:24:19 PM
The big change is the end of any sort of backing in money. The Minneapolis Fed calculated consumer price index levels since 1800 here [1]. Of course that comes with all the asterisks we're speaking of here for data going back that far, but their numbers are probably at least quite reasonable. They found that from 1800 to 1950 the CPI never shifted more than 25 points from the starting base of 51, so it always stayed within +/- ~50% of that baseline. That's through the Civil War, both World Wars, Spanish Flu, and much more.

Then from 1971 (when the USD became completely unbacked) to present, it increased by more than 800 points, 1600% more than our baseline. And it's only increasing faster now. So the state of modern economics makes it completely incomparable to the past, because there's no precedent for what we're doing. But if you go back to just a bit before 1970, the economy would of course have grown much larger than it was in the past but still have been vaguely comparable to past centuries.
And I always find it paradoxical. In basic economic terms we should all have much more, but when you look at the things that people could afford on a basic salary, that does not seem to be the case. Somebody in the 50s going to college, picking up a used car, and then having enough money squirreled away to afford the down payment on their first home -- all on the back of a part-time job -- was a thing. It sounds like make-believe but it's real, and certainly a big part of the reason boomers were so out of touch with economic realities. Nowadays a part-time job wouldn't even be able to cover tuition, which makes one wonder how it could be that labor cost practically nothing in the past, as you said. Which I'm not disputing - just pointing out the paradox.
[1] https://www.minneapolisfed.org/about-us/monetary-policy/infl...
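The ratios quoted above can be checked quickly (the index values are as stated in the comment, base ~51 in 1800):

```python
# Rough check of the Minneapolis Fed CPI figures quoted in the comment.
base = 51
pre_1950_band = 25    # CPI never shifted more than 25 points, 1800-1950
post_1971_rise = 800  # points added from 1971 to present

print(f"pre-1950 swing: +/-{pre_1950_band / base:.0%}")  # +/-49%
print(f"post-1971 rise: {post_1971_rise / base:.0%}")    # 1569%
```

So the "+/- ~50%" and "~1600%" figures in the comment are consistent with the stated index points.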
by somenameforme
4/18/2026 at 12:06:34 AM
And yet the homeownership rate in 1950 was 53% (an all-time high up to that point) compared to 65% today: https://www.huduser.gov/portal/sites/default/files/pdf/Housi... Only 80% of units had private indoor toilets or showers.

It is notable that the median monthly rent was $35/month on a median income of $3000, so ~15% of income spent on rental housing. But it's interesting reading that report, because a significant focus was on the overcrowding "problem". Housing was categorized by number of rooms, not number of bedrooms. The median number of rooms was 4, and the median number of occupants >4 per unit (or more than 1 person per room). I don't think it's a stretch to say that the amount of space and facilities you get for your money today is roughly equivalent. Yes, a greater percentage of your income goes to housing, and yet we have far more creature comforts today than back in 1950--multiple TVs, cellphones, appliances, and endless amounts of other junk. We can buy many more goods (durable and non-durable) for a much lower percentage of our income.
There's no simple story here.
by wahern
4/18/2026 at 2:53:34 PM
What an interesting paper you found! Homeownership stats in contemporary times are quite misleading because of debt. Most homeowners now are still paying rent in the form of a mortgage to a bank. In the 50s most homeowners genuinely owned their homes 'free and clear'. The exact rate was 56% in 1951 per your paper (which was a local low), and now it's at 40%, which is a local high. And the contemporary demographics are all messed up - it's largely driven by older to elderly individuals in non-urban, low-income states.

As for number of occupants, the 50s had a sustainable fertility rate. That means, on average, every woman was having at least 2 kiddos. So a median 4-occupant house would be husband, wife, and 2 children living in a place with a master bedroom, kids' room, a combined kitchen/dining room, and a living room. Bathrooms, oddly enough, did not count as rooms. So in modern parlance it'd mostly be a 2/2 for up to 14% of one person's median income, and 0% in most cases as most people 'really' owned their homes.
We definitely have lots more gizmos, but I feel like that's an exchange that relatively few people would make in hindsight.
by somenameforme
4/18/2026 at 8:00:20 PM
I sometimes feel that the facts are all out there, but half the people pick one half of the facts as causal and the other half pick the other half. Are home prices rising because people have fewer kids (and therefore more to spend on housing), or are people having fewer kids because house prices are rising (and therefore less to spend on kids)?

I suspect that it's a complex mixture of all possibilities, and you can only really look at trends and your own life - the one thing you can have something resembling understanding of and control over.
by bombcar
4/20/2026 at 7:58:14 AM
I have a different take on it altogether. Religion previously worked as a sort of philosophical compass for life. It provided meaning and purpose, but as society has largely left religion behind, this left a void that was never really cleanly filled. So then in a post-religion society, what becomes the purpose in life? And I think for many, they simply have grabbed the lowest-hanging fruit, even if subconsciously - wealth and materialism. And in this worldview there's not much room for children.

I think my little hypothesis here works to cleanly explain fertility crises much better than any other alternative. For instance, the typical income:fertility hypothesis or education:fertility hypothesis both have endless glaring counter-examples, like Thailand, where the society is relatively poor with relatively low education, yet now has a fertility rate lower than even Japan's.
It also explains the paradox of upper middle class couples claiming that they aren't having children because they don't have enough money, while lower income couples continue to have relatively healthy fertility rates, and it's for the same reason that extremely high income couples have relatively healthy fertility rates. Extremes of high and low income largely exclude one from materialism simply because there's no carrot to chase, whether because you can have it at any time you want, or simply because it's so far away that there's no hope of ever getting closer to it.
by somenameforme
4/18/2026 at 9:12:30 PM
> Are home prices rising because people have fewer kids (and therefore more to spend on housing) or are people having fewer kids because house prices are rising (and therefore less to spend on kids)?

Maybe a false dichotomy? My suspicion is that home prices rise because more credit becomes available (and not only home prices but the price of other assets). If you think about it in broader terms this explains what happens to the fruits of our increased productivity - lenders extend more credit as productivity rises, thereby claiming the benefit for themselves. The working person is still stuck with a 40-hour week because despite being more productive they have more debt to service.
by tmnvix
4/18/2026 at 10:04:27 PM
There's something there, definitely - reading "ordinary man's guide to the financial life" books from different eras is informative; many of the older ones work really hard to convince you that a home loan is something worth getting and "you'll pay it off faster than you think" - now we have guides talking about "good debt" and "never pay it off".
by bombcar
4/17/2026 at 5:13:33 PM
I posted just that on the Twitter feed, but then I realized that railroads started at the beginning of an industrial revolution, when labor was a far larger portion of GDP compared to industrial production. So it kind of makes sense that the first enabling technology consumed far more GDP than current investments do, even on a marginal basis.
by chaos_emergent
4/17/2026 at 11:16:56 PM
> Makes it a little less dramatic. But also shows what a big *'n deal the railroads were!

It also makes it more dramatic; consider the programs on the list and what they have in common.
* The Apollo program. A government-funded science project. No return on investment required.
* The Manhattan Project. A government-funded military project. No return on investment required.
* The F-35 program. A government-funded military project. No return on investment required.
* The ISS. A government-funded science project. No return on investment required.
* The Interstate Highway System. A government-funded infrastructure project. No return on investment required.
* The Marshall Plan. A government-funded foreign policy project. No return on investment required.
The actual return on investment for these projects is in the very long term of decades: economic development, national security, scientific progress that benefits the entire country if not the entire world.
Consider the Marshall Plan in particular. It's a massive money sink, but its nature as a government project meant it could run at losses without significant economic risk and could aim for extremely long-term benefits. It's been paying dividends until January last year; 77 years.
And that dividend wasn't always obvious; goodwill from Europe towards the US is what has prevented Europe from taking similar actions as China around the US' Big Tech companies. Many of whom relied extensively on 'dumping' to push European competitors out of business; a more hostile Europe would've taken much more protectionist measures and ended up much like China, with its own crop of tech giants.
And then there's the two programs left out. The railroads and AI datacenters. Private enterprise that simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.
As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.
OpenAI's need for (partial) return on investment is as near-term as this year, or their IPO risks failure. And if they don't get it, similar massive economic trouble is assured.
by SlinkyOnStairs
4/18/2026 at 9:02:12 AM
European railroad bubble failed?

Can you explain that? I really have no idea what you are referring to.
by herbst
4/18/2026 at 10:37:23 AM
The search term is the "Railway Mania", which predominantly describes the UK's railroad bubble, with smaller similar booms on mainland Europe. (You will have to look up French and German sources for the best info on those.)

The bubble failed in the sense that massive commitments for new railways were made, and then the 1847 economic crisis caused investment to dry up, which collapsed the bubble and put a halt to the railroad construction boom. Those railway commitments never materialized, and stock market crashes followed.
I'm also being a little cheeky with what "massive economic trouble" entails; While the stock market was heavy on railroads and crashed right into a recession, the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today. As such, the main historical comparison is structural.
(Similarly, the AI bubble is likely to burst "by itself" unless OpenAI's IPO is truly catastrophically bad. What's more likely is that a recession happens and then the recession triggers a stock market collapse, which then intensify each other. And so these historical examples of similar situations may prove illustrative.)
by SlinkyOnStairs
4/18/2026 at 3:16:05 PM
This is actually an interesting piece of history I hadn't heard about. Thanks for the pointers.
by herbst
4/18/2026 at 9:39:33 PM
> and then the 1847 economic crisis ... the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today.

And yet 1848 was a very interesting year! Revolutionary, even.
by tmnvix
4/17/2026 at 11:20:30 PM
You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor in them because they were not a financial success?

Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.
by yabutlivnWoods
4/17/2026 at 11:40:41 PM
> You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor because they were not a financial success?

No. Re-read the comment.
I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.
The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. Doesn't mean it's not a good thing for humanity. Just means that it'd be a bad idea to have the project run & funded by e.g. SpaceX. The project would've failed; you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly merged SpaceX-xAI.)
I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.
by SlinkyOnStairs
4/18/2026 at 11:45:24 AM
Wild graphic. US spending on one flying killing machine (the F-35) is comparable to total spending on the Marshall Plan to reconstruct Europe after WWII, or the interstate highway system, or all datacenters combined. Priorities!
by cousin_it
4/18/2026 at 12:15:33 PM
I don't think that's right - the scale is logarithmic. The Marshall Plan is 20 times as expensive.
by marche101
4/18/2026 at 6:17:43 PM
And this is why I hate log-scale graphs. Even in the cases where it does have a useful effect, 90%+ of people are still going to interpret it in a linear way and therefore make it massively misleading.
by 93po
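The compression that trips readers up is easy to quantify: on a log10 axis, a 20x cost ratio occupies a fixed offset of only ~1.3 "decades" of axis length, no matter the absolute sizes. A small sketch (the dollar figures are placeholders, not the chart's actual values):

```python
import math

# On a log10 axis, equal visual distances are equal RATIOS, not
# equal differences, so a 20x gap looks modest.
f35_cost = 100        # arbitrary units
marshall_cost = 2000  # 20x larger

visual_gap = math.log10(marshall_cost) - math.log10(f35_cost)
print(round(visual_gap, 2))  # 1.3
```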
4/18/2026 at 2:38:02 PM
It's hazardous to blend fixed and variable costs.
by brookst
4/17/2026 at 5:53:55 PM
The railroads and the interstate are arguably the biggest and broadest in impact, especially in second-order effects (everything west of the Mississippi would be vastly different economically without them).

I am not an AI booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
by dghlsakjg
4/17/2026 at 6:39:24 PM
I agree that AI will probably have bigger effects than we could possibly predict right now. But unlike past booms/bubbles, I suspect the infrastructure being built now won't be useful after it resolves. The railroads, interstate system, and dotcom fiber buildout are all still useful. AI will need to get more efficient to be useful as established technology, so the huge datacenters will be overbuilt. And almost none of the Nvidia chips installed in datacenters this year will still be in use in 5 years, if they're even still functional.
by delecti
4/18/2026 at 1:30:38 AM
The era of the AI data center will be brief, because the models will get better and the computers will get more powerful, particularly on the desktop, laptop, and phone/tablet. The transition will be like going from mainframe computers to personal computers.
by Danox
4/18/2026 at 2:04:57 PM
No one's going to be able to afford that if you hollow out the consumer base by replacing people with AI.
by hnthrow0287345
4/18/2026 at 3:20:32 PM
Early railroads didn't have a lot of standardization, so plenty of that investment did get deprecated.
by dralley
4/17/2026 at 9:25:07 PM
All of the trucks and carts and tools used to build the railroads don't exist anymore. Just like the GPUs won't either.
by whattheheckheck
4/18/2026 at 4:15:35 AM
In that analogy, the GPUs are like if the railroad tracks only lasted 5 years.
by delecti
4/18/2026 at 1:07:56 AM
And I'm not an AI doomer, but hell no, give me another space program/station over this every single time, and pretty please. We are not pioneering new engineering science or creating a pipeline of hard research and innovation that will spread into and better our everyday lives for the decades to come. We are overbuilding boring data centers packed with single-purpose chips that WILL BE obsolete within a couple years, for what? For the unhinged hope that LLM chatbots will somehow develop intelligence, and/or that people by the billions will want to pay a hefty price for dressed-up plagiarism machines. There is no indication that LLMs are a pathway to meaningful and transformative AI. Without that, there is no technical merit for the data centers being built currently to constitute future-proof infrastructure like highways and railroad networks did. There is no economic framework in which this somehow trickles down to or directly empowers the individual. This is a sham of ludicrous proportions, a sickening waste.
by ezst
4/18/2026 at 1:33:37 AM
> There is no indication that LLMs are a pathway to meaningful and transformative AI.

Reality check: they are already astoundingly meaningful and transformative AI. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand-wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.
The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?
by dTal
4/18/2026 at 10:39:39 AM
> Reality check, they are already astoundingly meaningful and transformative AI

The onus of proof regarding their meaningful and transformative nature is on you.
The largest niche LLMs have so far managed to carve for themselves is software code, with the jury still on the fence as to whether the productivity needle actually moved in one direction or the other, and the other, literal jury enshrining the fact that vibe-coded software is not copyrightable and becomes a public good. That should give pause to any company living off selling software or software-related services, as to whether they want to poison their well.
Web search hasn't been disrupted very much either with users being quick to realise how hallucinogenic LLM summaries are (with the fact that it's baked in the tech and practically unsolvable being one of the reasons I don't consider LLMs a significant stepping stone towards actual AI).
The age of computers that respond to voice orders was 10 years ago, with Siri, Alexa, Google Assistant, nobody could care less then, and the fact the same systems became less capable after re-inventing themselves on top of LLMs probably won't have people care more now.
by ezst
4/18/2026 at 12:54:59 PM
We are in such different universes that I fear this will not be a productive discussion; to my eyes, LLMs are the most obviously socially transformative technology in my lifetime, up there with "internet" and "smartphones".
You say the largest niche is software production. Okay, let's talk about that. If the jury is still out, then the jury is asleep. When ChatGPT first came out - the GPT-3 days, years ago, before "vibe code" was even a term - an artist friend of mine who had never written a line of code in his life straight-up vibe coded 3D visuals to accompany a performance of the band he was in. In Processing, which he'd never heard of until ChatGPT suggested it to him. Do you realize what this means? Normies can use computers now. Actually use, not just consume. You can describe what you want and the computer will do it - it will even ask you for clarification if your specification is too ambiguous. Hell, it will even educate you about the subject matter, meeting you at exactly your level, in your favorite writing style.
If you are still thinking in terms of whether vibe coded software is "copyrightable" or whether LLMs are useful for "selling software", you are a blacksmith scoffing that cars are pointless because they don't need horseshoes. Your entire framework is obsolete.
by dTal
4/18/2026 at 1:54:51 PM
You are so focused on productivity that you missed the boat on the shape of the problem.
Vibe-coded apps are just throwaway code that you don't understand and can't maintain. Most of our technology work isn't creating new things but making incremental improvements.
You are so focused on productivity, when programming's bottleneck is never how many features you implement but how well you can understand your codebase.
Nobody cares about your internet slop, but they do care about verification of facts, which unfortunately requires human judgement.
LLMs are just a different version of the library code we already have, except without quality control by default.
by kiba
4/17/2026 at 6:32:53 PM
Is there really that much inefficiency in our distribution of goods and services such that AI could have this much impact?
by throwaway27448
4/17/2026 at 8:20:32 PM
I think the bet is more about labor replacement; not saying that's particularly reasonable either.
by fyrn_
4/17/2026 at 7:02:44 PM
> I would not be surprised at AI having a similar enabling effect over the long term.
The big difference is that the current AI bubble isn't building durable infrastructure.
Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.
AI is a different story. I would be very surprised if any of the current GPUs are still in use even 20 years from now, and newer models aren't a trivial expansion of older ones either. Keeping AI going means continuously making massive investments - so it had better find a way to make a profit fast.
by crote
4/18/2026 at 10:13:18 AM
GPUs are consumables, not infrastructure. Model weights are the lasting thing.
It's always like that with software. You can still run an OS or a program made 20 years ago; in some cases that program may in fact have no modern replacement available (think niche domains). Meanwhile, in those 20 years, you've probably churned through 5-10 generations of computing hardware.
by TeMPOraL
4/18/2026 at 2:50:24 PM
This is completely false - GPUs are not consumables, they are factors of production.
Models are technologies. Without the GPUs the technology is not accessible.
You sound like someone who thinks they have a strong understanding of economics when they don't.
by wr2
4/18/2026 at 9:17:49 PM
I had to look up "factors of production" to see what this was about.
Looks to me like, as with a drill bit, a GPU could be reasonably classified as either a consumable or a factor of production.
This is because GPUs wear out and fail; the smaller the features, the faster electromigration kills them.
by ben_w
4/17/2026 at 6:16:58 PM
> I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
Maybe? It seems as if the tech is starting to taper off already and AI companies are panicking and gaslighting us about what their newest models can actually do. If that's the case, the industry is probably in trouble - or the world economy is.
by operatingthetan
4/18/2026 at 1:19:25 AM
> AI companies are panicking and gaslighting us about what their newest models can actually doI think they have been gaslighting us from the beginning.
by EFreethought
4/18/2026 at 2:30:16 AM
Bernie Madoff and his ilk made way for Sam Altman and his friends. Like Madoff, they’re desperate to pump their Ponzi scheme for as long as they can.
by bigfatkitten
4/18/2026 at 2:06:12 AM
It seems a little silly to put 71 years of private-and-public-sector infrastructure development alongside something highly targeted like the Manhattan Project. It might make more sense to compare the Manhattan Project to the first transcontinental railroad, as a similarly targeted but enormously ambitious project amounting to a major technical milestone.
Likewise, I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI", it's having oodles of fast compute available online and a market demanding more of it. OTOH, comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.
This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.
by LeCompteSftware
4/18/2026 at 1:20:11 AM
Depreciation schedules:
Tulips: weeks
GPUs: 6 years
Fiber: 20-50 years
Rail, roads, bridges: 50-100+ years
Hyperscalers are closer to tulips than to other hard infra.
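The gap in the list above can be made concrete with straight-line depreciation: the shorter the useful life, the larger the fraction of the capital outlay that must be expensed (and, in practice, re-spent) every year. A minimal sketch, with the fiber and rail lifetimes taken as illustrative midpoints of the ranges quoted:

```python
# Sketch: annual expensed share of $1 of capex under straight-line
# depreciation, using the useful lives quoted in this thread.
# The fiber and rail figures are assumed midpoints, not reported data.

def annual_share(useful_life_years: float) -> float:
    """Fraction of capex expensed per year under straight-line depreciation."""
    return 1.0 / useful_life_years

schedules = {
    "GPUs": 6,                  # reported depreciation schedule
    "Fiber": 35,                # assumed midpoint of 20-50 years
    "Rail/roads/bridges": 75,   # assumed midpoint of 50-100+ years
}

for asset, years in schedules.items():
    print(f"{asset}: {annual_share(years):.1%} of capex per year")
```

On these assumptions a GPU fleet consumes roughly 17% of its capex per year, versus about 1-3% for the hard infrastructure, which is the "closer to tulips" point in one number.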
by maxglute
4/18/2026 at 2:34:17 AM
What rail, road, or bridge in the US lasts 50 years? The maintenance of rail over 6 years costs more than replacing all the GPUs in a data center, even at their current markup.
by casey2
4/18/2026 at 6:18:17 PM
Look up the deterioration curve and maintenance curve (J-shaped) for hard infrastructure. TL;DR: an asset stays in good condition for about 75% of its lifetime, i.e. decades, with light maintenance (the flat part of the J). By roads I mean highways, where most of the expense/work is in building out the base/sub-base (the equivalent of ballast for rail), which lasts decades. The US is uniquely bad at maintenance/prevention, but even then major assets do not deteriorate on a GPU timeline.
by maxglute
4/18/2026 at 2:39:01 AM
Have you seen our rails, roads, and bridges?!? 50-year-old ones in many places are being referred to as “new ones” :)
The only reason any “maintenance” on them is expensive is corruption, which at the municipal level rivals the current administration in some places.
by bdangubic
4/19/2026 at 3:17:37 AM
Railroad looks huge on the GDP (estimate) chart because the US transcontinental railroad was built in the mid-1800s, when the US economy was relatively tiny.
by comfysocks
4/18/2026 at 12:28:22 AM
I’m surprised there is no broadband rollout or telecom network on there. I guess it’s hard to quantify the cost within a specific event?
by chatmasta
4/18/2026 at 12:33:55 AM
Indeed. Or for that matter, electrification?
by mongol
4/17/2026 at 5:39:14 PM
As sibling comments mentioned, this is a deceptive comparison as well. How about comparing as a percentage of Gross Energy Output? https://www.sciencedirect.com/science/article/abs/pii/S09218...
by j-bos
4/18/2026 at 12:20:46 AM
The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It’s a lot more exciting than a few freight loads of overpriced GPUs.
by hyperbovine
4/18/2026 at 12:28:36 AM
Also a good point - railroads for sure brought a lot more optimism.
LLMs + data centres, on the other hand...
by wr2
4/18/2026 at 6:55:43 AM
Were? How else do you expect to get goods around by land?
by globular-toast