3/9/2026 at 12:57:11 PM
This is all starting to smell like financial engineering games. Traditionally, nobody in their right mind would give a startup billions to build data centers; for a long list of reasons, that's kinda nuts. What it does allow all these investing companies to do, however, is fund significant capital expenditure while keeping the depreciation off their own income statements. They all know that if they funded the capex directly it would create a depreciation storm that would tank their future earnings. Instead they give the money to another entity to do the building, and magically the equity is now just an asset on their balance sheet with no depreciation. It's "worth" a lot as a line item there, but only because the hype driving this financial engineering keeps the shares valuable.
Meanwhile, the startup isn't public, so the massive depreciation on its books is mostly out of sight and out of mind, along with a sky-high valuation that isn't grounded in any normal sense of business reality.
That all works great… until the bubble bursts, of course.
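The accounting asymmetry being described can be put into toy numbers. A minimal sketch, with all figures hypothetical:

```python
# Hypothetical numbers illustrating the accounting difference described
# above -- not any company's real figures.
capex = 10_000_000_000      # $10B spent on data-center hardware
useful_life_years = 4       # an aggressive GPU depreciation schedule

# Option A: fund the build directly -> straight-line depreciation
# flows through the investor's own income statement every year.
annual_depreciation = capex / useful_life_years
print(f"Direct capex: ${annual_depreciation / 1e9:.1f}B hits earnings each year")

# Option B: invest the same cash as equity in a private startup.
# The stake sits on the balance sheet as an investment; the
# depreciation happens inside the startup, not at the investor.
equity_stake = capex
annual_earnings_hit = 0
print(f"Equity route: ${annual_earnings_hit / 1e9:.1f}B annual earnings hit; "
      f"${equity_stake / 1e9:.0f}B carried as an asset")
```

Same cash out the door either way; only the route changes where the depreciation lands.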
by cmiles8
3/9/2026 at 1:45:45 PM
The business plan makes sense to me. They are a company focussed specifically on building AI data centers, which is a huge part of the economy at the moment. The big cloud players know about generic data centers, but there are likely big efficiency wins to be gained by specializing on AI. There is also the geopolitical angle: European countries (and others!) will likely trust a UK-based company more than one of the American BigCos. NVidia is a great partner and investor for them: NScale will buy billions of dollars' worth of NVDA chips, and will also feed information and learnings about the unique needs of the market back to the chipmaker.

That being said, financial engineering tricks like depreciation and tax sheltering are of course hugely important in the global economy. It's likely that NVDA has a lot of cash sitting in Europe that it doesn't want to repatriate because it would have to pay taxes on it.
by d_burfoot
3/9/2026 at 1:51:48 PM
It makes sense till it doesn't. History tells us that suppliers funding demand like this tends to end badly.
by cmiles8
3/9/2026 at 1:55:55 PM
> there are likely big efficiency wins to be gained by specializing on AI

I've seen this suggested before as well, but I don't think I've heard many actual concrete things. What big efficiency wins are to be had specializing on "AI" datacenters, as opposed to what the past mega-hyperscalers have done? What techniques seem to be out there that cloud providers and others have slept on? What makes them so different, in terms of operating a datacenter?
I'm genuinely asking.
by vel0city
3/9/2026 at 2:06:15 PM
I know nothing about the inner workings of a large-scale AI datacenter, but I'd imagine the power and cooling requirements are more specialised than in your average datacenter, which mostly has to handle transmitting large amounts of data over the internet. Not that that isn't computationally expensive; I just imagine LLMs (especially at the current scale of their deployment) are much more demanding.
by mghackerlady
3/9/2026 at 2:12:26 PM
> I'd imagine the power and cooling requirements are more specialised than your average datacenter

But are they actually doing things differently than the high-compute parts of hyperscale datacenters? Are there radical new ways of distributing heat in the datacenter that only make sense at that level of energy usage per square foot? Is AI energy use that much higher per square foot than other high-compute parts of datacenters, or is it just that it's now something like 90% of the floor plan versus maybe only 50-60%?
> handle transmitting large amounts of data over the internet
I certainly can't speak for all datacenters, and I've never been in a hyperscaler datacenter. But of all the datacenters I've spent time in, the space for the outside network connectivity was rather small compared to the rest of the space for storage and compute. Think a few small office suites dedicated to outside networks coming in and connecting to the clients in the datacenter compared to a medium to large sized warehouse full of compute and storage.
by vel0city
3/9/2026 at 2:41:23 PM
There's "high compute", and then there's proper HPC. AI these days is way more on the HPC end of the scale. The GPUs are doing computations using 2-bit and 4-bit numbers and not 64-bit, but everything else is going to be comparable.
by zozbot234
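The precision point is easy to put rough numbers on. A back-of-the-envelope sketch, where the model size and format list are illustrative assumptions, not measurements of any real deployment:

```python
# Illustrative only: weight-storage footprint of a hypothetical
# 70B-parameter model at different numeric widths. Shows why low-bit
# formats dominate AI workloads compared to classic 64-bit HPC.
bits_per_param = {"fp64": 64, "fp32": 32, "fp16": 16, "int4": 4, "int2": 2}
params = 70_000_000_000  # hypothetical model size

for fmt, bits in bits_per_param.items():
    gib = params * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{fmt}: {gib:,.1f} GiB of weights")
```

The same parameter count needs roughly 16x less memory (and memory bandwidth) at 4-bit than at 64-bit, which is the kind of gap that reshapes how the hardware is provisioned.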
3/9/2026 at 1:41:02 PM
A few billion here and there are needed to keep the game going. A small cost of doing business, considering the alternative.
by dolphinscorpion
3/9/2026 at 2:29:34 PM
Deprecation is the right way to spell it when it comes to GPUs, because the hardware becomes totally obsolete and uncompetitive after only a few years and you have to replace it wholesale. It's a slow-moving trainwreck.by zozbot234
3/9/2026 at 1:32:16 PM
>> This is all starting to smell like financial engineering games.

"Starting to"? :)
by enraged_camel