4/10/2026 at 12:42:55 AM
Somewhat related: this is why the creators of the Zettabyte File System (ZFS) decided to make it 128 bits (writing in 2004):

> Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade. Storage systems tend to live for several decades, so it would be foolish to create a new one without anticipating the needs that will surely arise within its projected lifetime.
* https://web.archive.org/web/20061112032835/http://blogs.sun....
And some math on what that means 'physically':
> Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 10^51 operations per second on at most 10^31 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.
> To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E=mc^2, the rest energy of 136 billion kg is 1.2x10^28 J. The mass of the oceans is about 1.4x10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg. Thus the energy required to boil the oceans is about 2.4x10^6 J/kg x 1.4x10^21 kg = 3.4x10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.*
* Ibid.
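The quoted arithmetic checks out to within rounding. A quick sketch, using only the constants stated in the quote (Lloyd's 10^31 bits/kg bound, c = 3x10^8 m/s, ocean mass 1.4x10^21 kg):

```python
# Minimum mass to hold a fully-populated 128-bit pool at the Lloyd limit,
# and the rest energy of that mass vs. the energy to boil the oceans.
bits = 2**140                    # 2^128 blocks x 512 bytes x 8 bits
mass_kg = bits / 1e31            # Lloyd bound: 10^31 bits per kg
energy_j = mass_kg * (3e8)**2    # rest energy, E = mc^2
boil_j = 2.4e6 * 1.4e21          # ~2.4 MJ/kg to boil water x ocean mass

print(f"mass:   {mass_kg:.3g} kg")   # ~1.4e11 kg (quote: 136 billion kg)
print(f"energy: {energy_j:.3g} J")   # ~1.3e28 J (quote: 1.2x10^28 J)
print(f"oceans: {boil_j:.3g} J")     # ~3.4e27 J, less than the pool's energy
```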
by throw0101d
4/10/2026 at 8:17:40 AM
> In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 10^51 operations per second on at most 10^31 bits of information

I believe the Bekenstein bound for holographic information on a 1 liter sphere, using space at the Planck scale for encoding, instead of matter, is about 6.7×10^67.
I confess I got that number by taking round trips through multiple models to ensure there was a clear consensus, as my form of "homework", since this is not my area of expertise.
As far as figuring out energy or speed limits for operations over post-Einsteinian twisted space, that will require new physics, so I am just going to wait until I have a 1 liter Planckspace Neo and just measure the draw while it counts to a very big number for a second. (Parallel incrementing with aggregation obviously allowed.)
Point being, there is still a lot of room at the bottom.
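For what it's worth, ~6.7x10^67 is what you get from the holographic bound A / (4 l_p^2 ln 2) bits applied to the surface of a 1-liter sphere. A sketch of that arithmetic (the Planck length is the only assumed constant):

```python
import math

l_p = 1.616e-35                                    # Planck length in meters
volume = 1e-3                                      # 1 liter in m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)   # ~6.2 cm
area = 4 * math.pi * radius**2                     # ~0.048 m^2
bits = area / (4 * l_p**2 * math.log(2))           # holographic bound in bits

print(f"{bits:.2g} bits")                          # ~6.7e67
```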
Interesting thought. Space can expand faster than the speed of light over significant distances, without breaking the speed limit locally.
But what happens if complex living space begins absorbing all the essentially flat local space around it? Is there a speed limit to space absorption? If space itself is shrinking, due to post-Einsteinian structures/packing, then effective speed limits go away, as traversal distances, and perhaps even the meaning of distance, disappear. So, perhaps not. I call this the "AI Crunch" end-of-the-universe scenario.
That is the computer I want. And I believe that sets a new upper bound for AI maximalism.
by Nevermark
4/10/2026 at 9:50:42 AM
I think you would very much enjoy this book: https://share.google/boWcVLRiYz0c7EmKh

They talk quite a bit about this sort of thing at the end...
by limbicsystem
4/10/2026 at 1:31:59 AM
Single data sets surpassed 2^64 bytes over a decade ago. This creates fun challenges, since just the metadata structures can't fit in the RAM of the largest machines we build today.

by jandrewrogers
4/10/2026 at 1:53:42 AM
Virtualization has pushed back the need for a while, but we are going to have to look at pointers larger than 64 bits at some point. It's also not just about the raw size of datasets: we get a lot of utility out of various memory mapping tricks, so we consume more address space than the strict minimum required by the dataset. Also, if we move up to 128-bit pointers, a lot more security mitigations become possible.

by jasonwatkinspdx
4/10/2026 at 3:35:53 AM
Please keep in mind that doubling isn't the only option. There are lots of numbers between 64 and 128.

by eru
4/10/2026 at 2:30:59 AM
By virtualization are you referring to virtual memory? We haven't even been able to mmap() the direct-attached storage on some AWS instances for years due to limitations on virtual memory.

With larger virtual memory addresses there is still the issue that the ratio between storage and physical memory in large systems would be so high that cache replacement algorithms don't work for most applications. You can switch to cache admission for locality at scale (strictly better at the limit, albeit much more difficult to implement), but that effectively segments the data model into chunks that won't get close to overflowing 64-bit addressing. 128-bit addresses would be convenient, but a lot of space is saved by keeping it 64-bit.
Space considerations aside, 128-bit addresses would open up a lot of pointer tagging possibilities e.g. the security features you allude to.
by jandrewrogers
4/10/2026 at 2:47:45 AM
> By virtualization are you referring to virtual memory?

No, I mean k8s-style architecture, where you take physical boxes and slice them into smaller partitions, hence the dataset on each partition is smaller than the raw hardware capability. That reduces the pressure towards the limit.
by jasonwatkinspdx
4/10/2026 at 2:52:11 AM
Ah yeah, that makes sense. With a good enough scheduler that starts to look a lot like a cache admission architecture.

by jandrewrogers
4/10/2026 at 3:44:18 AM
I'd never thought of it that way, and it's an interesting perspective.

by jasonwatkinspdx
4/10/2026 at 1:16:55 AM
Very interesting. Could someone please do the same computation for filling 64-bit storage?

by popol12
4/10/2026 at 1:52:03 AM
16 million terablocks, or 8 billion terabytes.

Or a third of a billion 24 TB drives, which is one of the larger sizes currently available.
Some random search results say the global hard drive market is around an eighth of a billion units, but of course much of that will be smaller sizes.
So that should be physically realizable today (well, with today's commercial technology), with only a few years of global production.
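Spelled out, assuming (as in the original ZFS quote) 2^64 blocks of 512 bytes:

```python
total_bytes = 2**64 * 512      # 2^64 half-KiB blocks
tib = total_bytes / 2**40      # ~8.6 billion binary terabytes
drives = total_bytes / 24e12   # ~0.39 billion 24 TB drives

print(f"{tib:.3g} TiB across {drives:.3g} drives")
```

The drive count comes out closer to two-fifths of a billion than a third, depending on how the terabytes are rounded, but the order of magnitude holds.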
by tbrownaw
4/10/2026 at 10:57:32 AM
> Or a third of a billion 24 TB drives, which is one of the larger sizes currently available.

For the record, 44 TB drives were announced in March 2026:
* https://www.seagate.com/ca/en/stories/articles/seagate-deliv...
by throw0101d
4/10/2026 at 1:57:58 AM
> 16 million terablocks, or 8 billion terabytes.

To be clear, the first quote was talking about 2^64 bytes, and you're talking about 2^64 blocks.
Edit: Though confusingly the second part talked about 2^128 blocks.
Also these days I'd assume 4KB blocks instead of 512 bytes.
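The three readings differ by a few orders of magnitude. A quick sketch of the gap:

```python
EiB = 2**60
plain_bytes = 2**64         # 2^64 bytes           = 16 EiB
blocks_512 = 2**64 * 512    # 2^64 x 512 B blocks  = 8192 EiB (8 ZiB)
blocks_4k = 2**64 * 4096    # 2^64 x 4 KiB blocks  = 64 ZiB

print(plain_bytes // EiB, blocks_512 // EiB, blocks_4k // EiB)
```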
by Dylan16807
4/10/2026 at 2:28:32 AM
> To be clear, the first quote was talking about 2^64 bytes

That's 16 exabytes. Wikipedia cites a re:Invent video to say that Amazon S3 has "100s of exabytes" in it.
So it not only could theoretically be done, but has been done.
by tbrownaw
4/10/2026 at 2:47:03 AM
Storage densities can be extremely high. Filling 2^64 bytes of storage is very doable and people have been doing it for a while. It all moves downstream; I remember when 2^32 bytes was an unimaginable amount of storage.

Many petabytes fit in a single rack, and many data sources generate several petabytes per day. I'm aware of sources that in aggregate store exabytes per day. Most of which gets promptly deleted, because platforms that can efficiently analyze data at that scale are severely lacking.
I've never heard of anyone actually storing zettabytes but it isn't beyond the realm of possibility in the not too distant future.
by jandrewrogers
4/10/2026 at 1:44:40 AM
You want someone to put "3.4*10^27 / 2^64" into a calculator? 200 million joules, using all the same assumptions. 50 kWh. Though that leaves the question of how the energy requirements change when we're not going for extreme density (half a nanogram??).

If we instead consider a million 18 TB hard drives, and estimate they each need 8 watts for 20 hours to fill up, 2^64 bytes take 160 MWh to write on modern hardware. And they'll weigh 700 tons.
Edit: The quote is inconsistent about whether it wants to talk about bytes or blocks, so add or subtract a factor of about a thousand depending on what you want.
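Both estimates are easy to reproduce, under the stated assumptions: the quoted 3.4x10^27 J divided by 2^64, and a hypothetical fleet of a million drives each drawing 8 W for 20 hours.

```python
theoretical_j = 3.4e27 / 2**64           # ocean-boiling energy scaled down
theoretical_kwh = theoretical_j / 3.6e6  # joules -> kWh

drives = 1_000_000
fill_j = drives * 8 * 20 * 3600          # 8 W x 20 h per drive, in joules
fill_mwh = fill_j / 3.6e9                # joules -> MWh

print(f"{theoretical_j:.3g} J = {theoretical_kwh:.0f} kWh")  # ~1.8e8 J, ~51 kWh
print(f"{fill_mwh:.0f} MWh")                                 # 160 MWh
```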
by Dylan16807