3/15/2026 at 3:46:05 PM
It stands out because it didn't sell, which is weird because there were some pretty big pros to using them. The latency for updating 1 byte was crazy good. Some databases, or journals for something like ZFS, really benefited from this.
by hbogert
3/15/2026 at 4:35:57 PM
Intel did a spectacularly poor job with the ecosystem around the memory cells. They made two plays, and both were flops.
1. “Optane” in DIMM form factor. This targeted (I think) two markets. First, use as slower but cheaper and higher-density volatile RAM. There was actual demand — various caching workloads, for example, wanted hundreds of GB or even multiple TB in one server, and Optane was a route to get there. But the machines and DIMMs never really became available. Then there was the idea of using Optane DIMMs as persistent storage. This was always tricky because the DDR interface wasn’t meant for it, Intel seems to have had a lot of legacy tech in the way (their caching system and memory controller), and, for whatever reason, they seemed barely capable of improving their own technology. They had multiple serious false starts in the space (a power-supply early-warning scheme using NMI or MCE to idle the system, a horrible platform-specific register to poke to ask the memory controller to kindly flush itself, and the stillborn PCOMMIT instruction).
2. Very nice NVMe devices. I think this was more a failure of marketing. If they had marketed a line of SSDs that, coupled with an appropriate filesystem, could give a 99th-percentile fsync latency of 5 microseconds, I bet people would have paid. But they did nothing of the sort — instead they just threw around the term “Optane” inconsistently.
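For a sense of what that pitch would have meant in practice, a minimal sketch of a 99th-percentile fsync latency measurement might look like this (the file path and iteration count are arbitrary; a real benchmark like fio controls for far more):

```python
import os
import statistics
import tempfile
import time

def fsync_latencies(path, iters=200):
    """Time write+fsync of a small record, returning latencies in microseconds."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    lat = []
    try:
        for _ in range(iters):
            t0 = time.perf_counter()
            os.write(fd, b"x" * 64)   # small journal-style record
            os.fsync(fd)              # force it to stable media
            lat.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
    return lat

with tempfile.TemporaryDirectory() as d:
    lat = fsync_latencies(os.path.join(d, "journal"))
    p99 = statistics.quantiles(lat, n=100)[98]  # 99th-percentile cut point
    print(f"p99 fsync latency: {p99:.1f} us")
```

On an ordinary NAND SSD this typically lands in the hundreds of microseconds to milliseconds; the Optane claim above is that it could sit near 5 us.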
These days one could build a PCM-backed CXL-connected memory mapped drive, and the performance might be awesome. Heck, I bet it wouldn’t be too hard to get a GPU to stream weights directly off such a device at NVLink-like speeds. Maybe Intel should try it.
by amluto
3/15/2026 at 4:46:18 PM
One of the many problems was trying to limit the use of Optane to Intel devices. They should have manufactured and sold Optane memory and let other players build on top of it at a low level.
by orion138
3/15/2026 at 5:09:20 PM
> Optane memory

Which “Optane memory”? The NVMe product always worked on non-Intel. The NVDIMM products that I played with only ever worked on a very small set of rather specialized Intel platforms. I bet AMD could have supported them about as easily as Intel, and Intel barely ever managed to support them.
by amluto
3/15/2026 at 5:20:10 PM
The consumer "Optane memory" products were a combination of NVMe and Intel's proprietary caching software, the latter of which was locked to Intel's platforms. They also did two generations of hybrid Optane+QLC drives that only worked on certain Intel platforms, because they ran a PCIe x2+x2 pair of links over a slot normally used for a single x2 or x4 link.

Yes, the pure-Optane consumer "Optane memory" products were at a hardware level just small, fast NVMe drives that could be used anywhere, but they were never marketed that way.
by wtallis
3/15/2026 at 5:38:48 PM
Exactly. I happen to have all AMD sitting around here, and buying my first Optane devices was a gamble, because I had no idea if they'd work. The only reason I ever did is that they got cheap at one point and I could afford the gamble.

That uncertainty couldn't have done the market any favors.
by myself248
3/15/2026 at 5:31:43 PM
I feel like this is proving my point. You can’t read “Optane” and have any real idea of what you’re buying.

Also… were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges in the “Rapid Storage” family where some secret sauce in the PCIe host lied to the OS about what was actually connected so an Intel driver could replace the OS’s native storage driver (NVMe, AHCI, or perhaps something worse depending on generation) to implement all the actual logic in software?
It didn’t help Intel that some major storage companies started selling very, very nice flash SSDs in the meantime.
by amluto
3/15/2026 at 5:39:33 PM
> were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges

They were definitely part of the series of massive kludges. But aside from the Intel platforms they were marketed for, I never found a PCIe host that could see both of the NVMe devices on the drive. Some hosts would bring up the x2 link to the Optane half of the drive, some hosts would bring up the x2 link to the QLC half of the drive, but I couldn't find any way to get both links active, even when the drive was connected downstream of a PCIe switch that definitely had hardware support for bifurcation down to x2 links. I suspect that with appropriate firmware hacking on the host side, it may have been possible to get those drives fully operational on a non-Intel host.
by wtallis
3/16/2026 at 3:33:25 AM
Why on Earth did Intel implement this as a 2x2 device? They could have implemented multiple functions, or used a PCIe switch, or exposed their device as an NVMe device with multiple namespaces, etc. (I won’t swear that all of these would have worked nicely. But all of them would have performed better than arbitrarily splitting the link in half.)

Maybe they didn’t own any of the IP for the conventional SSD part and couldn’t make it play ball?
by amluto
3/16/2026 at 5:44:23 AM
The Optane side of the drive used the same x2 controller as the pure-Optane cache drives. The NAND side used a Silicon Motion controller, same as their consumer QLC drives of the era. They almost literally just crammed their two existing consumer products onto one PCB and shipped it. Intel was never interested enough in the consumer applications of Optane to design a good, useful SSD controller around it, and they weren't going to let a third party like Silicon Motion make an Optane-compatible controller.
by wtallis
3/16/2026 at 4:13:36 AM
Or perhaps they just made a number of incredibly poor decisions. They seem to have been doing that for the better part of a couple decades now.
by fc417fc802
3/15/2026 at 11:22:52 PM
This. Optane died because Intel wanted to kill AMD and ARM with it. That killed Intel, and a great technology along with it.
by rngfnby
3/15/2026 at 4:08:51 PM
> Which is weird...

It isn't weird at all. I would have been surprised if it had ever succeeded in the first place.
Cost was way too high. Intel didn't share the tech with anyone other than Micron, and Micron wasn't committed to it either; since Intel paid for unused capacity at the fab regardless, Micron didn't care. There was no long-term solution or strategy to bring cost down, and neither Intel nor Micron had a vision for it. No one wanted another Intel-only tech lock-in. And despite the high price, it barely made any profit per unit compared to NAND and DRAM, which were at the time making historically high profits. Once the NAND and DRAM cycle went down again, Optane's cost/performance wasn't as attractive. Samsung even made some form of SLC NAND that performed similarly to Optane but cheaper, and even they ended up stopping development due to lack of interest.
by ksec
3/15/2026 at 7:36:07 PM
A ways back, I wrote a sort of database that was memory-mapped-file backed (a mistake, but I didn’t know that at the time), and I would have paid top dollar for even a few GB of NVDIMMs that could be put in an ordinary server and could be somewhat straightforwardly mounted as a DAX filesystem. I even tried to do some of the kernel work. But the hardware and firmware were such a mess that it was basically a lost cause. And none of the tech ever seemed to turn into an actual purchasable product. I’m a bit suspicious that Intel never found product-market fit in part because they never had a credible product on the NVDIMM side.

Somewhere I still have some actual battery-backed DIMMs (DRAM plus FPGA interposer plus awkward little supercapacitor bundle) in a drawer. They were not made by Intel, but Intel was clearly using them as a stepping stone toward the broader NVDIMM ecosystem. They worked on exactly one SuperMicro board, kind of, and not at all if you booted using UEFI. Rebooting without first doing the magic handshake over SMBUS [0] took something like 15 minutes, which was not good for those nines of availability.
[0] You can find my SMBUS host driver for exactly this purpose on the LKML archives. It was never merged, in part, because no one could ever get all the teams involved in the Xeon memory controller to reach any sort of agreement as to who owned the bus or how the OS was supposed to communicate without, say, defeating platform thermal management or causing the refresh interval to get out of sync with the DIMM temperature, thus causing corruption.
I’m suspicious that everything involved in Optane development was like this.
by amluto
3/16/2026 at 7:36:24 AM
If you're thinking about reliability in terms of 9s, you should probably not be depending on a single machine. Reboots could take hours and be fine if your architecture is set up for reliability.
by simulator5g
3/15/2026 at 4:57:33 PM
I worked at Micron in the SSD division when Optane (originally called “3D XPoint”) was being made. In my mind, there was never a real serious push to productize it. But it’s not clear to me whether that was due to unattractive terms of the joint venture or lack of clear product fit.

There was certainly a time when it seemed they were shopping for engineers’ opinions on what to do with it, but I think they quickly determined it would be a much smaller market than SSDs anyway and didn’t end up pushing on it too hard. I could be wrong though; it’s a big company and my corner was manufacturing, not product development.
by deepsquirrelnet
3/15/2026 at 5:28:34 PM
I worked at Intel for a while and might be able to explain this.

There were/are often projects that come down from management that nobody thinks are worth pursuing. When I say nobody, it might not just be engineers but even, say, 1 or 2 people in management who just do a shit rollout. There are a lot of layers at Intel, and if even one layer in the Intel sandwich drags its feet it can kill an entire project. I saw it happen a few times in my time there. That one specific node that Intel dropped the ball on, for example, kind of came back to 2-3 people in one specific department.
Optane was a minute before I got there, but having been excited about it at the time and somewhat following it, that's the vibe I get from Optane. It had a lot of potential but someone screwed it up and it killed the momentum.
by chrneu
3/15/2026 at 5:41:11 PM
Are you referring to the Intel 10nm struggles in your reference to 2-3 people?
by osnium123
3/15/2026 at 6:01:57 PM
This is actually insane. Do you mean 2-4 people in one department basically killed Intel? Roll to disbelief.
by empiricus
3/15/2026 at 7:26:45 PM
Yes, this is pretty common in large enterprise-y tech companies that are successful. There is usually a small group of vocal members who have a strong conviction and drive to make a vision a reality. This is contrary to the popular belief that large companies design by committee.

Of course, it works exceptionally well when the instinct turns out to be right. But it can end companies if it isn’t.
by LASR
3/15/2026 at 6:15:51 PM
It's somewhat plausible that a small group of people in one department were responsible for the bad bets that made their 10nm process a failure. But it was very much a group effort for Intel to escalate that problem into a prolonged disaster. Management should have stopped believing the undeliverable promises coming out of their fab side after a year or two, and should have started much sooner to design chips targeting fab processes that actually worked.
by wtallis
3/15/2026 at 7:28:47 PM
A friend was working at Micron on a rackmount network server with a lot of flash memory; I didn't ask at the time what kind of flash it used. The project was cancelled when nearly finished.
by rjsw
3/15/2026 at 4:28:13 PM
The cost was fantastically cheap if you take into account that Optane was going to live >>10x longer than an SSD.

For a lot of bulk storage, yes, you don't have frequently changing data. But for databases or caches that are under heavy load, Optane was not only far faster but, looking at life-cycle costs, way, way cheaper.
by jauntywundrkind
3/15/2026 at 5:15:55 PM
Optane was in the market during a time when the mainstream trend in the SSD industry was all about sacrificing endurance to get higher capacity. It's been several years, and I'm not seeing a lot of regrets from folks who moved to TLC and QLC NAND, and those products are more popular than ever.The niche that could actually make use of Optane's endurance was small and shrinking, and Intel had no roadmap to significantly improve Optane's $/GB which was unquestionably the technology's biggest weakness.
by wtallis
3/16/2026 at 1:42:49 AM
> I'm not seeing a lot of regrets from folks who moved to TLC and QLC NAND, and those products are more popular than ever.

That's interesting. Even TLC has huge limitations, but QLC is basically useless unless you use it as write-once-read-many memory.
I wish I had bought a lot of SSDs when you could still buy MLC ones.
by raron
3/16/2026 at 2:31:35 AM
> QLC is basically useless unless you use it as write-once-read-many memory

The market thoroughly disagrees with your stupid exaggeration. QLC is a high-volume mainstream product. It's popular in low-end consumer SSDs, where the main problem is not endurance but sustained performance (especially writing to a mostly-full drive). A Windows PC is hardly a WORM workload.
by wtallis
3/16/2026 at 4:23:53 AM
Seems like it is, though? Most consumer usage does not have much churn. For things like the browser cache that do churn, the total volume isn't that high.

The comparison here is database and caching workloads in the datacenter that experience high churn at an extremely high sustained volume. Many such workloads exist.
by fc417fc802
3/16/2026 at 5:47:16 AM
There's a very big difference between a workload where you have to take care to structure your IO to minimize writes so you don't burn out the drive, and a workload that is simply easy enough that you don't have to care about the write endurance because even the crappy drives will last for years.
by wtallis
3/16/2026 at 6:08:39 AM
Of course. The inferior but cheaper technology is more cost-effective in most cases, but for certain workloads that won't be the case, despite it being more affordable per unit upfront.

The workloads flash is more cost-effective for (i.e., most of them) either aren't all that write-heavy or leave the drive sitting idle the vast majority of the time. The typical consumer use case is primarily reads while the drive mostly sits idle, with the relevant performance metrics largely determined by occasional bursts of activity.
by fc417fc802
3/16/2026 at 6:31:49 AM
Consumer usage does not have much churn, but the average desktop is probably doing 5-50 drive writes per year. That's far away from a heavy database load, but it's just as far away from WORM.
by Dylan16807
3/15/2026 at 9:01:29 PM
So instead of replacing every 5 years, you replace every 5 years: if you need that level of performance, you're replacing servers every 5 years anyway.
by PunchyHamster
3/15/2026 at 7:50:50 PM
Write endurance of the drive would be measured in TBW, and TLC flash kept adding enough 3D layers to stay cheap enough, quickly enough, that Optane never really beat its pricing per TBW enough to make a practical product.

I have to wonder if, at this point, it isn't usable for some kind of specialized AI workflow that would benefit from extremely low-latency reads but which isn't written often. Perhaps integrated on a GPU board.
by mapt
3/15/2026 at 8:27:08 PM
Optane's practical TBW endurance is way higher than that of even TLC flash, never mind QLC or PLC, which is the current standard for consumer NAND hardware. It even seems to go way beyond what's stated on the spec sheet. However, while Optane excels for write-heavy workloads (not read-heavy ones, where NAND actually performs very well), these are also power-hungry, which is a limitation for modern AI workflows.
by zozbot234
3/16/2026 at 3:30:25 AM
You're conflating two things. Yes, Optane would survive more writes. But it wouldn't survive more TBW/$, because much larger flash drives were available cheaper. Double the size of the drive using identical technology, and you double the TBW rating.
by mapt
3/16/2026 at 4:09:00 AM
This was very clearly not true at the time for the actual implied TBW figures of even a tiny Optane drive, and is not even true today when you account for the much lower DWPD of TLC/QLC media. Do the math: $1/GB vs $0.1/GB, where the actual difference in DWPD per the spec sheets is more like two or three orders of magnitude, with the real-world practical difference quite possibly larger. (Have people even seen Optane actually fail in the wild due to media wear-out? This happens all the time with NAND.)
by zozbot234
3/16/2026 at 4:35:02 AM
> it wouldn't survive more TBW/$

Yes it would, by an almost arbitrarily large margin. You can test this out for yourself. Overwrite one of each in an endless loop. Whenever the flash-based drive fails, replace it and continue. See how long it takes for the Optane to fail.
You should be able to kill a typical consumer flash drive in well under a week. Even high end enterprise gear will be dead within a couple of months.
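The overwrite-in-a-loop test described above could be sketched roughly like this (`hammer` is a hypothetical helper; pointing it at a real block device destroys data, and on Linux you'd want to add `os.O_DIRECT` so the page cache doesn't absorb the writes):

```python
import os

def hammer(path, span_bytes=1 << 20, block=4096, passes=None):
    """Repeatedly overwrite the first span_bytes of a file or block device.
    With passes=None this loops until the media starts returning I/O errors,
    which is a crude wear-out test. DESTROYS DATA if pointed at a device."""
    buf = os.urandom(block)
    written = 0
    done = 0
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        while passes is None or done < passes:
            for off in range(0, span_bytes, block):
                os.pwrite(fd, buf, off)
            os.fsync(fd)  # make the device actually commit the writes
            written += span_bytes
            done += 1
    except OSError as e:
        print(f"media failed after ~{written / 1e12:.2f} TB written: {e}")
    finally:
        os.close(fd)
    return written
```

Something like `hammer("/dev/nvme1n1", span_bytes=1 << 30)` on a drive you can afford to lose would run the experiment; hammering a small span concentrates wear and defeats much of the wear leveling.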
by fc417fc802
3/15/2026 at 8:26:54 PM
The extra capacity of modern SSDs is a good point, especially now that we have 100TB+ SSDs.

But Optane still offered 100 DWPD (drive writes per day) at up to 3.2TB. That's still just so many more DWPD than a flash SSD. A Kioxia CM8V, for example, will do 12TB at 3 DWPD. The net TBW is still 10x apart.
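To sanity-check that arithmetic (rated TBW is capacity × DWPD × warranty days; the capacities and DWPD figures are the ones quoted in this thread, and the 5-year warranty is an assumption):

```python
def rated_tbw(capacity_tb, dwpd, warranty_years=5):
    # Terabytes written over the warranty period: one DWPD means one full
    # drive capacity per day, every day.
    return capacity_tb * dwpd * 365 * warranty_years

optane_p5800x = rated_tbw(3.2, 100)  # ~584,000 TBW
kioxia_cm8v = rated_tbw(12, 3)       # 65,700 TBW
print(f"ratio: {optane_p5800x / kioxia_cm8v:.1f}x")  # prints "ratio: 8.9x"
```

So despite the flash drive being nearly 4x larger, the rated endurance gap is still roughly an order of magnitude, matching the "still 10x apart" claim.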
You can get back to high endurance with SLC drives like the Solidigm D7-P5810, but you're back down to 1.6TB and 50 DWPD, so 1/4 the Intel P5800X endurance, and worse latencies. I highly suspect the drive's model number is an homage, and in spite of being much newer and very expensive, the original is still so much better in so many ways. https://www.solidigm.com/content/solidigm/us/en/products/dat...
You also end up paying for what I assume is a circa six figure drive, if you are substituting DWPD with more capacity than you need. There's something elegant about being able to keep using your cells, versus overbuying on cells with the intent to be able to rip through them relatively quickly.
by jauntywundrkind
3/16/2026 at 3:34:08 AM
We don't care about (TBW/TB) at the consumer level, we care about (TBW/$), and 3D TLC was far, far cheaper per TB, so much so that TBW/$ was not a numerical advantage of Optane.

That left ONLY the near-RAM read latency, which is only highly beneficial on specific workloads. Then they didn't invest in expanding killer-app software that could utilize that latency, and didn't drop prices sufficiently to make it highly competitive with big RAM disks.
by mapt
3/16/2026 at 6:44:48 AM
In 2018, Optane drives launched around $1.50/GB and TLC flash drives around $0.15/GB, so flash was only about 10x cheaper per GB, and as far as I'm aware Optane had a lot more than 10x the endurance.
by Dylan16807
3/16/2026 at 4:27:20 PM
10x endurance and 10x latency reduction, for 10x the price. It was closer to 5x the price with the closeout fire sale after it was killed.
by ece
3/15/2026 at 3:55:05 PM
It feels like everyone figured out what to do with them, and how, just about when they stopped making them.
by bombcar
3/15/2026 at 4:03:34 PM
Same for the Larrabee / Knights architecture. It would sure be fun to play around with a 500-core Knights CPU with a couple TB of Optane for LLM inference.

Intel's got an amazing record of axing projects as soon as they've done the hard work of building an ecosystem.
by timschmidt
3/15/2026 at 4:08:14 PM
> 500 core

The newest fully E-core-based Xeon CPUs have reached that figure by now, at least in dual-socket configs.
by zozbot234
3/15/2026 at 4:14:57 PM
Yup. And high-end GPU compute now has on-package HBM like Knights had a decade ago, and those new Intel CPUs are finally shipping with AVX reliably again. We lost a decade for workloads that would benefit from both.
by timschmidt
3/16/2026 at 4:56:01 AM
But I'm surprised PCIe-based CPU+RAM modules aren't a "thing", since that's basically what a GPU is if you ignore all the rather fundamental differences. Seems like it would be convenient to cheaply attach additional compute without worrying about all the other stuff.

I suppose I'm just reinventing SXM at this point. The BC-250 comes close, but despite the form factor it isn't actually a PCIe card. Although if it integrated a 100 Gbit SFP slot it might actually be superior to a solution that resided in a host system. But the BC-250 is very much an anomaly as opposed to the norm.
by fc417fc802
3/16/2026 at 5:00:45 AM
You need CXL to extend the cache-coherency properties of actual RAM over a remote link. That's costly tech. Otherwise, you're relying on the OS (and even the compiler and basic libraries, since you need to make fences, etc. OS-visible) to paper over the differences by doing its own implementation of distributed shared memory (this is known as an 'SSI' or single-system-image approach), which has significant challenges and is closer to the spirit of setting up swap.
by zozbot234
3/16/2026 at 5:17:58 AM
I didn't mean anything like that. Just the equivalent of a GPU with the ability to run arbitrary CPU-oriented programs.

Of course GPUs do many tasks very well, but there are also plenty of problems that aren't well suited to them. Well, I suppose I've answered my own question at this point. There probably just aren't enough real-world problems that aren't amenable to running on a GPU while also being either compute or memory-bandwidth bound.
Still the near-monoculture does strike me as odd. I guess GPUs have bifurcated into enterprise versus consumer at this point but otherwise all we've got is a single CPU example from over a decade ago and a single alternative take on the concept from Fujitsu. Is it just due to the obscene cost of masks for modern process nodes?
by fc417fc802
3/16/2026 at 9:31:04 AM
Things like that existed in the category of accelerator cards. Xeon Phi (Knights) is one example, focused on core count. Some from HP had soldered-on SSDs too. You also had blade servers, which are more focused on that use case, though that's going out of style.

I don't think PCIe is really a good fit for general CPU tasks. You need big heatsinks and power, and can't fit that much RAM on board.
by akvadrako
by akvadrako
3/15/2026 at 3:56:05 PM
When most people are running databases on AWS RDS, or on ridiculous EBS volumes with insanely low throughput and high latency, it makes sense to me.

There are very few applications that benefit from such low latency, and if one has to go off the standard path of easy but slow, expensive, and automatically backed up, people will pick the ease.
Having the best technology performance is not enough to achieve product-market fit. The execution required from the executives at Intel was far, far beyond their capability. They developed a platform and wanted others to do the work of building all the applications. Without that starting killer app, there's not enough adoption to build an ecosystem.
by epistasis
3/15/2026 at 5:34:02 PM
> There are very few applications that benefit from such low latency

Basically any RDBMS? MySQL and Postgres both benefit from high-performance storage, but too many customers have moved into the cloud, where you can’t get NVMe-like performance for durable storage at anything remotely close to a worthwhile price.
by amluto
3/15/2026 at 5:52:44 PM
I'm saying that there are very few downstream applications using databases that benefit from reducing latency below the slow performance of the cloud. Running your database on VMs or bare metal gives better performance, but almost no applications built on databases bother to do it.
by epistasis
3/15/2026 at 4:32:43 PM
IMO, the reason they didn't sell is that the ideal usage for them was pairing them with some slow spinning disks. The issue Optane had is that SSD capacity grew dramatically while the price plummeted. The difference between Optane and SSDs was too small, especially once the M.2 standard proliferated and SSDs took advantage of PCIe performance.

I believe Optane retained a performance advantage (and I think even today it's still faster than the best SSDs), but SSDs remain good enough and fast enough while being a lot cheaper.
The ideal usage of Optane was as a ZIL in ZFS.
by cogman10
3/15/2026 at 4:37:07 PM
That may have been the ideal usage back in the day, but the ideal usage now is just for setting up swap. Write-heavy workloads are king with Optane, and thrashing in swap is the prototypical example of something so write-heavy that it's a terrible fit for NAND. Optane might not have been "as fast as DRAM" but it was plenty close enough to be fit for purpose.
by zozbot234
3/15/2026 at 7:12:46 PM
That would be fine if I could put it in an M.2 slot. But all my computers already have RAM in their RAM slots, and even if I had a spare RAM slot, I don't know that I'd trust the software stack to treat one RAM slot as a drive...

And their whole deal was making RAM persistent anyway, which isn't exactly what I want.
by mort96
3/15/2026 at 7:16:32 PM
Optane M.2-format hardware exists.
by zozbot234
3/15/2026 at 7:39:12 PM
Interesting, all I ever saw advertised was that weird persistent kinda-slow RAM stick. Does the M.2 version just show up as a normal block device, or is that too trying to be persistent RAM?
by mort96
3/16/2026 at 3:31:11 AM
Just normal (and fast) block storage.
by kube-system
3/15/2026 at 9:22:36 PM
IIRC it wasn't great because higher power == more heat, though.
by saxonww
3/15/2026 at 9:34:00 PM
That could be addressed with a small NVMe heatsink. They're available, and their use is already advised for NAND PCIe 4.0 and 5.0 hardware, but they would fit the Optane use just as well.
by zozbot234
3/15/2026 at 4:45:01 PM
> The ideal usage of optane was as a ZIL in ZFS.

It was also the best boot drive money could buy. Still is, I think, though other comments in the thread ask how it compares against today's best, which I'd also love to see.
by exmadscientist
3/15/2026 at 5:10:12 PM
This concept was very popular back in the days when computers booted from HDDs, but now it doesn't make much sense. I wouldn't notice if my laptop booted in 5 sec instead of 10.
by gozzoo
3/15/2026 at 5:40:01 PM
At the time of their introduction, Optane drives were noticeably faster to boot your machine from than even the fastest available flash SSD. So in a workstation with multiple hard drives installed anyway, buying one to boot off of made decent sense.

If they had been cheaper, I think they'd have been really, really popular.
by exmadscientist
3/16/2026 at 6:52:24 AM
What concept was very popular in those days?

By my reckoning, there was zero overlap between the period of time where a reasonable computer configurer would pick a hard drive to boot from and the period of time where Optane was available.
And even for the general concept of a cache drive, I don't think I've ever seen it do well in the mainstream. Just a few niche roles, and some hybrid drives that sucked because they had small flash chips and only used them as a read cache, not a write cache.
by Dylan16807
3/15/2026 at 4:38:24 PM
Not just capacity: SSD speeds also improved to the point where they were good enough for many high-memory workloads.
by bushbaba
3/15/2026 at 7:09:04 PM
I never understood what they were meant to do. Intel seemed to picture some future where RAM is persistent; but they were never close to fast enough to replace RAM, and the option to reboot in order to fix some weird state your system has gotten itself into is a feature of computers, not a problem to work around.
by mort96
3/15/2026 at 11:06:37 PM
When the PDIMMs were used with an appropriate file system + kernel, it was pretty cool. NTFS + DAX + kernel support yielded a file system where mmap'ing didn't page fault. No page faults, because the file content is already there, instantly.

So if you had mmap-heavy read/write workloads… you could do some pretty cool stuff.
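The access pattern being described is just ordinary mmap code; what DAX changes is what happens underneath (no page cache, no fault on first touch). A tiny sketch, with a hypothetical file path that would live on a DAX-capable mount:

```python
import mmap
import os

def bump_counter(path, size=4096):
    """Map a file and update one byte in place. On a DAX mount over pmem the
    store goes straight to the media; on a normal filesystem it's plain mmap."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)        # make sure the file is large enough to map
    m = mmap.mmap(fd, size)
    m[0] = (m[0] + 1) % 256       # single-byte in-place update, no read()/write() syscalls
    m.flush()                     # msync; real pmem code would use cache-line flushes (e.g. libpmem) instead
    value = m[0]
    m.close()
    os.close(fd)
    return value
```

Calling `bump_counter("/mnt/pmem/counter")` twice would return 1 then 2, with the state surviving across processes; on real persistent memory the update is a handful of CPU store instructions rather than an I/O.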
by trentnelson
3/15/2026 at 6:28:14 PM
In "databases and journals" you rarely update just one byte; you do a transaction that updates data, several indexes, and metadata. All of that needs to be atomic.

Power failure can happen in between any of those "1 byte updates with crazy latencies." However small the latency is, power failure is still faster. Usually there is a write-ahead or some other log that alleviates the problem; this log is usually written in streaming fashion.
What is good, though, is that the "blast radius" [1] of a failure is smaller than usual: a failed one-byte write rarely corrupts more than one byte or cache line. SQLite has to deal with possible corruptions 512 (or even more) bytes long on most disks; with Optane that is not necessarily so. So, less data to copy, scan, etc.
by thesz
3/16/2026 at 1:13:49 AM
> However small latency is, power failure is still faster.

A fancy switching power supply with a friendly power factor (looks like a resistive load, rather than drawing more amps during the lower-voltage parts of the waveform) actually will have non-zero fall time when suddenly unplugged.
by tbrownaw
3/15/2026 at 9:03:46 PM
It's not. You won't be writing one byte, ever (even if you had layers that actually supported less-than-block writes), because the per-operation overhead would be massive and you'd be murdering both latency and bandwidth for anything non-trivial.
by PunchyHamster
3/15/2026 at 4:04:32 PM
Optane didn't sell because they focused on their weird persistent DIMM sticks, which are a nightmare for enterprise, where for many ordinary purposes you want ephemeral data that disappears as soon as you cut power. They should have focused on making ordinary storage and solving the interconnect bandwidth and latency problems differently, such as with more up-to-date PCIe standards.
by zozbot234
3/15/2026 at 6:52:09 PM
PCIe was a bottleneck in consumer boxes, but that wasn't the whole problem. Optane's low latency and write endurance looked great on paper, yet once you put it behind SSD controllers and file systems built around NAND assumptions, a lot of the upside got shaved off before users ever saw it.

"Just make it a faster SSD" was never a business. The DIMMs were weird, sure, but the bigger issue was that Optane made the most sense when software treated storage and memory as one tier, and almost nobody was going to rewrite kernels, DBs, and apps for a product that cost more than flash and solved pain most buyers barely felt.
by hrmtst93837
3/15/2026 at 9:05:03 PM
> and file systems built around NAND assumptions, a lot of the upside got shaved off before users ever saw it.

What file systems? The most common ones you'd find would be ext4 or XFS, and neither of them is.
by PunchyHamster
3/15/2026 at 4:24:42 PM
I don't think that would be my main complaint. Sticking Optane in a DIMM was just awkward as hell. You now have different bits of memory with very different characteristics, and you lose a ton of bandwidth.

If CXL had been around at the time it would have been such a nice fit, allowing for much lower-latency access.
It also seems like, in spite of the bad fit, there were enough regular Optane drives, and they were indeed pretty incredible. Good endurance, reasonable price (and cheap as dirt if you consider that endurance/lifecycle cost!), some just fantastic performance figures. My conclusion is that, alas, there just aren't many people in the world who are serious about storage performance.
by jauntywundrkind
3/15/2026 at 5:05:06 PM
Can Linux tell that different DIMMs are different? Or does it still see it all as one big memory space?
by tayo42
3/15/2026 at 7:32:42 PM
Yes, Linux was aware of the difference via ACPI tables.
by wmf
3/15/2026 at 3:56:13 PM
Optane was a victim of its own hype, such as “entirely new physics” or “as fast as RAM, but persistent”. The reality felt like a failure afterwards, even though it was still revolutionary, objectively speaking.
by p-e-w