2/19/2026 at 3:11:15 AM
It's amazing to step back and look at how much of NVIDIA's success has come from unforeseen directions. For their original purpose of making graphics chips, the consumer vs pro divide was all about CAD support and optional OpenGL features that games didn't use. Programmable shaders were added for the sake of graphics rendering needs, but ended up spawning the whole GPGPU concept, which NVIDIA reacted to very well with the creation and promotion of CUDA. GPUs have FP64 capabilities in the first place because back when GPGPU first started happening, it was all about traditional HPC workloads like numerical solutions to PDEs.

Fast forward several years, and the cryptocurrency craze drove up GPU prices for many years without even touching the floating-point capabilities. Now, FP64 is out because of ML, a field that's almost unrecognizable compared to where it was during the first few years of CUDA's existence.
NVIDIA has been very lucky over the course of their history, but have also done a great job of reacting to new workloads and use cases. But those shifts have definitely created some awkward moments where their existing strategies and roadmaps have been upturned.
by wtallis
2/19/2026 at 12:21:46 PM
Maybe some luck. But there's also a principle that if you optimize the hell out of something and follow customer demand, there's money to be made.

Nvidia did a great job of avoiding the "oh we're not in that market" trap that sank Intel (phones, GPUs, efficient CPUs). Where Intel was too big and profitable to cultivate adjacent markets, Nvidia did everything they could to serve them and increase demand.
by brookst
2/19/2026 at 9:18:35 AM
I don't think it was luck. I think it was inevitable.

They positioned the company on high performance computing, even if maybe they didn't think they were an HPC company, and something was bound to happen in that market because everybody was doing more and more computing. Then they executed well, with the usual amount of greed that every company has.
The only risk for well positioned companies is being too ahead of times: being in the right market but not surviving long enough to see a killer app happen.
by pmontra
2/19/2026 at 3:25:36 AM
They were also bailed out by Sega.

When they couldn't deliver the console GPU they promised for the Dreamcast (the NV2), Shoichiro Irimajiri, the Sega CEO at the time, let them keep the cash in exchange for stock [0].
Without it Nvidia would have gone bankrupt months before Riva 128 changed things.
Sega's console arm went bust, not that it mattered. But they sold the stock for about $15mn (3x).
Had they held it, Jensen Huang estimated, it'd be worth a trillion [1]. Obviously Sega, and especially its console arm, wasn't really into VC, but...
My wet dream has always been: what if Sega and Nvidia had stuck together and we had a Sega Tegra Shield instead of a Nintendo Switch? Or even, what if Sega had licensed itself to the Steam Deck? You can tell I'm a Sega fanboy, but I can't help it: the Mega Drive was the first console I owned and loved!
[0] https://www.gamespot.com/articles/a-5-million-gift-from-sega...
by rustyhancock
2/19/2026 at 4:11:39 AM
Most people don't appreciate how many dead-end applications NVIDIA explored before finding deep learning. It took a very long time, and it wasn't luck.
by gdiamos
2/19/2026 at 5:03:33 AM
It was luck that a viable non-graphics application like deep learning existed which was well-suited to the architecture NVIDIA already had on hand. I certainly don't mean to diminish the work NVIDIA did to build their CUDA ecosystem, but without the benefit of hindsight I think it would have been very plausible that GPU architectures would not have been amenable to any use cases that would end up dwarfing graphics itself. There are plenty of architectures in the history of computing which never found a killer application, let alone three or four.
by wtallis
2/19/2026 at 5:16:54 AM
Even that is arguably not lucky; it just followed a non-obvious trajectory. Graphics uses a fair amount of linear algebra, so people with large-scale physical modeling needs (among many others) became interested. To an extent, the deep learning craze kicked off because developments in GPU computation made training economical.
by oivey
2/19/2026 at 7:12:04 AM
Nvidia started their GPGPU adventure by acquiring a physics engine and porting it over to run on their GPUs. Supporting linear algebra operations was pretty much the goal from the start.
by imtringued
2/19/2026 at 10:16:18 PM
That physics engine is an example of a dead end.
by 0x457
2/19/2026 at 10:36:49 AM
They were also full of lies when they started their GPGPU adventure (as they still are today).

For a few years they continuously repeated that GPGPU could provide about 100 times more speed than CPUs.
This has always been false. GPUs really are much faster, but their performance per watt has mostly hovered around 3 times, and sometimes up to 4 times, that of CPUs. This is impressive, but very far from the "100" factor originally claimed by NVIDIA.
Far more annoying than the exaggerated performance claims is how the NVIDIA CEO talked, during the first GPGPU years, about how their GPUs would democratize computing, giving everyone access to high-throughput computing.
After a few years, these optimistic prophecies stopped and NVIDIA promptly removed FP64 support from its affordably priced GPUs.
A few years later, AMD followed NVIDIA's example.
Now only Intel has made an attempt to revive GPUs as "GPGPUs", but there seems to be little conviction behind it, as they do not even advertise the capabilities of their GPUs. If Intel also abandons this market, then the "general-purpose" in GPGPU will really be dead.
by adrian_b
2/19/2026 at 10:20:09 PM
> A few years later, AMD followed NVIDIA's example.

When bitcoin was still profitable to mine on GPUs, AMD's cards performed better because they weren't segmented the way NVIDIA's were. It didn't help AMD, not that it matters. AMD started segmenting because they couldn't make a competitive card at a competitive price for the consumer market.
by 0x457
2/19/2026 at 11:37:46 AM
GPGPU is doing better than ever.

Sure, FP64 is a problem and not always available in the capacity people would like, but there are a lot of things you can do just fine with FP32, and all of that research and engineering absolutely is done on GPUs.
The AI craze also made all of it much more accessible. You don't need advanced C++ knowledge to write and run a CUDA project anymore. You can just take PyTorch, JAX, CuPy or whatnot and accelerate your NumPy code by an order of magnitude or two. Basically everyone in STEM is using Python these days, and the scientific stack works beautifully with NVIDIA GPUs. Guess which chip maker will benefit if any of that research turns out to be a breakout success in need of more compute?
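To give a rough idea of what I mean, here's a minimal sketch (assuming an NVIDIA GPU with the CuPy package installed; the array size and the math are just placeholders): you swap the numpy namespace for cupy and the same array code runs on the GPU.

    # Minimal sketch: swap the numpy namespace for cupy and the same array
    # expression runs on the GPU. Array size and math are placeholders.
    import numpy as np
    import cupy as cp  # assumes an NVIDIA GPU and the cupy package

    x_cpu = np.random.rand(10_000_000).astype(np.float32)
    y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0      # plain NumPy on the CPU

    x_gpu = cp.asarray(x_cpu)               # copy the data to GPU memory
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0      # identical expression, runs on the GPU
    y_back = cp.asnumpy(y_gpu)              # copy back to the host when needed

Whether you actually see one or two orders of magnitude depends on the workload and on how much time you spend moving data back and forth, but the point stands: the entry barrier is now a pip install, not a CUDA C++ toolchain.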
by KeplerBoy
2/19/2026 at 5:08:06 PM
> GPGPU could provide about 100 times more speed than CPUs

Ok. You're talking about performance.
> their performance per watt has oscillated during most of the time around 3 times and sometimes up to 4 times greater in comparison with CPUs
Now you're talking about perf/W.
> This is impressive, but very far from the "100" factor originally claimed by NVIDIA.
That's because you're comparing apples to apples per apple cart.
by tverbeure
2/19/2026 at 8:19:20 PM
For determining the maximum performance achievable, the performance per watt is what matters, as power consumption will always be limited by cooling and by the available power supply.

Even if we interpret the NVIDIA claim as referring to the performance available in a desktop, the GPU cards had power consumptions at most double those of CPUs. Even with this extra factor, there was still more than an order of magnitude between reality and the NVIDIA claims.
Moreover, I am not sure whether around 2010 and earlier, when these NVIDIA claims were frequent, the power allowed for PCIe cards had already reached 300 W or was still lower.
In any case, the "100" factor claimed by NVIDIA was supported by flawed benchmarks, which compared an optimized parallel CUDA implementation of some algorithm with a naive sequential implementation on the CPU, instead of with an optimized multithreaded SIMD implementation on that same CPU.
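To make the baseline problem concrete, here is a rough CPU-only analogy (not a CUDA benchmark; the workload and size are invented for illustration): the same fast result looks far more impressive when measured against a naive loop than against optimized code on the same machine.

    # CPU-only analogy of the baseline problem. Workload and size are invented.
    import time
    import numpy as np

    x = np.random.rand(2_000_000)

    t0 = time.perf_counter()
    acc = 0.0
    for v in x:                      # naive sequential baseline
        acc += v * v
    t_naive = time.perf_counter() - t0

    t0 = time.perf_counter()
    acc_vec = float(np.dot(x, x))    # vectorized (SIMD) baseline on the same CPU
    t_vec = time.perf_counter() - t0

    print(f"naive/vectorized ratio: {t_naive / t_vec:.0f}x")
    # Divide a GPU timing by t_naive and you get a headline-grabbing factor;
    # divide it by t_vec and the advantage shrinks to a small multiple.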
by adrian_b
2/19/2026 at 9:13:50 PM
At the time, desktop power consumption was never a true limiter. Even for the notorious GTX 480, TDP was only 250 W.

That aside, it still didn't make sense to compare apples to apples per apple cart...
by tverbeure
2/19/2026 at 5:54:52 PM
There's something of a feedback loop here: the reason transformers and attention won out over all the other forms of AI/ML is that they worked very well on the architecture NVIDIA had already built, so you could scale your model size dramatically just by throwing more commodity hardware at it.
by mandevil
2/19/2026 at 6:30:45 AM
It was definitely luck, Greg. And Nvidia didn't invent deep learning; deep learning found Nvidia's investment in CUDA.
2/19/2026 at 7:07:41 AM
I remember it differently. CUDA was built with the intention of finding/enabling something like deep learning. I thought it was unrealistic too and took it on faith from people more experienced than me, until I saw deep learning work.

Some of the near misses I remember included bitcoin. Many of the other attempts never saw the light of day.
Luck in English often means success by chance rather than by one's own efforts or abilities. I don't think that characterizes CUDA. I think it was eventual success in the face of extreme difficulty, many failures, and sacrifices. In hindsight, I'm still surprised that Jensen kept funding it as long as he did. I've never met a leader since who I think would have done that.
by gdiamos
2/19/2026 at 8:25:19 AM
Nobody cared about deep learning back in 2007, when CUDA was released. It wasn't until the 2012 AlexNet milestone that deep neural nets started to become en vogue again.
by cherryteastain
2/19/2026 at 8:38:42 AM
I clearly remember CUDA being made for HPC and scientific applications. They added actual operations for neural nets years after it was already a boom. Both instances were reactions: people were already using graphics shaders for scientific purposes and CUDA for neural nets, and in both cases Nvidia was like, oh cool, money to be made.
by gbin
2/19/2026 at 4:20:47 PM
Parallel computing goes back to the 1960s (at least). I've been involved in it since the 1980s. Generally you don't create an architecture and associated tooling for some specific application. The people creating the architecture have only a sketchy understanding of application areas and their needs. What you do is have a bright idea/pet peeve. Then you get someone to fund building the thing you imagined. Then marketing people scratch their heads as to who they might sell it to. It's at that point that you observed "this thing was made for HPC", etc., because the marketing folks put out stories and material that said so. But really it wasn't. And as you note, it wasn't made for ML or AI either. That said, in the 1980s we had "neural networks" as a potential target market for parallel processing chips, so it's always been there as a possibility.
by dboreham
2/19/2026 at 8:38:06 PM
CUDA was profitable very early because of oil and gas code, reverse time migration and the like. There was no act of incredible foresight from Jensen. In fact, I recall him threatening to kill the program if the large projects that made it unprofitable, like the Titan supercomputer at Oak Ridge, failed.
by fooblaster
2/19/2026 at 12:23:52 PM
So it could just as easily have been Intel or AMD, despite them not having CUDA or any interest in that market? Pure luck that the one large company that invested to support a market reaped most of the benefits?
by brookst
2/19/2026 at 5:06:22 AM
It was luck, but that doesn't mean they didn't work very hard too.

Luck is when preparation meets opportunity.
by MengerSponge
2/19/2026 at 3:54:42 PM
The counter-question is: why has AMD been so bad by comparison?
by pjc50
2/19/2026 at 5:17:59 PM
Because GPUs require a lot on the software side, and AMD sucks at software. They are a CPU company that bought a GPU company. ATI should have been left alone.
by Eisenstein
2/19/2026 at 3:40:32 AM
The popular GPU history is off, and it's being driven by finance bros as well. Everyone believes Nvidia kicked off the GPU AI craze when Ilya Sutskever cleaned up on AlexNet with an Nvidia GPU back in 2012, or when Andrew Ng and his team at Stanford published "Large Scale Deep Unsupervised Learning using Graphics Processors" in 2009, but in 2004 a couple of Korean researchers were the first to implement neural networks on a GPU, using ATI Radeons (ATI is now AMD): https://www.sciencedirect.com/science/article/abs/pii/S00313...

I remember ATI and Nvidia were neck-and-neck to launch the first GPUs around 2000. Just so much happening so fast.
I'd also say Nvidia had the benefit of AMD focusing on going after Intel, both at the server level and in integrated laptop processors, which was the reason AMD bought ATI in the first place.
by readitalready