4/4/2026 at 7:51:42 PM
A good technical project, but honestly useless in like 90% of scenarios. You want to use an NVidia GPU for LLMs? Just buy a basic secondhand PC (the GPU is the primary cost anyway). You want a good amount of VRAM on a Mac? Buy a Mac.
With this proposed solution you have a half-baked system: the GPU is limited by the Thunderbolt port and you don't have access to all of NVidia's tools and libraries, and on the other hand you have a system that doesn't have the integration of native solutions like MLX, plus a risk of breakage in a future macOS update.
by MrArthegor
4/4/2026 at 7:58:10 PM
Chicken/egg. NVidia tooling is lacking surely in part because the hardware wasn't usable on macOS until now. Now that it's usable, that might change.
by afavour
4/5/2026 at 12:44:27 AM
Nvidia GPUs were usable on Intel Macs, but compatibility got worse over time, and Apple stopped making a Mac Pro with regular PCIe slots in 2013. People then got hopeful about eGPUs, but they have their own caveats on top of macOS only fully working with AMD cards. So I've gotten numb to any news about Mac + GPU. The answer was always to just get a non-Apple PC with PCIe slots instead of giving yourself hoops to jump through.
by frollogaston
4/5/2026 at 2:41:36 AM
The 2019 Intel Mac Pro had PCIe slots. The Apple Silicon Mac Pro still has them as well, but they're pretty much useless.
by zoky
4/5/2026 at 3:49:01 AM
> the hardware wasn't usable on macOS
Until there is official support for Mac coming from nvidia, I don't think anything will happen.
This eGPU thing is from a third-party if I understand correctly. I don't see why nvidia would get excited about that. If they cared about the platform, they would have released something already.
by fg137
4/5/2026 at 9:09:10 PM
The eGPU "thing" should work on anything that supports Thunderbolt, as it has native support for PCIe.
by 2III7
4/5/2026 at 10:02:03 PM
The point is that if nvidia cared about the Mac platform, they would have done something to make eGPUs usable on Macs a long time ago. Even on Intel Macs, using an eGPU with nvidia cards was near impossible. nvidia just doesn't care about it after the breakdown of the two companies' relationship.
Whether a third party has created a signed driver or not doesn't matter much until there is more interest from the GPU maker. This barely moves the needle.
by fg137
4/4/2026 at 11:41:40 PM
Nvidia tooling like CUDA has worked on AArch64 UNIX-certified OSes since June of 2020: https://download.nvidia.com/XFree86/Linux-aarch64/
The software stack has been ready for Apple Silicon for more than half a decade.
by bigyabai
4/6/2026 at 10:27:38 PM
"Nvidia." Not NVidia or nVidia, or the other ways. I feel that I can frequently figure out if someone is going to express a negative view about this company based only on whether they picked a weird way to write their name.
by hank808
4/6/2026 at 11:35:21 PM
Their logo literally renders their name with a lowercase "n".
by spartanatreyu
4/5/2026 at 11:58:10 AM
Thank you for opening my mind to a viewpoint I didn't even know existed. Yes, for many scenarios this is "not even an academic exercise".
For a very select few applications this is gold. Finally, serious linear algebra crunch for the taking (without a custom GPU tapeout).
by dapperdrake
4/4/2026 at 9:40:32 PM
I mistook eGPU for virtual GPU. But I was wrong: it means external GPU.
by the_arun
4/5/2026 at 7:44:57 AM
> the GPU is limited by the Thunderbolt port
Not everything is limited by the transfer speed to/from the GPU. LLM inference, for example.
by petters
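[Editor's note: a back-of-envelope sketch of the point above. During LLM decoding, if the weights and KV cache stay resident in GPU VRAM, only token ids and logit vectors cross the link, so the Thunderbolt ceiling is rarely the bottleneck. All figures below (vocab size, precision, nominal TB4 rate) are illustrative assumptions, not measurements.]

```python
# Per-token traffic over the Thunderbolt link during decoding,
# assuming weights and KV cache stay resident in GPU memory.
vocab_size = 128_000           # Llama-3-class tokenizer (assumption)
bytes_per_logit = 2            # fp16 logits (assumption)
tb4_bytes_per_sec = 40e9 / 8   # Thunderbolt 4: 40 Gbit/s nominal peak

# Per decoded token: send one token id in, receive one logit vector out.
per_token_bytes = 4 + vocab_size * bytes_per_logit

link_limited_tok_per_sec = tb4_bytes_per_sec / per_token_bytes
print(f"link-limited decode rate: {link_limited_tok_per_sec:,.0f} tok/s")
```

Even with pessimistic assumptions, the link-limited rate is orders of magnitude above what any GPU can actually decode, which is why inference tolerates the Thunderbolt bottleneck.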
4/5/2026 at 10:29:04 AM
Even for running ML experiments, you'd mostly want to run them on rented-out clusters anyway.
by MIA_Alive
4/5/2026 at 12:28:06 PM
The tooling is just the standard Linux tooling inside the container, no? And Thunderbolt is not a real limitation.
by throawayonthe
4/5/2026 at 2:05:37 PM
> GPU is limited by the Thunderbolt port
I thought Thunderbolt was like pluggable PCIe? The whole point was not to limit peripherals.
by nailer
4/5/2026 at 4:26:26 PM
There's more to peripheral limits than the protocol used. Thunderbolt connections add latency and cap bandwidth. Both, either, or neither of those may be much of an actual problem (depending on the use case), but they are examples of limits relative to native PCIe.
by zamadatix
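[Editor's note: a rough comparison of the bandwidth ceilings mentioned above, using nominal link rates rather than measured throughput.]

```python
# Nominal peak rates; real-world data throughput is lower on both sides.
tb4_gbps = 40            # Thunderbolt 4 total link (data share is lower)
pcie4_x16_gbps = 16 * 16 # PCIe 4.0: ~16 Gbit/s per lane, 16 lanes

ratio = pcie4_x16_gbps / tb4_gbps
print(f"a native PCIe 4.0 x16 slot is ~{ratio:.1f}x the Thunderbolt 4 ceiling")
```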
4/5/2026 at 4:01:52 AM
[flagged]
by tensor-fusion
4/5/2026 at 4:03:24 AM
> same PyTorch/CUDA calls, just intercepted by a stub library that forwards them over the local network.
At that point you're making more work for yourself than debugging over SSH.
by bigyabai
4/5/2026 at 8:42:44 AM
[dead]
by tensor-fusion