alt.hn

5/14/2026 at 3:47:31 PM

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/

by allenleee

5/14/2026 at 4:41:33 PM

I have been bothering the VM team for years for VM GPU pass through. I worked on the Apple Silicon Mac Pro and it would have made way more sense if you could run a linux VM and pass through the GPU that goes inside the case!

Sadly, as you can tell, they have not taken me up on my requests. Awesome that other people got it working!

by matthewfcarlson

5/14/2026 at 5:34:01 PM

It looks like the pass through part here was implemented using standard DriverKit interfaces, if I'm not mistaken. That is, the PCIe BAR can already be mapped from user space, without any extra modifications to macOS. It's just a matter of VMMs, such as QEMU, adopting this interface in addition to Linux VFIO and the like (unless you're talking about Virtualization.framework, which is kind of a VMM of its own).
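
For anyone curious what that looks like from the application side, here's a minimal sketch of the generic IOKit user-client pattern a VMM could use (my own illustration, not code from the project; the service name "ExamplePCIPassthroughDext" and the memory-type index 0 are hypothetical placeholders that would be defined by whatever dext you talk to):

    /* build with: cc bar_map.c -framework IOKit */
    #include <stdio.h>
    #include <mach/mach.h>
    #include <IOKit/IOKitLib.h>

    int main(void) {
        /* Hypothetical dext class name, for illustration only. */
        io_service_t service = IOServiceGetMatchingService(
            kIOMainPortDefault, IOServiceMatching("ExamplePCIPassthroughDext"));
        if (service == IO_OBJECT_NULL) {
            fprintf(stderr, "driver not found\n");
            return 1;
        }

        io_connect_t conn = IO_OBJECT_NULL;
        kern_return_t kr = IOServiceOpen(service, mach_task_self(), 0, &conn);
        IOObjectRelease(service);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "IOServiceOpen failed: 0x%x\n", kr);
            return 1;
        }

        /* Map a driver-exposed memory region (e.g. a PCIe BAR) into this
           process; a VMM would hand this mapping to the guest. The memory
           type index is whatever the dext chooses to export. */
        mach_vm_address_t addr = 0;
        mach_vm_size_t size = 0;
        kr = IOConnectMapMemory64(conn, 0, mach_task_self(),
                                  &addr, &size, kIOMapAnywhere);
        if (kr == KERN_SUCCESS) {
            printf("mapped %llu bytes at 0x%llx\n",
                   (unsigned long long)size, (unsigned long long)addr);
            IOConnectUnmapMemory64(conn, 0, mach_task_self(), addr);
        }
        IOServiceClose(conn);
        return 0;
    }

On the dext side, this would typically be backed by something like CopyClientMemoryForType deciding which BAR backs which memory type; the point is only that the plumbing is ordinary user-space IOKit, no kernel changes required.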

What exactly do you feel macOS is missing?

by m132

5/14/2026 at 5:42:51 PM

I’m not very familiar with the specifics of pass through but IIUC only being able to map 1.5gb of active DMA buffers at a time is pretty limiting.

by anp

5/15/2026 at 12:23:42 AM

Isn't driverkit essentially a separate user space stack compared to regular code? I remember seeing the driverkit specific dyld caches in macos root partition images that included their own copies of everything down to libsystem. Getting driverkit code to run in the same process as normal user code seems like it'd be quite an uphill battle.

Presumably with the right entitlements you can just hit the same (presumably IOKit) syscalls that driverkit does. But that's an extra layer of reverse engineering, and you're not really using driverkit anymore.

by monocasa

5/15/2026 at 12:55:56 AM

it is a separate stack, but that probably doesn't matter much. a user process (in my case, qemu) can communicate with a driverkit driver. the user process can also map memory through the driver, which is how this pci passthrough system works.

i don't think the issues with the project really are specific to driverkit.

by scottjg

5/14/2026 at 8:30:11 PM

>> This project requires a special entitlement from Apple. I’ve requested it, and heard they may be open to granting it, but I have not yet heard back, and I’m told that the wait time could be months.

> I have been bothering the VM team for years for VM GPU pass through.

Good luck. I'm sure they're keen on giving people access to this so that people can spend their money on NVIDIA GPUs instead of buying more expensive Macs. :)

Would of course be awesome, but I'd be very surprised if it happened.

by mikae1

5/14/2026 at 11:14:05 PM

There isn't a more expensive Mac option to buy if what you're after is a gaming GPU. It's more likely that the VM team sees this as a very low benefit ticket to pursue given the tiny segment of Mac gamers hoping to improve their options with a Linux VM for gaming.

(Meanwhile, I'm recompiling Wine to see if I can patch it to address an issue that was hotfixed in Proton two weeks ago but isn't in a CrossOver build yet, so yeah, there's maybe some arguments to be made here that I'd be a potential beneficiary. If I weren't too cheap to spring for an eGPU in today's market, anyway.)

by codebje

5/14/2026 at 11:36:40 PM

The entitlement in question is the standard `com.apple.developer.driverkit.transport.pci` [0], required for anything that touches the PCIe bus [1]. Apple is generally restrictive with how much third-party applications can do on machines with SIP/"full security", so I'm not exactly surprised. It's not an Apple-private entitlement, however.

The VFIO-style driver made by the author of this also appears generic enough to support all kinds of PCIe devices, not just GPUs. Apple might find a way to weasel out of this ("hey, this is for hardware companies and you don't seem to be affiliated with one", "your driver requests too broad access", etc.) if there really is a conflict of interest, but so far, there's a chance it will just get rubber-stamped.

I can see them rejecting it for legitimate reasons, though, at least as far as "legitimate" with Apple goes. This driver is essentially a thin layer over PCIDriverKit, exposing all functionality that's supposed to be behind the entitlement to arbitrary applications, in similar fashion to WinRing0. They probably didn't come up with all this bureaucracy only to sign something like that in the end. We'll see what happens.

[0] https://github.com/scottjg/qemu-vfio-apple/blob/84ecdcf5db6b...

[1] https://developer.apple.com/documentation/pcidriverkit/creat...

by m132

5/14/2026 at 6:45:06 PM

two semi interesting things to note around this:

1. Virtualization.framework seems to support some form of GPU passthrough from the host (granted, not eGPU - it's for the integrated GPU). I think the primary use case is having macOS guests get acceleration, while still sharing GPU time with the host. There is also a patch that recently hit QEMU mainline that uses the "venus server" with virtio-gpu to provide similar functionality for Linux guests under Hypervisor.framework.

2. Apple internally has some kind of PCI passthrough support available in Virtualization.framework. It seems like the code is shipped to customers in the framework, but it relies on some kind of kext or kernel component that isn't shipped in retail macOS. I can't say if that's intended to ever be released to customers, but clearly someone at Apple has thought about this feature.

by scottjg

5/14/2026 at 11:56:13 PM

I experimented with booting Arm macOS 14-26 in QEMU a while back, building on the work of Alexander Graf for macOS 12-13, and reverse-engineered substantial parts of Hypervisor.framework, the in-kernel hypervisor, and a bit of Virtualization.framework. Got newer versions of Sequoia to boot past the login screen, with GPU acceleration too.

Unless there's another method I missed, the internal GPU "pass through" of Virtualization.framework you're thinking of might actually just be paravirtualization, at least that's what the name suggests. It's implemented in the public ParavirtualizedGraphics framework [0], albeit the relevant interfaces for PG on Arm macOS are private [1]. I haven't looked that deep into it per se, but, while fixing the bugs around it, I've run into a few clues suggesting that it's just a command stream + shared memory being passed around. It also uses its own generic driver on the guest side.

Great job, by the way! Love how authors of pieces like this casually come here to comment :)

[0] https://developer.apple.com/documentation/paravirtualizedgra...

[1] https://github.com/qemu/qemu/blob/edcc429e9e41a8e0e415dcdab6...

by m132

5/15/2026 at 12:45:07 AM

thanks!

there also appears to be a generic pci passthrough path. we were discussing it on the qemu-devel list: https://lore.kernel.org/qemu-devel/C35B5E97-73F2-4A60-951B-B...

by scottjg

5/15/2026 at 12:47:36 AM

Oh, thanks for letting me know, and for the upstreaming work too! I might join the party once I find some more time :)

by m132

5/14/2026 at 5:46:26 PM

What are the chances there will be another Mac Pro in the future?

Will Apple ever make a computer that makes Siracusa happy? (and do you have the "Believe" shirt?)

by caycep

5/14/2026 at 6:31:53 PM

Never. A couple of years ago Apple gave up on the server market, which is why having Swift on Linux is so relevant for app developers.

Now they have given up on the workstation market as well, the market that really values having slots for a myriad of cards.

A Thunderbolt cable salad is only for those that miss the external expansions from the 8- and 16-bit home computer days.

Which is clearly where Apple is focused nowadays, if you look back at the vertical integration before the PC clone market took off.

So now if you really need a workstation, it is either Windows, or one of those systems sold with Red Hat Enterprise Linux/Ubuntu from IBM, Dell, HP.

by pjmlp

5/14/2026 at 7:27:57 PM

If you want a workstation, you are probably better off building it yourself, or having your local computer store do it. The primary exceptions are AMD strix halos or the nvidia dgx spark.

I haven’t seen a non-laughable workstation config from the big vendors since the dot com bubble. Presumably they exist, I guess?

by hedora

5/14/2026 at 8:05:48 PM

DISCLAIMER: Only speaking for myself, not employers or affiliates.

I've been pretty darn happy with the Puget Systems custom workstation I ordered last year before the memory craze started (especially since it has 192GiB of DDR5).

I also ordered another family member a custom "Tiki" system from Falcon Northwest and that has also been quite excellent from what I've seen and they've told me.

Now is obviously not the most economical time to order a new system, but when it is appropriate (and for what it's worth) I think those are two great system builders.

by binarycrusader

5/14/2026 at 8:55:18 PM

I wouldn’t count them as a big vendor, but I’ve only heard good things. Local shops around here charge like $99 to put a machine together, install an OS and run burn-in testing. You get more choice than with an outfit like Puget, but less carefully tested part/cooling selection, etc.

The last I checked, the really big players tended to add value-add gimmicks (water cooling is a common one, custom PSU form factors are another) with reliability/compatibility issues. That’s the tier to avoid, not the Puget Systems of the world.

by hedora

5/15/2026 at 5:23:03 AM

I picked both Puget Systems and Falcon Northwest because for the most part, both focus on pre-tested off-the-shelf parts with good reliability data from their own servicing.

My Puget Systems workstation for example has a simple AIO for cooling with some Noctua fans and a Fractal Design 7 XL full tower case.

The Tiki system I ordered for a family member from Falcon Northwest does have a custom case, but almost everything else is fairly standard inside. The super small form factor was important to them.

Could I have built either of these systems myself? Absolutely -- I've done that for at least the prior 20 years or so, and I've built dozens for employers, but it sure was nice to buy one that just worked this time instead of having to fiddle with memory sticks or find exactly the right BIOS settings for stability, etc.

I'm well aware of the premium I paid, but I can honestly say it has been incredibly nice to have a workstation that just works without having to fiddle with BIOS updates or hardware. I also don't really have the time to spare so I was entirely willing to trade funds for time.

by binarycrusader

5/14/2026 at 9:37:23 PM

Non-standard parts are not about value-adding, they're about cost-cutting if you're feeling charitable, and about forcing vendor lock-in if you're not.

by fluoridation

5/14/2026 at 7:35:59 PM

Yes they exist, and businesses aren't building PCs from parts themselves.

by pjmlp

5/14/2026 at 8:01:25 PM

They get features that us plebs buying retail don't get, at prices the vast majority of us wouldn't pay if it were our own cash.

by esseph

5/14/2026 at 8:10:57 PM

Just because you're cheap and don't value your time, doesn't mean they don't exist.

by fragmede

5/14/2026 at 8:57:02 PM

IMHO - extremely little.

It is too inefficient to design a machine which _might_ have two GPUs and a flock of additional drives installed in it. It just makes sense to instead design around having independent hardware in its own case, which can meet its own power/cooling needs. This has been a design goal since the trashcan Mac.

Having a PCIe bus increases bandwidth and reduces latency, but once you account for eGPU and for people who would be happy building custom solutions on platforms other than macOS, there's likely not enough identified market for a modular design.

by dwaite

5/14/2026 at 6:18:42 PM

[flagged]

by kahrl

5/14/2026 at 4:59:24 PM

It feels like half the problem in this blog post is dealing with memory access issues induced by QEMU and the VM boundary... it's probably something dumb I'm missing, but if you boot up Ubuntu in Docker, wouldn't the NVIDIA drivers still load? And then you wouldn't have to fight Apple about the memory management because OSX would still own the memory?

by crdrost

5/14/2026 at 5:04:12 PM

> but if you boot up Ubuntu in Docker, wouldn't the NVIDIA drivers still load?

Even if the drivers loaded, they can't talk to the GPU from within docker (unless one implements PCI passthrough). MacOS owns the PCI bus in this scenario.

by swiftcoder

5/14/2026 at 5:59:28 PM

docker on macos runs in a linux vm

by smw

5/14/2026 at 5:02:41 PM

The problem is that the driver wants to own the memory.

by jmalicki

5/14/2026 at 5:02:42 PM

I still believe the lack of NVIDIA GPU support in the Mac Pro will go down as one of the greatest missed opportunities in tech.

Anyway, the Mac Pro is dead now. There are only so many sales audio and video professionals can provide.

by brcmthrowaway

5/14/2026 at 9:13:36 PM

There was some bad history between Apple and Nvidia. Perhaps with a new generation of leadership at Apple things might change.

https://www.reddit.com/r/hardware/comments/1hmgmuf/apples_hi...

by runjake

5/15/2026 at 1:04:51 AM

I wasn't in the room when it happened, but this is very different than the story told internally about why Apple became allergic to Nvidia.

Arguably more petty. SJ has been dead for almost 15 years now; I imagine the C-suite might get over it at some point.

by mercutio2

5/15/2026 at 9:02:39 AM

> Arguably more petty

I can believe it. IIRC Jobs also snubbed ATI once after they leaked the GPUs going in the next PowerMac model.

by kalleboo

5/15/2026 at 12:42:12 AM

Maybe with Tim and Jensen going on holiday together in China, the relationship might be healed somewhat.

Things have moved on since the days where GPUs in Macs were a priority.

But then the AI race has changed things. So who knows - maybe we will one day see official eGPU support from Apple and new drivers from nVidia. Wouldn't put money on it though...

by firecall

5/14/2026 at 5:22:13 PM

> I still believe the lack of NVIDIA GPU support in the Mac Pro will go down as one of the greatest missed opportunities in tech.

I don’t know about that. Apple supported some full size GPUs in past product lines and the number of users was very small. Granted, LLMs change that demand but the audience for Mac Pro buyers who would use a full-size GPU that is impossible to obtain is almost nothing compared to their laptop sales.

by Aurornis

5/14/2026 at 5:26:08 PM

The audience for Mac Pro buyers is almost nothing, full stop. It failed to find a niche, and now Apple is getting rid of it: https://www.macrumors.com/2026/03/26/apple-discontinues-mac-...

Part of the reason the new Mac Pro failed to find an audience can definitely be blamed on macOS' hostility to third party hardware. Who knows what Apple would be worth if they beat Nvidia's Grace CPU to the datacenter market. It was certainly their opportunity.

by bigyabai

5/14/2026 at 6:36:38 PM

Yes, because they already moved on to workstations powered by either Windows or Red Hat Linux/Ubuntu.

The only ones left were people like John Siracusa, who still hoped until the very last minute that Apple would change their mind.

by pjmlp

5/14/2026 at 6:00:35 PM

True, they could do any number of things. But a datacenter play would appear quite random to investors and their core audience. Broadcom + Nvidia however...

by brcmthrowaway

5/14/2026 at 6:25:28 PM

Apple seems to be content to sell shovels in the AI gold rush.

Admittedly… what’s on my desk? A MacBook M4 Air, a Mac Studio, and there’s an x86 iMac in the corner.

What goes in the travel bag? A MacBook Pro or the Air.

Every time I look at buying something else the math doesn’t add up.

The 5090 sits in a commodity PC chassis. It’s not like I need a model running on my own computer.

by trollbridge

5/14/2026 at 6:35:00 PM

The missed opportunity is like with the server market: they're now giving the workstation market to Windows and Linux.

It isn't only audio and video.

by pjmlp

5/14/2026 at 5:25:24 PM

I guess that little problem with the Nvidia chips overheating in the MacBook Pro didn’t give Apple a lot of confidence

by jbverschoor

5/14/2026 at 5:42:12 PM

The Mac Pro isn't a Macbook Pro. It has socketed PCI slots and should be able to support the user's hardware in macOS' software, regardless of how Apple feels.

by bigyabai

5/14/2026 at 9:15:54 PM

Seriously, the decades-long grudge against Nvidia that we always hear about seems like the most ridiculous and immature business move. I expect that kind of thing from an individual, you know, “I NEVER fly American Airlines!!!” but in business, such a permanent ban on one of the two players in a market, the leader no less. I don’t get it.

Maybe it doesn’t matter that much now because they’ve literally exited all the businesses where an external GPU is going to matter. But sticking with AMD all that time out of spite is just wild.

by xp84

5/14/2026 at 9:25:28 PM

Audio and Video professionals jumped ship around the time Apple canned all the pro software

by Melatonic

5/14/2026 at 10:40:32 PM

In your view why have they refused to implement a "Linux VM and pass through the GPU that goes inside the case?"

by SilentM68

5/14/2026 at 5:17:37 PM

Excellent article.

The game benchmarks are fun but the LLM improvements are where this gets really interesting for practical use. I love Apple platforms as an approachable way to run local models with a lot of RAM, but their relatively slow prompt processing speed is often overlooked.

> Here you can see the big issue with Macs: the prompt processing (aka “prefill”) speed. It just gets worse and worse, the longer the prompt gets. At a 4K-token prompt, which doesn’t seem very long, it takes 17 seconds for the M4 MacBook Air to parse before we even start generating a response. Meanwhile, if you strap the eGPU to it, it’ll only take 150ms. It’s 120x faster.

The prefill problem goes unnoticed when you’re playing around with the LLM in small chats. When you start trying to use it for bigger pieces of work, the compute limit becomes a bottleneck.

The time to first token (TTFT) charts don’t look bad until you notice that they had to be shown on a logarithmic scale because the Mac platforms were so much slower than full GPU compute.

by Aurornis

5/14/2026 at 5:27:02 PM

I'm curious and not an expert here, do you know why the TTFT is so much worse on Mac? To elaborate, the article just says that this step is compute bound, but I'm wondering whether it is just that simple or if it might also be less optimised in MLX?

by superlopuh

5/14/2026 at 5:40:50 PM

Prefill (prompt processing) is compute bound doing large matrix operations. Token generation (aka tokens/s) is memory bandwidth bound.

The RTX 5090 has an incredible amount of compute performance for matrix operations and a lot of memory bandwidth. The Apple Silicon parts have unusually high memory bandwidth for general purpose compute chips, which is why they can generate tokens so fast. Their raw matrix compute performance is amazing for their power envelope but not nearly as fast as a dedicated GPU consuming 400-500W.
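
To make the split concrete, here's a rough back-of-the-envelope (my own approximation, not numbers from the article; the formulas are the usual ~2 FLOPs per parameter per prompt token for prefill and weight-bytes over bandwidth for decode, and the hardware figures are round assumptions):

    #include <stdio.h>

    int main(void) {
        /* Illustrative model: ~8B parameters at 8-bit quantization. */
        double params        = 8e9;
        double weight_bytes  = 8e9;
        double prompt_tokens = 4096;

        /* Round hardware assumptions, not measured figures. */
        double m4_flops  = 4e12;    /* ~4 TFLOPS of usable matmul on a base M4 */
        double gpu_flops = 200e12;  /* RTX 5090 dense tensor throughput, order of magnitude */
        double m4_bw     = 120e9;   /* ~120 GB/s unified memory */
        double gpu_bw    = 1.8e12;  /* ~1.8 TB/s GDDR7 */

        /* Prefill is compute bound: ~2 FLOPs per parameter per prompt token. */
        double prefill_flops = 2.0 * params * prompt_tokens;
        printf("prefill: M4 ~%.1f s, 5090 ~%.2f s\n",
               prefill_flops / m4_flops, prefill_flops / gpu_flops);

        /* Decode is bandwidth bound: every generated token re-reads the weights. */
        printf("decode:  M4 ~%.0f tok/s, 5090 ~%.0f tok/s\n",
               m4_bw / weight_bytes, gpu_bw / weight_bytes);
        return 0;
    }

With those assumptions you get on the order of 16 s of prefill on the M4 versus a fraction of a second on the 5090 (close to the ~17 s the article measured for a 4K-token prompt), while decode only differs by the bandwidth ratio, which is why the tokens/s gap looks so much smaller than the TTFT gap.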

Apple added tensor cores on the M5 generation which help with those matrix operations, which is why the M5 performs so much better than the M4 Max in that article.

Dedicated GPUs like the RTX 5090 are in another league, though.

You can see the divergence in the high resolution gaming benchmarks, too. Once he starts benchmarking at 4K or 6K where the CPU emulation stops being a bottleneck, the raw compute of the 5090 completely crushes any of the Apple Silicon GPUs.

by Aurornis

5/14/2026 at 5:44:41 PM

Apple GPUs didn’t have tensor cores until the M5 (aka “a neural accelerator in each core”), and in the article’s charts an M5 Pro significantly beats an M4 Max (while in other workloads the gap would be much smaller, since a Pro is ~1/2 a Max).

EDIT: since Aurornis beat me by 3 minutes, I’ll add another interesting tidbit instead :)

NVIDIA tensor cores on consumer GPUs are massively less powerful per SM than on their datacenter counterparts (which also makes it easier to reach peak efficiency on consumer GPUs, because the rest of the pipeline becomes a bottleneck much sooner, as per Amdahl’s Law).

This is potentially changing with Vera Rubin CPX which looks an awful lot like a RTX 5090 replacement but with the full-blown datacenter tensor cores (that won’t be available unless you pay for the datacenter SKU) - so it will have very high TFLOPS relative to its bandwidth.

The target market for the CPX is exactly this: prefill and Time To First Token. You can basically just throw compute at the problem for (parts of) prefill performance (but it won’t help anything else past a certain point) and the 5090/M5 are nowhere near that limit.

So the design choice for NVIDIA/Apple/etc of how much silicon to spend for this on consumer GPUs is mostly dictated by economics and how much they can reuse the same chips for the different markets.

by ademeure

5/14/2026 at 9:29:13 PM

Does that include stuff like the Pro Blackwell 6000? Or are the tensor cores comparably good per SM? They perform quite well on many tests.

by Melatonic

5/14/2026 at 9:58:09 PM

The Pro Blackwell 6000 is just a 5090 with more VRAM. It does not have the tcgen05 (5th gen tensor core) instructions despite the "5th gen tensor core" branding, and thus does not support any optimized Blackwell (sm100) kernels.

Every Blackwell card other than the (G)B100, (G)B200, (G)B300 and Jetson Thor uses the Ampere tensor core instruction (mma.sync), but with fp4/6/8 added on. Beyond that, the DGX Spark (which is advertised as having the same architecture as the B200) has especially weak (not tcgen05) tensor cores that have a very narrow operating window and low utilization.

by aviinuo

5/14/2026 at 5:41:54 PM

> I'm curious and not an expert here, do you know why the TTFT is so much worse on Mac?

because the GPUs aren't as fantastic as everyone assumes?

> might also be less optimised in MLX?

prefill has gotta be one of the most optimized paths in MLX...

by mathisfun123

5/14/2026 at 8:34:57 PM

No you don't understand, on Apple Silicon my CPU has comparable memory bandwidth to a $400 Pascal-era GPU. With the unified memory architecture, that means my iGPU gets 2016-levels of DDR transfer speed with none of the upsides of CUDA. It's the most cutting-edge hardware ever put in a personal computer, without a doubt.

by bigyabai

5/15/2026 at 8:19:24 AM

Please show me on the 2016-era $400 Pascal GPU where you can install the 256 GB of VRAM.

by fgfarben

5/14/2026 at 6:21:09 PM

It feels pedantic to point it out, but it’s actually 113x faster.

Seeing the author present their results like this gives off the impression that they’re biased, which I am sure they aren’t.

by Moosdijk

5/14/2026 at 7:15:00 PM

the exact numbers in the graph are 17019ms vs 142ms. so you're right, it's not 120x, it's 119.85x.

by scottjg

5/14/2026 at 7:56:16 PM

That explains it. Thanks!

by Moosdijk

5/14/2026 at 8:36:13 PM

Use oMLX. Qwen3.6 - 300tok/s PP, 30tok/s TG.

by brcmthrowaway

5/15/2026 at 1:07:06 AM

This is The Way.

by mercutio2

5/14/2026 at 6:12:21 PM

> Because OpenGL is not well-supported anymore on macOS, the game is completely unplayable there, even with CrossOver. Ironically, it plays totally fine on a Windows PC, but this is a game you literally can’t play on Mac without this eGPU setup.

I understand that this is true, but it seems that Doom does support Vulkan; you would just need to add VK_NV_glsl_shader to MoltenVK. Probably much less work than what went into hanging an RTX 5090 off of an M4. Still, kudos to Scott, and the local AI inference speeds are pretty cool. What a crazy project! <applause>

by djmips

5/15/2026 at 1:05:57 AM

interesting. that might be a fun intro project to MoltenVK. I hadn't dug into what was missing for Doom. I thought maybe the issue was that the intro/menu always ran in opengl mode or something. If it's just one missing op, that's way easier.

by scottjg

5/14/2026 at 4:55:04 PM

This is pretty impressive. My impression was that eGPUs simply do not work with Apple Silicon.

(EDIT: Apple agrees with my impression. “To use an eGPU, a Mac with an Intel processor is required.” And, on top of that, the officially supported eGPUs were all AMD not NVIDIA. https://support.apple.com/en-us/102363)

by divbzero

5/14/2026 at 9:33:24 PM

This is not using an eGPU with macOS, ie you can't run your chrome on macOS with its GPU acceleration coming from this eGPU. This is tunneling that eGPU to a Linux VM.

by steelbrain

5/14/2026 at 8:09:27 PM

I came into the post thinking it would be running a VM through the slow tinygrad driver... but this is much, much better.

It'd be amazing if Apple would provide better support, and allow more than that 1.5 GB window to make this easier. Arm overall has some quirks with PCIe devices, but at least in Linux, it's gotten so much easier since most modern drivers treat arm64 as a first class citizen.

by geerlingguy

5/15/2026 at 12:52:40 AM

i don't know for sure, but i suspect what makes the tinygrad stuff slow isn't the macos host driver itself. i think they're doing something very similar to what i'm doing, which is just mapping the PCI BARs to userspace, then they have a bunch of python code that drives the GPU.

this is only speculation, but i think the big thing that makes tinygrad slow is that the tinygrad inference engine has not really been optimized much for all these open LLM models. probably most of the work has gone towards optimizing the stack for george's self-driving hardware company. since you can't just run the existing CUDA kernels on their engine, that makes things a lot tougher, engineering-wise.

i am actually curious if my project could share a macos host driver with them. i think it would need some changes, but it seems like there's a lot of overlap

by scottjg

5/14/2026 at 4:24:46 PM

This is proper mad science, love it

by swiftcoder

5/15/2026 at 7:14:04 AM

The gaming part is fun, and so are the local AI numbers. Fast prefill changes the whole experience; it makes local inference feel practical.

by Riany

5/14/2026 at 4:31:06 PM

Nicely done! Glad to see real hacking is still alive in the age of AI.

by delbronski

5/14/2026 at 9:36:24 PM

I love how it's listed as "RTX 5090 Discrete". Sir, that is anything but discrete!

by bilekas

5/15/2026 at 1:08:30 AM

i admit, you got me chuckling with that one.

by scottjg

5/14/2026 at 6:20:33 PM

Wait, this is incredible. I have a spare 5090 lying around and run a claw-like on my M4 Mini. Just mounting it in some sort of 3D-printed frame for stability and plugging it into the TB port might get me a pretty viable tool for local inference. Would need something neat to ensure the power etc. is well fed.

The problem is `max-num-seqs` and `max-model-len` fight each other, and unless you're in the pure single-client mode you'll need multiple slots so to speak.

by arjie

5/14/2026 at 8:18:37 PM

If you get too busy to take advantage, I'll take that spare 5090 off your hands, free of charge :)

by pat_space

5/14/2026 at 4:29:14 PM

This seems pretty useful for AI inference if it can pass Apple approval. I've wanted to use my Nvidia GPUs with a Mac Mini, this would enable it to run CUDA directly. Very cool!

by coder68

5/14/2026 at 4:20:57 PM

I'm guessing the x86 emu is cause Windows games are rarely built for ARM, right? Was kinda curious how an ARM VM would fare. Anyway awesome article.

by frollogaston

5/14/2026 at 4:27:55 PM

Yes. Valve has done a ton of work here because it's required to be able to run x86 games on a Steam Frame which has an ARM cpu.

by hparadiz

5/14/2026 at 4:44:07 PM

The Steam Deck runs a full x86-64 AMD APU. The work Valve has done for that was to get Windows games to run seamlessly on Linux.

Hopefully in 2026, with the Valve Index VR headset which is ARM (Qualcomm?), we get what you're talking about here - basically Proton for Win32/64 on Linux ARM64.

Side note that Windows on ARM isn't bad, just that it's priced out of its league and cooling is awful for gaming on current laptops. The only issue I had was OpenGL needing some obscure GL-on-DirectX thing for Maya3D to get games to work.

by hypercube33

5/14/2026 at 5:00:23 PM

To keep the chain of Cunningham's Law going, Valve's 2026 headset is called the Steam Frame, not the Index (which came out in 2019).

But Valve's ARM efforts even mean that Android devices can play some (mostly less graphically intensive) Steam games. That makes me very excited about the prospects for the future of gaming handhelds.

by delecti

5/14/2026 at 4:44:48 PM

As sibling pointed out, the Steamdeck basically runs a Ryzen 3 7335U which is x86.

by sva_

5/14/2026 at 4:29:48 PM

The Steam Deck is pure x86, it's not an ARM-based CPU. The Steam Frame might be what you're thinking of.

by bigyabai

5/14/2026 at 4:56:02 PM

You're right. I was thinking of what I was reading about the Steam Frame

by hparadiz

5/14/2026 at 4:39:37 PM

> As much as I hate to admit it, step one in most of my projects now is to ask AI about it. Maybe it’ll tell me something I don’t know.

Or, more likely, it will tell you something it doesn't know.

Reminds me of yesterday, when I was arguing with ChatGPT that the 5070TI was an actual video card. It kept trying to correct me by saying I must have meant a 4070ti, since no such 5070ti card exists.

by mywittyname

5/14/2026 at 4:52:43 PM

Or, it will acknowledge that it made a mistake and continue to make the same mistake again.

I asked Claude to generate an HTML page about PowerShell 7. It gave me a page saying 7.4 was the latest LTS release. I corrected it with links showing 7.6 was released in March and asked it to regenerate with the latest information.

It generated basically the same page with the same claim that 7.4 was the latest release.

by collabs

5/14/2026 at 5:06:10 PM

> Or, it will acknowledge that it made a mistake and continue to make the same mistake again.

People do this too though. At least the AI generally tries to follow instructions that you give it even when you are lacking clarity in the details.

I feel like it's similar to the self-driving car problem. The car could have 99.9999% reliability, drive much better and safer than a human, yet folks will still freak out about a single mistake that's made even though you have actual humans today driving the wrong way down the highway, crashing into buildings, drunk driving, stealing cars, and all sorts of other just absolutely stupid things.

We need to move away from this idea that because it's an AI system it should give you perfect responses. It's not a deterministic system and it can be wrong, though it should get better over time. Your Google search results are wrong all the time too. The NYT writes things that are factually incorrect. Why do we have such a high standard for these models when we don't apply them elsewhere?

by ericmay

5/14/2026 at 5:17:06 PM

>I corrected it with links

it should be reasonably expected that you can give a source and fix an error in the AI output.

I would even go as far as to say if a human directly told the AI "no, use 7.6 as the latest version", the AI should absolutely follow direct instructions no matter what it thinks is true. What if this human was working on a slide about the upcoming release of 7.6 that has no public documentation?

by bryceacc

5/14/2026 at 9:37:28 PM

I see a lot of angry responses in your replies, but I do think you have a good point. It seems like those arguing with you are mostly vigorously opposing a strawman: the idea that AI is perfect and that trusting AI to be perfect is the right move. Only crazy people think that, though.

For me, I ask AI questions about taxes and my health all the time. In the case of taxes, getting a basic handle on the relevant tax law is made 1000 times easier. I can always refer directly to the IRS publications to verify, once I know what I’m looking for.

For health, frankly, it would be impractical for me to ever get as much useful information from doctors as what I can easily get from AI. Four years ago, I would have a bunch of health questions and simply never know the answers to any of them because I would have nobody to ask. Now I get them all answered, and if it suggested I actually do anything that sounded even slightly risky, I’d go to the doctor, armed with much more context than I had before, to verify it.

by xp84

5/14/2026 at 5:42:49 PM

> Your Google search results are wrong all the time too. The NYT writes things that are factually incorrect.

This is also very bad and people complain about these things all the fucking time.

> Why do we have such a high standard for these models

Because Altman and Amodei are defrauding investors out of hundreds of billions of dollars on the promise that they will replace the entire workforce. Of course people are going to point out the emperor has no clothes when half of our society is engaged in mass hysteria worshipping these fucking things as the next industrial revolution, diverting massive amounts of resources to them, and ruining HN with 10 articles on the front page per day about how software engineering is dead.

by applfanboysbgon

5/14/2026 at 5:59:01 PM

> ruining HN with 10 articles on the front page per day about how software engineering is dead.

Even this article, which is theoretically about playing games on a MacBook and not about AI, has devolved into AI discussions. It's honestly kind of tiring.

I suppose the article invites it by putting an AI blurb up top, and I suppose I'm also not helping by adding my own comment, but _still_.

by dvlsg

5/14/2026 at 5:49:13 PM

> This is also very bad and people complain about these things all the fucking time.

So at worst these AI tools are as bad as the existing system. Worth complaining about? Absolutely. Worth holding to much higher standards? Nah I don't think so. Not at this stage at least. And folks are just disappointing themselves by setting up straw men expectations.

These tools are non-deterministic systems (like humans) which sometimes don't do exactly what you want (like humans) but are also extremely fast, much cheaper (for now), and have domain knowledge that is much broader than any single human's.

by ericmay

5/14/2026 at 5:51:58 PM

They aren't "straw man expectations" when the entire US economy is now oriented around those expectations.

by applfanboysbgon

5/14/2026 at 6:20:55 PM

> The NYT writes things that are factually incorrect. Why do we have such a high standard for these models when we don't apply them elsewhere?

The New York Times publishes a "corrections" section in each issue. Let me know where I can view the 60TB file where ChatGPT fesses up to its daily fails.

by reaperducer

5/14/2026 at 7:00:08 PM

"Things exist as they are today and can't possible change or improve in the future".

People lie all the time too. You're just radicalizing yourself to create a bias for no reason other than concocting a straw man expectation that you made up for yourself. What's the point of that?

by ericmay

5/14/2026 at 8:21:00 PM

But people want to do their taxes with these things lmfao

by dakolli

5/14/2026 at 5:02:42 PM

LLMs are (broadly-speaking) poorly-positioned to give you a strong verdict on plausibility of a frontier topic. That said - ChatGPT was exactly right in its response to OP!

"Very deep", "border-line impractical" "in a research-sense" is the perfect summary of this article itself! :)

by corry

5/14/2026 at 5:13:34 PM

Watching the entire economy of a superpower and ~all of online culture go absolutely ga-ga over Furbys has been one of the weirdest things I've ever witnessed.

by funimpoded

5/14/2026 at 8:25:30 PM

Watching the entire economy of a superpower bet its entire future on SOTA text autocomplete models has been interesting (which I think is what you're referring to).

Previous Empires naively bet their entire future on the words of magicians, or people who claimed they could look into water, the sky and fire and tell you what the future is going to be.

Machine Learning Engineers are the modern day Empire's court magician.

by dakolli

5/14/2026 at 5:44:39 PM

Eh, in this use case it's more like a goofy search engine.

by Apocryphon

5/14/2026 at 4:50:42 PM

This is why I use Grok expert mode. It aggressively goes out searching the web for info. It's so much better than relying on year-old data.

by perarneng

5/14/2026 at 4:55:10 PM

Yes, I really like that about Grok. It had a few good qualities but it was too verbose so now it's mostly Claude.

by _blk

5/14/2026 at 4:55:45 PM

Solid compromise is Kagi's research assistant. Aggressively cites, unlike Claude. Concise, unlike Grok.

by JumpCrisscross

5/14/2026 at 7:28:26 PM

I argued with GPT-OSS 120B about cascade lake Xeon workstation CPU parts not having a GPU when it vehemently said otherwise

by Tsiklon

5/14/2026 at 5:00:36 PM

At least ChatGPT is now aware that Codex exists. I have a chat, still in my history, from a few months ago, in which I asked for help wrangling npm to get @openai/codex working, and ChatGPT said:

> Important: Codex CLI no longer exists

> OpenAI discontinued the Codex model + CLI a while back. There is no official binary named codex in any current OpenAI npm packages. OpenAI’s current CLI tool is:

    npm install -g openai
> which installs the openai command, not codex.

The world knowledge of these models is not necessarily up to date :)

edit: I replayed the same prompt into current ChatGPT and it is less clueless now. Maybe OpenAI noticed that it was utterly dumb that GPT-5.whatever didn't believe that Codex existed and fine-tuned it.

by amluto

5/14/2026 at 5:11:13 PM

>The world knowledge of these models is not necessarily up to date :)

It's amazing how this still needs to be said. Codex was released in April 2025. The initial GPT-5 and 5.1 still had a knowledge cutoff in late 2024. Like, what did you expect? Always beware the knowledge cutoff for LLMs (although recent releases have gotten much better with researching the web for updates before answering modern software topics).

by sigmoid10

5/14/2026 at 6:21:10 PM

OpenAI being more aware of the implications would help too--last year I also struggled with using Codex to write scripts to run Codex headless, because it kept insisting that Codex was a retired model from the GPT-3 days and not a program that could be called by a script.

by z2

5/14/2026 at 4:52:01 PM

It’s training data only goes up to late 2024 or early 2025 so that might be why, though it does have access to the internet.

by simonh

5/14/2026 at 5:00:44 PM

Yeah, the solution was to link it to the nvidia page of the card, then it was like, 'oh, okay.' But at that point, I lost faith in it's ability to provide me with the information I was looking for. If it's information is so out of date that it doesn't know about the 5000 series, how could I be confident that it knew the details I was asking about (game engine related research)?

by mywittyname

5/14/2026 at 5:24:39 PM

Are you using the instant model?

by asats

5/14/2026 at 9:55:36 PM

No one ever is willing to share a link to the ChatGPT or Claude session when I ask follow-up questions.

by mh-

5/14/2026 at 6:26:56 PM

You're holding it wrong.

by reaperducer

5/15/2026 at 12:47:45 AM

[flagged]

by anomaly_

5/14/2026 at 4:55:31 PM

Depending on your ChatGPT settings...

by weird-eye-issue

5/14/2026 at 6:15:36 PM

Very nice effort. This has incredible technical depth, particularly in the DMA and QEMU sections. I also like that you didn't oversell it as the ideal Mac gaming solution. I found the AI inference results to be the most fascinating. Overall, it was a great read.

by SamiahAman

5/15/2026 at 1:58:32 AM

It renders according to the Blackwell and Hopper 100.

by rballpug

5/14/2026 at 8:49:38 PM

The lack of native games on Apple Silicon is one of the greatest crimes ever committed against computing.

I got Fallout 3 working on my M2 MBP as well as it did on Windows back in the day. Temps were cool, battery was decent. If they sold my college years gaming collection (15-ish years ago) in a way that ran natively through GoG or Steam, I'd buy every single title.

by lenerdenator

5/15/2026 at 8:53:34 AM

Skyrim runs well [1] on my M2 mac mini through crossover and rosetta. So most older games will run even better.

The real question is what happens when they drop Rosetta. They promised they'll keep the APIs related to running 32 bit games but can we trust them?

[1] Not at 8k 240 fps of course.

by nottorp

5/15/2026 at 2:29:52 AM

Porting games natively to macOS is a waste of developer time. Apple has already depreciated vast swathes of 32-bit games that were never updated to support 64-bit x86 or Apple Silicon. Developers that give macOS the same level of attention as Windows don't get the same level of support that Microsoft offers in return.

Not to mention that Mac owners are a minority share of the PC gaming market. Linux has the right idea, if you don't translate the games then you'll never have true preservation.

by bigyabai

5/15/2026 at 12:37:16 AM

Awesome dude! Extra fan on the desk too :)

by inforemix

5/15/2026 at 2:08:29 AM

I just want to point out that anything you ask ChatGPT about that hasn't been discussed 1000 times on Reddit or Wikipedia is going to be wrong, and it will only be "right" in the sense that it aligns with the artificial consensus created on those platforms.

Of course the author probably did that as a joke.

by neuroelectron

5/15/2026 at 2:17:13 AM

Pretty much! A precedent-fueled prediction engine can’t predict the unprecedented.

by MikeNotThePope

5/15/2026 at 2:30:26 AM

It (LLMs in general) actually can make some very prescient hallucinations by making similar inferences across dissimilar domains, but they have since removed that feature to prevent liability and libel. GPT3 was much more useful in this capacity, especially before they started stress testing it on 4chan (Jan 2023)

by neuroelectron

5/14/2026 at 5:35:23 PM

Once egpus work on Apple Silicon there will be little reason to own a pc

by zer0zzz

5/14/2026 at 8:38:20 PM

Been hearing this for over a decade, except back then it was eGPU in Intel Macs which were closer to other PCs if anything. Even if this didn't require so much DIY and if Thunderbolt could do PCIe speeds, most people don't want to add drama when they can just use a PC with regular PCIe slots and native compatibility with Nvidia. The native way already has enough edge cases without adding an unusual setup.

by traderj0e

5/14/2026 at 5:56:03 PM

Mac GPU isn't the bottleneck for most games. Compatibility is.

by bel8

5/14/2026 at 6:12:39 PM

I assume your reasons are different to mine so for your reasons it might very well be true. But for my reasons definitely not as long as Apple Silicon can't run Linux somewhat decently natively - and even then, it's still an Apple..

by _blk

5/15/2026 at 1:12:27 AM

The only thing Apple silicon has going for it is power use and that gap is getting closed. I can't really see any reason why I would switch to Mac, it just seems like you pay a lot more for a closed expensive environment that fights you at every step.

I'll never pay anyone for a developer licence or fee either. They can sponsor me to port my software to their platform.

by jaimex2

5/14/2026 at 7:07:51 PM

Just built a workstation with an older Threadripper Pro. It has 128 PCIE lanes, for 7 16-lane PCIE slots. An egpu has 4. I have one GPU, at x16, and I can add more.

Most people don't need that, but most people don't need an eGPU either. The number of gamers who would switch to Macbook+eGPU is negligible. It's just not compelling. For LLMs, hanging a 5090 off the thunderbolt port makes prompt processing fast, but I will be surprised if the M6 doesn't come with silicon just for that, as its the current gap. M5 is quite adequate for token generation for the price, given the RAM quantity and bandwidth. An M6 that accelerates TTFT would make an eGPU irrelevant.

For gaming, the threadripper gets at least +50FPS for windows vs linux, and some games just freeze for periods of time on linux with things like dynamic frame generation. I have an SSD for windows just for gaming.

by lowbloodsugar

5/14/2026 at 8:13:38 PM

> The number of gamers who would switch to Macbook+eGPU is negligible. It's just not compelling.

This. eGPUs fade in and out of relevance every few years, and even back in the Intel Macbook days there were people advocating for eGPU gaming with Bootcamp. It was a terrible solution, there is every reason to avoid macOS with a dGPU when you have something like Linux or even Windows as an alternative.

by bigyabai

5/14/2026 at 9:32:46 PM

Thats also because we keep trying to use terrible interconnects. If we get an interconnect with a proper latency spec things might change

by Melatonic

5/14/2026 at 10:03:13 PM

[dead]

by hacker_mar

5/15/2026 at 1:01:02 AM

Man, Apple fans are still proving the stereotype to be accurate after 20 years.

Ignoring the fact that the Mac OS gets in your way every time you try to do something that Apple doesn't like, with no guarantee that an update won't break anything existing, ignoring the fact that Macs are non repairable, non upgradable, ignoring the fact that they don't support multiple displays flawlessly, I hope you realize that egpu support natively is NEVER coming to Macs, because why the fuck would they enable it when they can just charge you full price for a desktop computer? Apple is built on the sole image that Apple users have money, so buying another Mac Mini or Mac Pro in addition to your laptop is what you are supposed to do.

Android is way ahead of Mac with Android Desktop mode and Samsung Dex, to the point where you don't even need to own a laptop anymore. Ive been using my S24/S25 with lapdock for over 3 years now as a laptop, and it works flawlessly. Apple can easily do this with iPhone, but they won't because that means one less macbook purchase.

by ActorNightly

5/15/2026 at 1:20:13 AM

I'm impressed by the effort and the technical know-how.

Another part of me is almost annoyed that Apple's complete apathy toward obvious computing use cases like this is rewarded by a project like this. I feel like Macs and macOS should not be rewarded for being so difficult to extend and use outside of Apple's narrow vision of the use case of their hardware.

Apple used to support this use case wholeheartedly, but we can see that it's abandoned on their end: Intel-only, and the newest generation of AMD GPUs supported are the 6000 series: https://support.apple.com/en-us/102363

I got tired of rewarding Apple for refusing to make a computer that makes the most of the technology available. This stuff is all a lot worse than just moving over to Linux or even Windows. With hardware like the Framework 13 Pro coming out, along with a surprisingly good set of premium PC laptops, I really don't think the Mac hardware is worth it anymore. Others have legitimately caught up, especially with Apple's aging MacBook Pro chassis with the horrible notch.

by dangus

5/14/2026 at 7:09:12 PM

where did you get a 5090 I will buy it from you

by semiinfinitely

5/14/2026 at 11:51:45 PM

what keyboard is that

by s09dfhks

5/15/2026 at 1:00:16 AM

custom zoom75

by scottjg

5/14/2026 at 7:27:20 PM

> step one in most of my projects now is to ask AI about it. Maybe it’ll tell me something I don’t know.

Bingo. This is exactly how I use LLM. I like getting a gut check, seeing what the first recommendations are or if there is some deep flaw in what I think the approach is, and I almost never copy/paste whatever it spits back or just follow its instructions.

by Forgeties79

5/14/2026 at 5:46:20 PM

damn

by sharathdoes

5/14/2026 at 5:46:51 PM

lol, is there a list of games tho, which mac pro's can support

by sharathdoes

5/14/2026 at 7:35:00 PM

i mean porbly

by null-phnix

5/14/2026 at 10:02:23 PM

[dead]

by hacker_mar

5/14/2026 at 10:38:24 PM

[flagged]

by AIMC

5/14/2026 at 4:23:45 PM

Wow, phenomenal project and write-up, thanks for sharing it.

"no - not in any practical sense today, and "maybe" only in a very deep, borderline-impractical research sense."

This is why humans will always rule over crappy LLMs.

by moralestapia

5/14/2026 at 4:32:45 PM

Wait, why? This is exactly what I as a human would have said in this situation.

Or if you're referring to how the OP still decided to go ahead, I've seen AIs go ahead on impractical courses of action many times, and surprisingly succeed on some of them.

by falcor84

5/15/2026 at 12:20:17 AM

in fairness to the LLM critics, every time i ran into a minor speed bump in this project, it told me it probably just wasn't possible to get it to work well. the LLM did pretty actively discourage me from trying to get the whole thing working.

that said, since i was willing to ignore that aspect of it, it did accelerate getting the work done by a lot. it seems like it understands system programming really well, and did a good job navigating the qemu codebase. i have ~20 years of systems programming experience so i already knew what had to be done here. it didn't really guide the project much, but it did write a lot of the code.

by scottjg

5/14/2026 at 4:40:05 PM

And I see that you succeeded in not doing it.

Congrats! Each one got what they wanted :).

by moralestapia

5/14/2026 at 4:28:04 PM

I believe that LLM (and ML in general) tools really shine when they are developed and used AS tools.

Unfortunately, I also believe that market forces may push away from this direction, as LLM companies try to capture the value stream

by csours

5/14/2026 at 7:13:27 PM

Every major moment in my career has been me doing something that another human or clique of humans has said is impossible. If you think this is purely an LLM trait, I can't imagine you've tried to achieve anything important in the real world.

by lowbloodsugar

5/14/2026 at 4:31:34 PM

Exactly. AI psychosis is real.

Never let an AI tell you that you cannot do something practical for your own self for research, discovery or for fun.

The only thing that is close to impractical is expecting your non-technical friends or others to follow you without any incentive or benefit.

by rvz

5/14/2026 at 4:50:07 PM

> As much as I hate to admit it, step one in most of my projects now is to ask AI about it. Maybe it’ll tell me something I don’t know.

It’s these people, not the ones who refuse to use LLMs, who are as they say, “cooked”.

by nothinkjustai

5/14/2026 at 5:32:13 PM

The author of the blog is not cooked; they're raw. Their inventive, multi-chain setup was tuff. Their PCI passthrough and qemu patches were straight fire. Unless you can point to something you've done this impressive, you're just an unc bro.

by linkregister

5/15/2026 at 12:49:28 AM

Fuck you got me there. Unc out

by nothinkjustai