alt.hn

3/1/2026 at 1:24:54 AM

Running a One Trillion-Parameter LLM Locally on AMD Ryzen AI Max+ Cluster

https://www.amd.com/en/developer/resources/technical-articles/2026/how-to-run-a-one-trillion-parameter-llm-locally-an-amd.html

by mindcrime

3/1/2026 at 2:11:50 AM

Cool that it's possible, but the performance characteristics are basically unusable. For an 8192-token prompt they report ~1.5 minutes to first token and 8.30 tk/s from there. For context, ChatGPT is typically <<1s TTFT and ~50 tk/s.

by ibeckermayer

3/1/2026 at 12:54:04 PM

I've never understood the obsession with token/s. I'm fine with asking a question and then going on to another task (which might be making coffee).

Even with a cloud-based LLM where the response is pretty snappy, I still find that I wander off and return when I am ready to digest the entire response.

by JKCalhoun

3/1/2026 at 8:53:50 PM

Your workflow is unusual. Oftentimes there is a vigorous back and forth, or a desired output like code generation, where a low tk/s drastically affects UX and user productivity.

But the real kicker here is the 90s TTFT: you ask a question and see nothing for a full minute and a half.

by ibeckermayer

3/1/2026 at 1:59:21 PM

You are fine with it, but maybe the rest of the world is not. Anyway, to compare performance across benchmarks we need metrics, and this is one of the basic metrics to measure.

by nitinreddy88

3/1/2026 at 8:44:33 AM

Given that the APU only has 4 memory channels, isn't this setup comically starved for bandwidth? By the same token, wouldn't you expect performance to scale approximately linearly as you add additional boxes? And wouldn't you be better off with smaller nodes (i.e. less RAM and CPU power per box)?

If I'm right about that, then for somewhere in the vicinity of $30k (24x the Max 385 model) you should be able to achieve ChatGPT performance.
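A back-of-envelope sketch of that scaling claim. All the specific numbers here are assumptions, not from the article: ~32B active parameters per token (typical for a 1T-class MoE) at 8-bit weights, ~256 GB/s of LPDDR5X bandwidth per Strix Halo node, and ideal linear scaling across nodes (i.e. tensor parallelism with negligible interconnect cost):

```python
# Memory-bandwidth-bound decode throughput estimate.
# Assumed, not sourced: 32B active params, 1 byte/param, 256 GB/s per node,
# and perfectly linear scaling as nodes are added.

def decode_tokens_per_sec(nodes, bw_per_node_gb_s=256,
                          active_params_billion=32, bytes_per_param=1):
    aggregate_bw = nodes * bw_per_node_gb_s * 1e9           # bytes/s of RAM reads
    bytes_per_token = active_params_billion * 1e9 * bytes_per_param
    return aggregate_bw / bytes_per_token

print(decode_tokens_per_sec(1))    # 8.0 tok/s for a single node
print(decode_tokens_per_sec(24))   # 192.0 tok/s if scaling were perfectly linear
```

The single-node figure lands near the ~8.3 tk/s the article reports, which is consistent with decode being memory-bandwidth bound; whether 24 boxes actually scale linearly depends on the parallelism scheme and interconnect.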

by fc417fc802

3/1/2026 at 9:06:43 PM

Good thought... I think you're wrong because the dominant factor is bandwidth over the interconnect. In this case they're using 5 Gbps Ethernet; compare that to 80-120 Gbps for a Thunderbolt 5-connected Mac Studio cluster: https://www.youtube.com/watch?v=bFgTxr5yst0

by ibeckermayer

3/1/2026 at 9:52:40 PM

> I think you're wrong because the dominant factor is bandwidth over the interconnect.

Is it? Why do you say that? I understand inference to be almost entirely bottlenecked on memory bandwidth.

There are n^2 weights per layer but only n state values in the activation vector passed between layers. Transmitting a few thousand (or even tens of thousands) of fp values does not require a notable amount of bandwidth by modern standards.

Training is an entirely different beast of course. And depending on the workload latency can also impact performance. But for running inference with a single query from a single user I don't see how inter-node bandwidth is going to matter.
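A rough sizing of that argument. The specific numbers are assumptions for illustration, not from the article: hidden size n = 8192, fp16 activations, a 4-node pipeline split (3 inter-node hops per token), and the article's 5 Gb/s Ethernet:

```python
# What crosses the wire per generated token in a pipeline-parallel split,
# under assumed (not sourced) model and cluster dimensions.

HIDDEN = 8192                    # assumed hidden size n
BYTES_PER_VALUE = 2              # fp16 activations
HOPS = 3                         # node boundaries in an assumed 4-node pipeline
LINK_BYTES_PER_SEC = 5e9 / 8     # 5 Gb/s Ethernet in bytes/s

activation_bytes = HIDDEN * BYTES_PER_VALUE              # per hop, per token
wire_time_s = HOPS * activation_bytes / LINK_BYTES_PER_SEC

print(f"{activation_bytes} bytes per hop")                   # 16384 bytes (~16 KB)
print(f"{wire_time_s * 1e6:.0f} us on the wire per token")   # ~79 us
```

At ~8 tk/s a token takes ~125 ms, so tens of microseconds of transfer time is noise. This ignores per-hop latency and prefill (where much larger activation batches move at once), but it supports the point that single-stream decode is bottlenecked on local memory bandwidth, not the link.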

by fc417fc802

3/1/2026 at 2:31:49 AM

That’s pretty awesome!

Though only 5gig Ethernet? Can’t they do usb-c / thunderbolt 40 Gb/s connections like Macs?

by elcritch

3/1/2026 at 9:04:56 AM

It's sad that NDA fetishist Broadcom has a de facto monopoly on PCIe fabric switches. Otherwise we would have had functional open-source drivers for at least the simpler topologies for a while now, and could set up cheap FNN topologies by using the (usually NVMe-targeted) bifurcation support on hosts to get several x4 ports, with only a comparatively cheap retimer out into "mini SAS HD" (the square-shaped 4-lane connectors) or QSFP+ ports. Generic DAC cables for those standards give a few meters of reach; even Skylake-era SAS cables (nominally 12 GT/s, versus 16 GT/s for PCIe 4.0) should typically manage PCIe 4. That's just under 64 Gbit/s from each link, and typical desktop/gaming systems deliver 3~5 such links without complaint next to a dGPU (that one at fewer than full lanes).
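The "just under 64 Gbit/s per link" figure follows from standard PCIe 4.0 line rates; a quick check (textbook PCIe parameters, nothing assumed beyond the comment's x4 width):

```python
# Raw usable bandwidth of one bifurcated x4 PCIe 4.0 link.
GT_PER_LANE = 16          # GT/s per lane, PCIe 4.0
LANES = 4                 # one bifurcated x4 port
ENCODING = 128 / 130      # 128b/130b line-coding efficiency

gbit_per_s = GT_PER_LANE * LANES * ENCODING
print(f"{gbit_per_s:.1f} Gbit/s")   # 63.0 Gbit/s
```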

by namibj

3/1/2026 at 7:17:10 AM

> Though only 5gig Ethernet? Can’t they do usb-c / thunderbolt 40 Gb/s connections like Macs?

Does the network speed matter that much when TFA talks about outputting a few tens of tokens per second? Ain't 5 Gbit/s plenty for that? (I understand the need to load the model but that'd be local already right?)

by TacticalCoder

3/1/2026 at 11:58:02 AM

Running inference requires sharing intermediate matrix results between nodes. Faster networking speeds that up.

by elcritch

3/1/2026 at 12:14:02 PM

I read (but cannot find the source anymore) that the information sent from layer to layer is minimal. The actual matrix work happens within a layer; they are not doing matrix multiplication over the network (that would be insane latency-wise).

by wokkel

3/1/2026 at 6:51:00 AM

I really wonder if AMD is going to keep getting walloped on the interconnect or if they'll start upping what's available to consumers, at some point.

by jauntywundrkind

3/1/2026 at 2:56:29 AM

I set up ollama today and can barely run a 3b parameter model before the lag makes it unbearable.

How much is one of these gonna run me?

by tills13

3/1/2026 at 5:41:07 AM

I've been pretty happy with my Framework Desktop, though I managed to snag it before RAM prices shot through the roof. Currently, a tricked out model is around $2500.

https://frame.work/desktop

Mine sees more use as a Steam machine, but it can run decently large models. Ollama was trivial to get working, and qwen3-coder-next spits out paragraphs of text/code in seconds. I don't really do anything with that, but it's fun to mess around with. (LLMs are still pretty bad at assembly language.)

by zeta0134

3/1/2026 at 5:49:03 AM

You can buy a 128GB mainboard from framework for $2300, so maybe somewhere a bit over $9k by the time you've got power, storage, cables, racks (they sell those too). I was thinking about getting into one of these Strix Halo setups but decided to go a slightly different route with a lot higher TDP, better throughput, and a bit less VRAM.

https://frame.work/products/framework-desktop-mainboard-amd-...

by jcgrillo

3/1/2026 at 2:04:48 AM

The setup was around $10k, but maybe more now with mem/SSD prices.

This is a good list. I like my Beelink a lot; my Minisforum likes to turn itself off every couple of weeks, not sure why yet.

https://www.techradar.com/pro/there-are-15-amd-ryzen-ai-max-...

---

Performance is pretty bad (<10 tk/s) and context is quite limited. Still good to see progress.

Prompt Size (tokens) | TFT (s) - Flash Attention Disabled | TFT (s) - Flash Attention Enabled
4096                 | 53.7                               | 39.7
8192                 | Out Of Memory (OOM)                | 90.5
16384                | Out Of Memory (OOM)                | 239.1

by verdverm

3/1/2026 at 4:48:13 AM

> Minisforum likes to turn itself off every couple of weeks, not sure why yet

AFAICT, the answer is "because Minisforum". I don't know if they have a design principle that they should run their systems near the edge of the thermal envelope or what, but Minisforum is the only brand I've had consistent trouble with stability on. My last one got to where it stopped booting altogether, just looped. Since then I've written off Minisforum as a brand, just not worth the hassle.

by rootusrootus

3/1/2026 at 6:16:00 AM

I would try adding a fan or other cooling; my guess is that the CPU is handling thermals properly but something else is not.

by shrubble

3/1/2026 at 1:58:58 PM

Replying to one, but I appreciate all three replies, thank you! Thermals is a good call; I have some CPU paste around.

by verdverm

3/1/2026 at 7:23:09 AM

I put one of those notebook cooling pads underneath, that usually does the trick

by gmerc

3/1/2026 at 2:50:36 AM

Framework has gone fully down the Apple consumerization route of unrepairability and unupgradeability: a nonstandard machine, soldered-on RAM, and no meaningful PCIe slots. There's only the superficial appearance of longevity and future-proofing when it's really yet another silo. There's no way to add IB, FC, or 100/400 GbE NICs to these machines. 5 GbE is a joke. Non-ECC RAM is a joke.

by burnt-resistor