alt.hn

3/8/2026 at 8:38:32 PM

Could ternary weights make 500B models runnable on consumer hardware?

https://opengraviton.github.io

by fatihturker

3/8/2026 at 8:38:32 PM

I've been thinking about whether extreme weight compression could fundamentally change the hardware requirements for large language models.

Most LLM deployments assume large GPU clusters mainly because of memory constraints (VRAM / RAM). But if weights are aggressively compressed — for example using ternary representations ({-1, 0, +1}) — the memory footprint drops dramatically.

In theory this could reduce model size by roughly an order of magnitude compared to FP16 weights.
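That order-of-magnitude claim is easy to sanity-check. A minimal sketch, counting packed weights only (quantization scales, activations, and KV cache are ignored):

```python
# Back-of-envelope memory math for a 500B-parameter model.
# Weights-only; scales, activations, and KV cache are ignored.

PARAMS = 500e9

def gib(n_bytes):
    return n_bytes / 2**30

fp16 = PARAMS * 2          # 16 bits per weight
two_bit = PARAMS * 2 / 8   # naive ternary packing: 2 bits per weight
base3 = PARAMS / 5         # 5 trits per byte (3**5 = 243 <= 256), ~1.6 bits/weight

print(f"FP16:          {gib(fp16):8.1f} GiB")
print(f"2-bit packed:  {gib(two_bit):8.1f} GiB")
print(f"base-3 packed: {gib(base3):8.1f} GiB")
```

That works out to roughly 930 GiB for FP16 versus about 93 GiB with base-3 packing: a 10× reduction, though still far more than any consumer GPU's VRAM, which is where streaming comes in.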

If you combine that with:

• dynamic sparsity
• memory-mapped weight streaming from NVMe
• speculative decoding
• fast tensor unpacking on GPU/Metal

it raises an interesting possibility:

Could extremely large models (100B–500B+) become runnable on consumer machines, even if they stream weights from SSD instead of holding everything in RAM?

Of course bandwidth, latency, and compute efficiency become major bottlenecks.
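To make the streaming idea concrete, here is a minimal sketch assuming a hypothetical on-disk format: each layer stored as 2-bit codes (0 → -1, 1 → 0, 2 → +1), four weights packed per byte. The file layout and function names are illustrative, not any real runtime's format:

```python
# Sketch of SSD-streamed ternary weights via memory mapping.
# Assumes a hypothetical format: 2-bit codes, weight i in
# bits 2i..2i+1 of its byte; code 3 is unused.
import numpy as np

LOOKUP = np.array([-1, 0, 1, 0], dtype=np.int8)

def unpack_ternary(packed: np.ndarray) -> np.ndarray:
    """Expand 4 packed 2-bit codes per byte into int8 {-1, 0, +1}."""
    codes = np.stack([(packed >> s) & 0b11 for s in (0, 2, 4, 6)], axis=-1)
    return LOOKUP[codes.reshape(-1)]

def stream_layer(path: str, offset: int, n_weights: int) -> np.ndarray:
    """Memory-map one layer's packed weights; the OS pages them in
    from NVMe on demand instead of loading the whole file."""
    n_bytes = n_weights // 4
    packed = np.memmap(path, dtype=np.uint8, mode="r",
                       offset=offset, shape=(n_bytes,))
    return unpack_ternary(np.asarray(packed))
```

The appeal of `mmap` here is that the OS page cache keeps hot layers resident for free, so repeated decode steps only hit the SSD for pages that were evicted.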

I'm curious if anyone here has experimented with:

• ternary / ultra-low-bit networks
• SSD-streamed inference
• sparse LLM architectures
• MoE-style routing combined with quantization

Would love to hear thoughts on whether this approach is realistic or fundamentally limited by bandwidth and compute.

by fatihturker

3/9/2026 at 2:37:04 PM

My attempts to run ternary encodings from Unsloth with llama.cpp on ROCm failed miserably. Either ggml or ROCm simply can't run them on gfx1201 at this time, and the CPU isn't fast enough.

by MrDrMcCoy

3/9/2026 at 5:19:36 PM

I am no expert on the matter, but I always thought ternary weights should be part of the neural net's nature, trained that way from the start, rather than applied as a compression measure at inference time. Has any training been done on ternary-weight models that is proven to be effective?

by jeanloolz

3/8/2026 at 9:02:19 PM

One question I’m particularly curious about:

At what point does SSD bandwidth become the main bottleneck for inference when weights are heavily compressed? If anyone has experience with streaming layers or low-bit runtimes, would love to hear how you approach it.
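As a rough upper bound: if every generated token has to read all active weights from disk with no cache hits, decode speed is just SSD bandwidth divided by bytes touched per token. A sketch with illustrative numbers (not benchmarks):

```python
# Upper bound on SSD-streamed decode speed, assuming each token
# reads all active weights from disk and nothing is cached.

def tokens_per_sec(model_bytes: float, ssd_gb_s: float,
                   active_fraction: float = 1.0) -> float:
    bytes_per_token = model_bytes * active_fraction
    return ssd_gb_s * 1e9 / bytes_per_token

model = 100e9   # ~100 GB of packed ternary weights (500B params, ~1.6 bits each)
nvme = 7.0      # GB/s, a fast PCIe 4.0 drive

print(f"dense:      {tokens_per_sec(model, nvme):.3f} tok/s")
print(f"10% active: {tokens_per_sec(model, nvme, 0.1):.2f} tok/s")
```

Dense streaming lands around 0.07 tok/s, which is why sparsity or MoE routing (only a fraction of weights active per token) seems essential for this to be usable at all.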

by fatihturker

3/8/2026 at 8:58:19 PM

So MS-BitNet advanced next-gen, or what?

by LargoLasskhyfv

3/8/2026 at 9:05:06 PM

It’s inspired by ideas similar to BitNet, but I wouldn’t call it “next-gen BitNet.” BitNet focuses mainly on model representation, while OpenGraviton is about inference — pushing the limits of running large models efficiently on consumer hardware. Similar motivation (more efficient models), different layer (inference engine).

by fatihturker
