3/8/2026 at 8:38:32 PM
I've been thinking about whether extreme weight compression could fundamentally change the hardware requirements for large language models. Most LLM deployments assume large GPU clusters mainly because of memory constraints (VRAM / RAM). But if weights are aggressively compressed — for example using ternary representations ({-1, 0, +1}) — the memory footprint drops dramatically.
In theory a ternary weight needs only log2(3) ≈ 1.6 bits (2 bits with simple packing), versus 16 bits for FP16 — roughly an order of magnitude smaller.
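As a concrete sketch of where that factor comes from: the snippet below packs ternary weights at 2 bits each, four per byte (an 8x reduction vs FP16; base-3 packing would get closer to the 1.6-bit ideal). The function names are mine, not from any particular library.

```python
import numpy as np

def pack_ternary(w):
    """Pack ternary weights {-1, 0, +1} at 2 bits each, 4 per byte."""
    codes = (w + 1).astype(np.uint8)           # map {-1, 0, 1} -> {0, 1, 2}
    codes = np.pad(codes, (0, (-len(codes)) % 4)).reshape(-1, 4)
    return (codes[:, 0]
            | (codes[:, 1] << 2)
            | (codes[:, 2] << 4)
            | (codes[:, 3] << 6)).astype(np.uint8)

def unpack_ternary(packed, n):
    """Inverse of pack_ternary for the first n weights."""
    codes = (packed[:, None] >> np.array([0, 2, 4, 6])) & 0b11
    return codes.reshape(-1)[:n].astype(np.int8) - 1

w = np.random.choice([-1, 0, 1], size=1000).astype(np.int8)
packed = pack_ternary(w)
assert np.array_equal(unpack_ternary(packed, len(w)), w)
# 1000 FP16 weights: 2000 bytes; packed ternary: 250 bytes
```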
If you combine that with:
• dynamic sparsity
• memory-mapped weight streaming from NVMe
• speculative decoding
• fast tensor unpacking on GPU/Metal
it raises an interesting possibility:
Could extremely large models (100B–500B+) become runnable on consumer machines, even if they stream weights from SSD instead of holding everything in RAM?
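The "stream from SSD instead of holding everything in RAM" part is essentially what `mmap` gives you for free: the OS pages weight blocks in from NVMe on first touch and evicts them under memory pressure. A minimal self-contained sketch with a toy weight file (the flat per-layer layout is an assumption for illustration):

```python
import numpy as np

N_LAYERS = 4
LAYER_BYTES = 1024 * 1024  # 1 MiB per layer in this toy example

# Build a toy weight file so the sketch is runnable on its own.
with open("weights.bin", "wb") as f:
    f.write(bytes(N_LAYERS * LAYER_BYTES))

# mode="r" maps the file read-only; nothing is loaded into RAM up front.
weights = np.memmap("weights.bin", dtype=np.uint8, mode="r")

def layer_view(i):
    """Zero-copy view of layer i; pages fault in from disk on access."""
    return weights[i * LAYER_BYTES:(i + 1) * LAYER_BYTES]

for i in range(N_LAYERS):
    block = layer_view(i)   # touching `block` streams it from storage
    _ = int(block[0])       # stand-in for dequantize + matmul
```

This is the same mechanism llama.cpp uses for its default weight loading; the open question is whether dequantize + compute can hide the page-in latency for layers that aren't resident.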
Of course bandwidth, latency, and compute efficiency become major bottlenecks.
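The bandwidth bottleneck is easy to put numbers on. For a dense model, every weight must be read per token, so SSD throughput sets a hard ceiling on tokens/sec (the figures below are illustrative, not measurements):

```python
# Back-of-the-envelope: SSD bandwidth as the ceiling on dense decoding speed.
params = 100e9            # 100B parameters
bits_per_weight = 2       # packed ternary
nvme_bps = 7e9            # ~7 GB/s sequential read (PCIe 4.0 NVMe class)

model_bytes = params * bits_per_weight / 8     # 25 GB
tokens_per_sec = nvme_bps / model_bytes        # every weight read per token
print(f"{model_bytes / 1e9:.0f} GB model -> {tokens_per_sec:.2f} tok/s")
# prints "25 GB model -> 0.28 tok/s"
```

This is why the sparsity/MoE items above matter: if routing activates only ~10% of weights per token, the same arithmetic gives roughly a 10x higher ceiling, which starts to look usable.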
I'm curious if anyone here has experimented with:
• ternary / ultra-low-bit networks
• SSD-streamed inference
• sparse LLM architectures
• MoE-style routing combined with quantization
Would love to hear thoughts on whether this approach is realistic or fundamentally limited by bandwidth and compute.
by fatihturker
3/9/2026 at 2:37:04 PM
My attempts to run ternary encodings from Unsloth with llama.cpp on ROCm failed miserably. Either ggml or ROCm simply can't run it at this time on gfx1201, and CPU isn't fast enough.
by MrDrMcCoy