1/23/2026 at 3:50:50 PM
CPU? Good luck.
by throwaway2027
1/24/2026 at 10:39:21 AM
WDYM? I don't want to train a model, only run inference. From what I know, it should be much cheaper to buy "normal" RAM plus a decent CPU than a GPU with a similar amount of VRAM. The bottleneck for inference is fitting a good enough model into memory. An 80B-param model at 8-bit quantization works out to roughly ~90GB of RAM, so 2x64GB DDR4 sticks are probably the most price-efficient option. The question is: is there any model capable enough to consistently handle an agentic workload?
by baalimago
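The arithmetic in the comment above can be sketched like this; the 10% overhead figure for KV cache and runtime buffers is an assumption, not a measured number:

```python
# Back-of-envelope memory estimate for quantized LLM inference.
# Overhead fraction (KV cache, activations, runtime buffers) is an
# assumed fudge factor, not a measured value.

def model_memory_gb(params_billion: float, bits_per_param: int,
                    overhead_fraction: float = 0.1) -> float:
    """Rough RAM/VRAM needed to hold the weights plus overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead_fraction) / 1e9

# 80B parameters at 8 bits/param = 80 GB of weights;
# with ~10% overhead this lands near the ~90GB mentioned above.
print(f"{model_memory_gb(80, 8):.0f} GB")
```

At 4-bit quantization the same model would need roughly half that, which is why aggressive quantization is popular for CPU inference.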