alt.hn

4/18/2026 at 12:37:53 AM

Show HN: Llama.cpp Tutorial 2026: Run GGUF Models Locally on CPU and GPU

by anju-kushwaha

4/18/2026 at 3:09:36 PM

I've been trying to run models locally and effectively followed this guide (before the guide existed), but haven't had any success. llama.cpp builds fine, and then when I start it up, it just spins its progress bar indefinitely. I left it sitting for three days and nada.

Running on an 8-core, 12 GB RAM VM with an AMD RX 5500 XT (8 GB) passed through. ROCm is built, and llama.cpp is built with the correct flags.

What am I missing?
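
One rough way to bisect a hang like this is to rule the GPU path in or out first. This sketch assumes a recent llama.cpp build (the llama-cli binary and the -ngl flag; older builds name the binary main), and the model path is just a placeholder:

    # CPU-only run: -ngl 0 keeps every layer on the CPU
    ./build/bin/llama-cli -m ./models/model.gguf -p "test" -n 16 -ngl 0

    # if that finishes, retry with layers offloaded to the RX 5500 XT
    ./build/bin/llama-cli -m ./models/model.gguf -p "test" -n 16 -ngl 99

If the CPU-only run completes but the offloaded run hangs, the problem is most likely on the ROCm/HIP side (the RX 5500 XT is a consumer RDNA1 card with patchy official ROCm support) rather than in llama.cpp itself.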

by CableNinja

4/18/2026 at 3:41:50 PM

Logs to troubleshoot, for starters.
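
For example (a rough sketch: rocminfo and rocm-smi ship with ROCm, the llama-cli flags are from a recent llama.cpp build, and the model path is a placeholder):

    # confirm ROCm actually sees the passed-through card inside the VM
    rocminfo | grep -i gfx
    rocm-smi

    # capture llama.cpp's device-detection and offload output
    ./build/bin/llama-cli -m ./models/model.gguf -p "test" -n 16 -ngl 99 \
        2>&1 | tee llama.log

The start of llama.log should show which backend was compiled in and how many layers were actually offloaded to the GPU, which narrows down where the hang happens.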

by washadjeffmad
