4/16/2026 at 6:57:31 AM
For most users who wanted to run LLMs locally, Ollama solved the UX problem. One command, and you are running models, even with the ROCm drivers, without ever knowing about them.
If llama.cpp provides such a UX, it failed terribly at communicating that, starting with the name. Llama.cpp: that's a C++ library! Ollama is the wrapper. That's the mental model. I don't want to build my own program! I just want to have fun :-P
by cientifico
4/16/2026 at 7:10:09 AM
Llama.cpp now has a GUI installed by default. It previously lacked this. Times have changed.
by anakaine
4/16/2026 at 7:21:34 AM
Having read the above article, I just gave llama.cpp a shot. It is as easy as the author says now, though definitely not documented quite as well. My quickstart:

brew install llama.cpp
llama-server -hf ggml-org/gemma-4-E4B-it-GGUF --port 8000
Go to localhost:8000 for the Web UI. On Linux it accelerates correctly on my AMD GPU, which Ollama failed to do, though of course everyone's mileage seems to vary on this.
by nikodunk
4/16/2026 at 7:55:22 AM
Was hoping it was that easy :) But I probably need to look into it some more.

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4'
llama_model_load_from_file_impl: failed to load model
Edit: @below, I used `nix-shell -p llama-cpp`, so it's not brew related. Could indeed be an older version! I'll check.
by teekert
4/16/2026 at 11:14:51 AM
As has been discussed in a few recent threads on HN, whenever a new model is released, running it successfully may need changes in the inference backends, such as llama.cpp.

There are two main reasons. One is the tokenizer, where new tokenizer definitions may be mishandled by older tokenizer parsers.
The second reason is that each model may implement tool invocations differently, e.g. by using different delimiter tokens and different text layouts for describing the parameters of a tool invocation.
Therefore the Gemma-4 models hit various problems during the first days after their release, especially the dense 31B model.
Solving these problems required both a new version of llama.cpp (also for other inference backends) and updates in the model chat template and tokenizer configuration files.
So anyone who wants to use Gemma-4 should update to the latest version of llama.cpp and to the latest models from Huggingface, because the latest updates were only a couple of days ago.
by adrian_b
4/16/2026 at 8:40:05 AM
I just hit that error a few minutes ago. I build my llama.cpp from source because I use CUDA on Linux. So I made the mistake of trying to run Gemma4 on an older version I had and got the same error. It's possible brew installs an older version which doesn't support Gemma4 yet.
by roosgit
4/16/2026 at 8:50:40 AM
Ah, it was indeed just that! I'm now on:

$ llama --version
version: 8770 (82764d8) built with GNU 15.2.0 for Linux x86_64
(From Nix unstable)
And this works as advertised, a nice chat interface, but no OpenAI API I guess, so no opencode...
by teekert
4/16/2026 at 9:00:46 AM
Check on the same port, there is an OpenAI API: https://github.com/ggml-org/llama.cpp/tree/master/tools/serv...
by homarp
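For anyone following along: llama-server exposes OpenAI-compatible endpoints on the same port as the Web UI. A minimal smoke test, assuming a server from the earlier quickstart is listening on port 8000 (the `model` field can be any string, since llama-server answers with whichever model it loaded):

```shell
# query llama-server's OpenAI-compatible chat completions endpoint
# (assumes llama-server is already running on port 8000)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

Tools that expect an OpenAI API (opencode included) can then be pointed at http://localhost:8000/v1 as their base URL.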
4/16/2026 at 9:33:47 AM
Good stuff, thanx!
by teekert
4/16/2026 at 8:45:42 AM
And that's exactly why llama.cpp is not usable by casual users. They follow the "move fast and break things" model. With Ollama, you just have to make sure you're getting/building the latest version.
by zozbot234
4/16/2026 at 10:20:51 AM
It's not possible to run the latest model architectures without 'moving fast'. The only thing broken here is that they are trying to use an old version with a new model.
by Eisenstein
4/16/2026 at 10:38:36 AM
And Ollama suffered the same fate when wanting to try new models.
by cyanydeez
4/16/2026 at 1:31:58 PM
What fate?
by Eisenstein
4/16/2026 at 5:18:20 PM
The impedance mismatch between when models are released and when Ollama and other servers are capable of running them.
by cyanydeez
4/16/2026 at 6:17:26 PM
I'm a bit unsure what that has to do with someone running an outdated version of the program while trying to use a model that is supported in the latest release.
by Eisenstein
4/16/2026 at 7:12:40 AM
While that might be true, for as long as its name is “.cpp”, people are going to think it’s a C++ library and avoid it.by OtherShrezzing
4/16/2026 at 7:38:40 AM
This is the first I'm learning that it isn't just a C++ library. In fact, the first line of the Wikipedia article is:
> llama.cpp is an open source software library
by eterm
4/16/2026 at 7:36:17 AM
It would make sense to just make the GUI a separate project, they could call it llama.gui.by RobotToaster
4/16/2026 at 4:58:57 PM
It would make even more sense to rename it to ollama, get a trademark for the name, and see how thieves complain they've been robbed :>
by gettingoverit
4/16/2026 at 9:02:56 AM
It is called LlamaBarn: https://github.com/ggml-org/LlamaBarn
by homarp
4/16/2026 at 11:19:26 AM
LlamaBarn is the macOS app, not the HTTP API server, which is "llama-server".

On non-Apple PCs, "llama-server" is what you use, and you can connect to it either with a browser or with an application compatible with the OpenAI API.
Perhaps using "llama-server" as the name of the project would have been less confusing for newbies than "llama.cpp".
I confess that when I first heard about "llama.cpp" I also thought that it was just a library and that I would have to write my own program in order to implement a complete LLM inference backend.
by adrian_b
4/17/2026 at 7:35:16 AM
This looks nice, but it is macOS only.
by mastermage
4/16/2026 at 7:43:02 AM
This is correct, and I avoided it for this reason. I did not have the bandwidth to get into any C++ rabbit hole, so I just used whatever seemed to abstract it away.
by figassis
4/17/2026 at 3:58:52 PM
Wait, it isn't? The name very strongly suggests that it is a text file containing C++ source code; is that not the case?
by marssaxman
4/16/2026 at 7:22:21 AM
Frankly, I think the CLI UX and documentation are still much better for Ollama.

It makes a bunch of decisions for you, so you don't have to think much to get a model up and running.
by mijoharas
4/16/2026 at 1:45:00 PM
I don't care about the GUI so much. Ollama lets me download, adjust, and run a whole bunch of models, and they are reasonably fast. Last time I compared it with Llama.cpp, finding out how to download and install models was a pain in Llama.cpp, and it was also _much_ slower than Ollama.
by zombot
4/19/2026 at 9:44:36 AM
Having picked it up recently and compared it to both Ollama and LM Studio: the models I was using ran faster, used less memory, and had a few extra config options available that the others hadn't implemented yet but were suggested by the model authors.

It was easy to install, run, and access the GUI to get going.
by anakaine
4/16/2026 at 3:02:24 PM
That is not true.

If you visit a model's page on Huggingface today, the site will show you the exact one-liner you need to run it on llama.cpp.
I didn't measure it, but both download and inference felt faster than Ollama. One thing that was definitely better was memory usage, which may be important if you want to run small models on an SBC.
by throwa356262
4/16/2026 at 12:09:15 PM
"LM Studio… Jan… Msty… koboldcpp…"

Plenty of alternatives listed. Can anyone with experience suggest the likely successor to Ollama? I have a Mac Mini but don't mind a command-line tool.
I think, as was pointed out, Ollama won because of how easy it is to set up and pull down new models. I would expect similar from a replacement.
by JKCalhoun
4/16/2026 at 10:54:11 PM
If you don't want to have to think about it, LM Studio is probably the best choice.
by Zetaphor
4/16/2026 at 8:38:16 AM
How about kobold.cpp then? Or LM Studio (I know it's not open source, but at least they give proper credit to llama.cpp)?

Re curation: they should strive not to integrate broken support for models and to avoid uploading broken GGUFs.
by samus
4/16/2026 at 8:41:40 AM
> For most users that wanted to run LLM locally, ollama solved the UX problem

This does not absolve them from the license violation.
by ekianjo
4/16/2026 at 9:28:30 AM
Agree. We can easily compare it with Docker. Of course people can use runc directly, but most people choose not to and use `docker run` instead.

And you can blame Docker in a similar manner. LXC existed for at least five years before Docker. But Docker was just much more convenient for an average user.
UX is a huge factor for adoption of technology. If a project fails at creating the right interface, there is nothing wrong with creating a wrapper.
by omgitspavel
4/16/2026 at 8:01:57 AM
> solved the UX problem.
> One command

Notwithstanding the fact that there's about zero difference between `ollama run model-name` and `llama-cli -hf model-name`, and that running things in the terminal is already a gigantic UX blocker (Ollama's popularity comes from the fact that it has a GUI), why are you putting the blame back on an open source project that owes you approximately zero communication?
by well_ackshually
4/16/2026 at 10:26:03 AM
> Notwithstanding the fact that there's about zero difference between `ollama run model-name` and `llama-cli -hf model-name`

There is a TON of difference. Ollama downloads the model from its own model library server, sticks it somewhere in your home folder with a hashed name and a proprietary configuration that doesn't use the built-in metadata specified by the model creator. So you can't share it with any other tool, you can't change parameters like temperature on the fly, and you are stuck with whatever quants they offer.
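As an aside, the hash-named blobs Ollama stores are, at least for the weights, still plain GGUF files underneath. One way to spot one is the 4-byte ASCII magic `GGUF` at the start of the file; the sketch below demonstrates the check on a synthetic stand-in file rather than a real model:

```shell
# GGUF files begin with the 4-byte ASCII magic "GGUF";
# create a synthetic stand-in file to demonstrate the check
printf 'GGUFdemo-not-a-real-model' > /tmp/demo.gguf
head -c 4 /tmp/demo.gguf   # prints: GGUF
```

Running the same `head -c 4` against a blob in Ollama's model directory is a quick way to tell whether a given file is the weights or just JSON metadata.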
by Eisenstein
4/16/2026 at 1:44:29 PM
This was my issue with the current client ecosystem. I get a .gguf file. I should be able to open my AI client of choice, go File -> Open, and select a .gguf. Same as opening a .txt file. Alternatively, if I have cloned an HF model, all AI clients should automatically check the HF cache folder.

The current offerings have interfaces to HuggingFace or some model repo. They get you the model based on what they think your hardware can handle and save it to %user%/App Data/Local/%app name%/... (on Windows). When I evaluated running locally, I ended up with 3 different folders containing copies of the same model in different directory structures.
It seems like HuggingFace uses %user%/.cache/..; however, some of the apps still get the HF models and save them to their own directories.
Those features are 'fine' for a casual user who sticks with one program. It seems designed from the start to lock you into their wrapper. In the end they are all using llama.cpp, ComfyUI, OpenVINO, etc. to abstract away the backend. Again, this is fine, but hiding the files from the user seems strange to me. If you're leaning on HF, then why not use their own .cache?
In the end I get the latest llama.cpp releases for CUDA and SYCL and run llama-server. My best UX has been with LM Studio and AI Playground. I want to try Local AI and vLLM next. I just want control over the damn files.
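For what it's worth, llama-server already supports the File -> Open workflow from the command line: pointing its `-m` flag at a local .gguf keeps you in full control of where the file lives (the path and port below are placeholders):

```shell
# serve a GGUF you manage yourself, stored wherever you like;
# the path and port are placeholders for your own setup
llama-server -m ~/models/my-model.gguf --port 8000
```

No downloader, no cache directory, no renaming: the model stays exactly where you put it.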
by alzoid
4/16/2026 at 2:58:59 PM
Check out Koboldcpp. The dev has a specific philosophy about things (minimal or no dependencies, no installers, no logs, don't do anything to the user's system they didn't ask for explicitly) that I find particularly agreeable. It's a single executable and includes the kitchen sink, so there is no excuse not to try it.
by Eisenstein
4/16/2026 at 6:12:01 PM
That's one of my major annoyances with the current state of local model infrastructure: all the cruft around what should be a simple matter of downloading and using a file. All these cache directories and file renaming and config files that point to all of these things. The special, bespoke downloading CLI tools. It's just kind of awkward from the point of view of someone who is used to simple CLI tools that do one thing. Imagine if sqlite3 required all of these paths and hashes and downloaders and configs rather than letting you just run: sqlite3 mydatabase.db
by ryandrake
4/16/2026 at 8:04:17 AM
> Ollama's popularity comes from the fact that it has a GUI

It's not the GUI, it's the curated model hosting platform. Way easier to use than HF for casual users.
by zozbot234
4/16/2026 at 8:59:22 AM
It also made it easy for casual users to think that they were running DeepSeek.
by kgwgk
4/16/2026 at 10:55:04 PM
LM Studio also offers curation, while giving credit to llama.cpp, and also easy search across all of Huggingface's GGUFs.
by Zetaphor
4/16/2026 at 9:06:49 AM
But if you're just a GUI wrapper, then at least attribute the library you created the GUI for.
by croes
4/16/2026 at 7:30:24 AM
But if Ollama is much slower, that's cutting into your fun, and you'll be having better fun with a faster GUI.
by FrozenSynapse
4/16/2026 at 8:47:30 AM
You've completely missed the point.
by UqWBcuFx6NV4r
4/16/2026 at 8:00:26 AM
Whip that llama! Oh wait, that's a different program.
by amelius
4/16/2026 at 8:11:13 AM
LOL
by mech422