> Yeah it's just a semantic pet peeve. Let me ask you this: What is a "Language Model", if this is a "Large Language Model"? Inversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"? Sure, we could do it like we did radio frequencies! Most of what we use are "High Frequency" and above... Very High Frequency, Ultra High Frequency, Super High Frequency, Extremely High Frequency.
> In my own very humble opinion, it becomes "Large" when it's out of reach of non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs cut off).
So the definition shifts over time based on the market availability of RAM? And can also go backwards? I can't really see anyone bothering to look up the state of the GPU market in order to determine correct terminology whenever they want to talk about this stuff (or interpret old comments, or...).
That also decouples the terminology from actual capabilities, which is what people are generally more interested in. GPT-3 was a "large" language model at the time of its release. However, the seemingly much more capable Gemma 4 would have been a large language model back when GPT-3 was in use, but isn't a large language model right now.
I kinda question the arbitrary line drawn here too--32GB VRAM? Where I am that's a ~$5-6k problem. I'm not sure I'd call that a "consumer" product any more than the $20k data center cards regardless of the OEM intent, but we could argue semantics on that one too.
Fundamentally, defining it this way just seems kind of... useless? It's borderline a meaningless modifier already. This just defines it in a way that's so complex to use or interpret that it's just meaningless in a different way.
For what it's worth, I'd vote to use "large" to mean "big enough to be general purpose", differentiating these models from the small, specialized models that came before.
> And btw, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did; it's an order of magnitude away from being feasible.
Yeah, was mostly being silly--tried to allude to that with the "intergenerational project" comment toward the end there.
Though I _did_ try doing some inference on CPU, which is how I found out that these Xeons I have don't implement AVX-512. Surprisingly, Gemma 4 (2B) was able to spit out a solid 13-14 tok/s! Was expecting more like... 0.13.
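For anyone who wants to check their own CPU before trying this: on Linux the supported instruction sets show up on the `flags` line of `/proc/cpuinfo`. A minimal sketch (the `has_avx512` helper name is mine; it just inspects a flags string, so it runs anywhere):

```python
# Check a CPU feature-flag string for AVX-512 support.
# On Linux, the real flags live on the "flags:" line of /proc/cpuinfo.

def has_avx512(flags: str) -> bool:
    """True if the foundational AVX-512 flag (avx512f) is present."""
    return "avx512f" in flags.split()

# Abbreviated example flag strings:
modern = "fpu sse sse2 avx avx2 avx512f avx512dq avx512bw"
older_xeon = "fpu sse sse2 avx avx2"

print(has_avx512(modern))      # → True
print(has_avx512(older_xeon))  # → False
```

On a real Linux box, `grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u` does the same job from the shell.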