alt.hn

5/5/2026 at 4:09:17 AM

Train Your Own LLM from Scratch

https://github.com/angelos-p/llm-from-scratch

by kristianpaul

5/5/2026 at 5:22:08 AM

If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers this curriculum in much more depth, introduces a lot of theoretical aspects (scaling laws, intuitions) and systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/

by jvican

5/5/2026 at 5:36:41 AM

How does one get the lectures? I don't see an option for any lectures.

by the_real_cher

5/5/2026 at 7:04:22 AM

One goes to YouTube and searches for cs336?

by azangru

5/5/2026 at 7:30:37 AM

I did it back in the day when fast.ai was relatively new, with ULMFiT. This must have been when BERT was SOTA. The architecture lets you train a base model and specialize it with a head. I used the entirety of Wikipedia for the base and then some GBs of tweets I had collected through the firehose. I had access to a lab with 20 game-dev computers, roughly RTX 2080s. One training cycle took about half a day for the tokenized Wikipedia, so I hyperparameter-tuned by running a different setting on each computer and then moving on with the winner as the starting point for the next day. It was always fun to come to work the next morning and check the results.

The engineering was horrible and very ad-hoc but I learned a lot. Results were ok-ish (I classified tweets) but it gave me a good perspective on the sheer GPU power (and engineering challenges) one would need to do this seriously. I didn't fully grasp the potential of generating output but spent quite some time chuckling at generated tweets (was just curious to try it).

by kriro

5/5/2026 at 7:18:14 AM

Coincidentally, I just started on Build a Large Language Model (From Scratch), a repo/book/course by Sebastian Raschka [0][1][2]. Maybe it's a good problem to have, deciding which learning resource to use.

[0] https://github.com/rasbt/LLMs-from-scratch

[1] https://www.manning.com/books/build-a-large-language-model-f...

[2] https://magazine.sebastianraschka.com/p/coding-llms-from-the...

by JoeDaDude

5/5/2026 at 10:36:25 AM

I really enjoyed the book. Great for people who want to understand the real nuts and bolts and have worked examples of all of the calculations.

by gchadwick

5/5/2026 at 6:16:54 AM

Been doing it since the day I was born. The beginnings were hard but I’m getting there.

by NSUserDefaults

5/5/2026 at 7:42:54 AM

You've actually been primarily training a physics model, with an LLM attached to it.

by hliyan

5/5/2026 at 12:44:03 PM

Good point, and I'm actually not sure that there is a clear dividing line. I expect that once we achieve capable world models and are able to analyze their internals, we'll find that the prediction mechanisms for purely physical and for verbal/behavioral responses to the agent's actions are at least partially colocated.

As particular motivation for this intuition: I expect there was evolutionary pressure to adapt our defense mechanisms for predicting the movements of predators and prey to handle human opponents as well.

by falcor84

5/5/2026 at 9:22:53 AM

I would start with linear algebra, some calculus and statistics, and understand how a neural network works (which really is just one type of ML), then learn the basics of CNNs and RNNs, then transformers and LLMs.

But that is just me. I think it's more useful to understand the hows and whys before training an LLM.
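In that spirit, here's a minimal sketch of the "how": a single linear neuron fit by hand-derived gradient descent, using only the Python standard library. The target function and hyperparameters are arbitrary illustrations, not anything from the course.

```python
# A single linear neuron trained by hand-derived gradient descent on y = 3x + 2.
# No frameworks: the point is seeing the calculus (loss -> gradient -> update).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3 * x + 2 for x in xs]   # noiseless targets

w, b = 0.0, 0.0                # parameters to learn
lr = 0.05                      # learning rate
n = len(xs)

for step in range(2000):
    # Mean squared error: L = (1/n) * sum((w*x + b - y)^2)
    # Gradients follow from the chain rule:
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db
```

After training, w and b land on the generating values (3 and 2); everything in a modern LLM is this same loop, scaled up enormously and with far more parameters per gradient step.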

by DeathArrow

5/5/2026 at 7:16:30 AM

Context: he is one of the MLX developers, a skilled ML researcher.

by antirez

5/5/2026 at 12:53:48 PM

Source? I think that's not correct.

by thrww26

5/5/2026 at 1:19:31 PM

Google the name of the author.

by antirez

5/5/2026 at 1:54:38 PM

I did. I think you are confusing him for someone else, so provide a source for your claim.

If you want to be snarky, it helps if you are right.

by thrww26

5/5/2026 at 2:06:26 PM

You are right, sorry the name is very similar and I thought it was: https://x.com/angeloskath

by antirez

5/5/2026 at 2:15:01 PM

I don't think GP was being snarky, how else would you expect someone to cite a name he recognized?

by dooglius

5/5/2026 at 4:47:58 PM

He was being snarky. He does actually end up citing who he thought the author was, and in doing so realized he was wrong.

He could have done that initially instead of saying "Google the name of the author."

by Maxatar

5/5/2026 at 6:05:40 PM

It was not my best (nor normal) behavior, but the point in this case is that the OP offered very little in his rebuttal. A more contextualized reply would have improved mine as well. I believe the person who published this LLM course on GitHub actually works at ElevenLabs, as Google shows. So the reply could have been: "Are you sure? I googled and apparently he works for ElevenLabs." That would have triggered a different reply. So I was not polite enough, and I said sorry, but given the exchange, saying "google it" was not terrible; it was exactly how I thought I had found it (I googled the wrong name, citing MLX, plus X, and Google returned the wrong result). So it was a matter of "I did it this way".

by antirez

5/5/2026 at 6:46:33 AM

This looks like an exact copy of this Andrej Karpathy video ( https://youtu.be/kCc8FmEb1nY ) but in written format, am I wrong?

by ofsen

5/5/2026 at 12:58:06 PM

The page describes its relationship to nanogpt.

...nanoGPT targets reproducing GPT-2 (124M params) and covers a lot of ground. This project strips it down to the essentials and scales it to a ~10M param model that trains on a laptop in under an hour...

by mellosouls

5/5/2026 at 9:38:04 AM

Yes, you are.

by drcongo

5/5/2026 at 5:43:43 PM

> A hands-on workshop where you write every piece of a GPT training pipeline yourself, understanding what each component does and why.

I see torch in the dependencies, so most likely tensors and backpropagation are not implemented but taken as given. Does it still count as writing "from scratch"?

I did something similar (in Rust, AI-assisted), but I restricted myself to using no dependencies, only the standard library. As a result, I had to implement much more: tensor design, a kernels concept, a simple gradient-descent optimizer, even a custom JSON parser, and CPU data-parallelism abstractions similar to rayon. It was quite fun when I got everything wired and working - soo sloooow, but working.
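For readers curious what "not taking backpropagation as given" involves, here is a micrograd-style sketch in Python (not the commenter's Rust code): a scalar value that records its computation graph and applies the chain rule backwards, which is the core piece torch.autograd normally provides.

```python
class Value:
    """A scalar that records the ops applied to it so gradients can flow back."""
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other), (other.data, self.data))

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, g in zip(v._parents, v._local_grads):
                p.grad += v.grad * g

a, b = Value(2.0), Value(-3.0)
y = a * b + a          # y = ab + a, so dy/da = b + 1 = -2 and dy/db = a = 2
y.backward()
```

Generalize this from scalars to tensors, add a few more ops, and you have the kernel of an autograd engine; the rest (as the commenter found) is performance work.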

by eiskalt

5/5/2026 at 5:28:52 AM

Train your LM from scratch*

I doubt you have a machine big enough to make it "Large".

by baalimago

5/5/2026 at 9:28:16 AM

If you have a credit card with a "normal" ceiling you probably can rent enough on neocloud providers like HuggingFace or Mistral Forge.

I'm not saying it's worth it but you don't need to buy a GPU yourself to be able to train.

by utopiah

5/5/2026 at 11:58:06 AM

This is the whole point of Karpathy's nanochat which OP refers to, to train a GPT-2 level LLM for under $100, renting an 8xH100 VM.

by busfahrer

5/5/2026 at 6:06:44 AM

You can fully train a 1.6b model on a single 3090. That’s a reasonably big model.

by mips_avatar

5/5/2026 at 6:41:57 AM

you can train it, but not fully

by electroglyph

5/6/2026 at 5:16:01 AM

I trained Karpathy's d28 1.6B nanochat on a 3090. It took an extremely long time, but I did it.
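A rough back-of-envelope suggests why a 3090 (24GB) is so tight for this. The 16 bytes/param breakdown below is a common assumption for mixed-precision AdamW training, not a measured figure, and it ignores activations entirely.

```python
# Rough memory budget for training a 1.6B-param model with AdamW in mixed precision.
# Assumed per-parameter cost: bf16 weights (2) + bf16 grads (2)
# + fp32 master weights (4) + two fp32 Adam moments (4 + 4) = 16 bytes.
params = 1.6e9
bytes_per_param = 2 + 2 + 4 + 4 + 4
gib = params * bytes_per_param / 2**30   # ~23.8 GiB, before any activations
```

That already nearly fills a 24GB card before a single activation is stored, which is why such runs lean on gradient checkpointing, smaller optimizer states, or offloading, and end up slow.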

by mips_avatar

5/5/2026 at 5:52:54 AM

Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!

And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!

I could totally train a LLM! Or at least my family could... might need my kid to pick up and carry on the project.

But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?

This is about learning concepts, and the rest of this is mostly moot.

On the pedantic or wrong notes--What is the documented cut-off for a "large" language model? Because GPT-2 was and is described as a "large" language model. It had 1.5B parameters. You can just about get a consumer GPU capable of training that for about $400 these days.

by nucleardog

5/5/2026 at 8:16:54 AM

Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Inversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?

In my own very humble opinion, it becomes "Large" when it outgrows non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs cut off).

And by the way, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did; it's an order of magnitude away from being feasible.
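A quick sanity check of the claim, using the common rule of thumb that training costs roughly 6 FLOPs per parameter per token. All of the throughput and size numbers below are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope compute for training on CPU vs GPU.
# Rule of thumb: training ~ 6 FLOPs per parameter per token.
params = 1.5e9          # a GPT-2-sized model, as discussed in the thread
tokens = 1e9            # a modest training run
cpu_flops = 1e12        # optimistic sustained throughput for a many-core CPU
gpu_flops = 100e12      # a single modern GPU in mixed precision

total_flops = 6 * params * tokens
cpu_days = total_flops / cpu_flops / 86400   # ~100+ days
gpu_hours = total_flops / gpu_flops / 3600   # ~a day
```

Even with generous CPU assumptions, the gap is two orders of magnitude, which matches the "give it a go and see" experience.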

by baalimago

5/5/2026 at 1:33:25 PM

> Yeah it's just a semantic pet peeve.

I'm not sure. Microsoft calls Phi-4 a small language model, so the distinction is considered meaningful to some people working in the space. My own view is that the term "LLM" implies something about the capabilities of the model in 2026. Maybe there's not a hard definition of the term, but whatever the definition is, the model in the article wouldn't make it.

by bachmeier

5/5/2026 at 12:35:52 PM

Calling anything "large" in computing is problematic since hardware keeps improving. GPT-1 was an LLM in 2018 and had 117M parameters; when did it stop being large?

GPT would have been a better term than LLM, but unfortunately became too associated with OpenAI. And then, what about non-transformer LLMs? And multimodal LLMs?

Maybe we should just give up, shrug and call it "AI".

by joefourier

5/5/2026 at 5:16:08 PM

> Yeah it's just a semantic pet peeve. Let me ask you this: What is a "Language Model", if this is a "Large Language Model"? Inversely, if a 1.5B model is "Large" then what is the recent 1T param models? "Superlarge"?

Sure, we could do it like we did radio frequencies! Most of what we use are "High Frequency" and above... Very High Frequency, Ultra High Frequency, Super High Frequency, Extremely High Frequency.

> In my own very humble opinion, it becomes "Large" when it's out of non-specialized hardware. So currently, a model which requires more than 32GB vram is large (as that's roughly where the high-end gaming GPUs cut off).

So the definition shifts over time based on the market availability of RAM? And can also go backwards? I can't really see anyone bothering to look up the state of the GPU market in order to determine correct terminology whenever they want to talk about this stuff (or interpret old comments, or...).

That also decouples the terminology from the actual capabilities, which is what people are generally more interested in. GPT-3 was a "large" language model at the present time. However, the seemingly much more capable Gemma 4 was a large language model at the time GPT-3 was in use, but isn't a large language model right now.

I kinda question the arbitrary line drawn here too--32GB VRAM? Where I am that's a ~$5-6k problem. I'm not sure I'd call that a "consumer" product any more than the $20k data center cards regardless of the OEM intent, but we could argue semantics on that one too.

Fundamentally, defining it this way just seems kind of... useless? It's borderline a meaningless modifier already. This just defines it in a way that's so complex to use or interpret that it's just meaningless in a different way.

For what it's worth, I'd vote to use "large" to mean "big enough to be general purpose", more differentiating from the small, specialized models that came before.

> And btw, there is no way you can train a language model on a CPU, even with ddr5, lest you wait a whole week for a single training cycle. Give it a go! I know I did, it's a magnitude away from being feasible.

Yeah, was mostly being silly--tried to allude to that with the "intergenerational project" comment toward the end there.

Though I _did_ try doing some inference on CPU, which is how I found out that these Xeons I have don't implement AVX512. Surprisingly Gemma 4 (2B) was able to spit out a solid 13-14 tok/s! Was expecting more like... 0.13.

by nucleardog

5/5/2026 at 7:07:52 AM

Then rewrite the title and call it "learn how to build a non-usable LLM from scratch".

by Malcolmlisk

5/5/2026 at 7:25:03 AM

Opus 4.7 is non-usable for the tasks I have, but it's considered an LLM.

And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.

by improbableinf

5/5/2026 at 12:00:35 PM

What tasks is it non-usable for?

by skinfaxi

5/5/2026 at 5:48:36 AM

Nice. What scale does this realistically reach on a single machine?

by hiroakiaizawa

5/5/2026 at 6:42:02 AM

Model: 36L/36H/576D, 144.2M params

Runs on a Blackwell 6000 Max-Q, using 86GB of VRAM. Training supposedly takes 3h40m.
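As a sanity check on that figure: a standard GPT block has roughly 12·d² parameters per layer (about 4·d² for the attention projections and 8·d² for a 4x-expansion MLP), and with the quoted 36 layers and d_model = 576 that accounts for nearly all of the 144.2M. The remainder would be embeddings and norms; this assumes the repo uses a standard block, which the quoted config alone doesn't confirm.

```python
# Back-of-envelope parameter count for the quoted config: 36 layers, d_model = 576.
# Assumed standard GPT block: QKV + output projections ~ 4*d^2,
# MLP with 4x expansion ~ 8*d^2, so ~12*d^2 per layer (biases/norms ignored).
layers, d = 36, 576
block_params = 12 * d * d * layers   # ~143.3M of the reported 144.2M
```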

by lynx97

5/5/2026 at 7:15:30 AM

The documentation is helpful enough to get you started.

by steveharing1

5/5/2026 at 9:36:51 AM

If someone is interested, I am giving short courses with walkthroughs on how to train your LLM from scratch via AI Study Camp.

by fabian_shipamax

5/5/2026 at 4:55:48 AM

This looks great for a first introduction to training LLMs, and it looks simple enough to try this locally. Great job!

by iamnotarobotman

5/5/2026 at 3:29:06 PM

I'm not sure using pytorch counts as "from scratch" anymore. I'm not saying you should avoid the stdlib or anything crazy, but at the point where you're pulling in for-purpose libraries it really doesn't seem like "from scratch" to me.

by yoklov

5/5/2026 at 4:14:58 PM

Point taken, but I think to most ML folks, PyTorch basically is the stdlib.

by wrs

5/5/2026 at 4:23:03 PM

Can anyone suggest or come up with viable "use cases" of a custom LLM like this? I wouldn't mind giving it a try but ideally I'm looking for something that is not just a toy.

by borplk

5/5/2026 at 11:53:59 AM

This is a really interesting direction. Thanks for sharing!

by Miles_Stone

5/5/2026 at 2:02:16 PM

That's interesting, the UI is good.

by reviewyourai

5/5/2026 at 9:46:39 AM

[flagged]

by Ozzie-D

5/5/2026 at 9:21:53 AM

I know it's a bit of a joke, but "I Built a Neural Network from Scratch in SCRATCH" gave me, a complete outsider, a lot of insight into how neural networks work.

https://www.youtube.com/watch?v=5COUxxTRcL0

by rithdmc

5/5/2026 at 10:11:32 AM

[flagged]

by gbkgbk

5/5/2026 at 7:42:26 AM

[flagged]

by flowdesktech

5/5/2026 at 7:58:52 AM

That’s actually super interesting

by yjaspar