2/3/2026 at 2:49:11 PM
by surprisetalk
2/6/2026 at 5:15:34 PM
For the visual learners, here's a classic intro to how LLMs work: https://bbycroft.net/llm
by helloplanets
2/6/2026 at 5:31:47 PM
Lovely visualization. I like the very concrete depiction of middle layers "recognizing features", which makes the whole machine feel more plausible. I'm also a fan of visualizing things, but I think it's important to appreciate that some things (like a 10,000-dimension vector as the input, or even a 100-dimension vector as an output) can't be concretely visualized, and you have to develop intuitions in more roundabout ways.
I hope they make more of these; I'd love to see a transformer presented more clearly.
by tpdly
2/6/2026 at 4:25:51 PM
This is just scratching the surface -- where neural networks were thirty years ago: https://en.wikipedia.org/wiki/MNIST_database
If you want to understand neural networks, keep going.
by esafak
2/7/2026 at 12:37:06 AM
Which, if you are trying to learn the basics, is actually a great place to start ...
by abrookewood
2/5/2026 at 5:30:35 AM
The original Show HN: https://news.ycombinator.com/item?id=44633725
by brudgers
2/7/2026 at 3:34:16 AM
- While impressive, it still doesn't tell me why a neural network is architected the way it is, and that, my bois, is where this guy comes in: https://threads.championswimmer.in/p/why-are-neural-networks...
- Make a visualization of the article above and it would be the biggest aha moment in tech
by vivzkestrel
2/7/2026 at 6:50:39 AM
Really cool. The animations within a frame work well.
by droidist2
2/6/2026 at 11:31:25 PM
This Welch Labs video is very helpful: https://www.youtube.com/watch?v=qx7hirqgfuU
by swframe2
2/7/2026 at 12:58:49 AM
Super cool visualization. Found this vid by 3Blue1Brown super helpful for visualizing transformers as well: https://www.youtube.com/watch?v=wjZofJX0v4M&t=1198s
by chan1
2/7/2026 at 6:23:55 AM
Their series on LLMs, neural nets, etc., is amazing.
by bilbo-b-baggins
2/7/2026 at 5:54:56 AM
I like the CRT-like filter effect.
by vicentwu
2/6/2026 at 5:36:39 PM
I like the style of the site; it has a "vintage" look. Don't think it's a moire effect, but yeah, looking at the pattern.
by ge96
2/6/2026 at 8:37:50 PM
Lucky you!
by Bengalilol
2/6/2026 at 8:55:49 PM
Oh god my eyes! As it zooms in (ha)
That's cool, rendering shades in the old days.
Man, those graphics are so damn good.
by ge96
2/6/2026 at 7:36:37 PM
Oh wow, this looks like a 3D render of a perceptron from when I started reading about neural networks. I guess essentially neural networks are built based on that idea? Inputs > weight function to adjust the final output to desired values?
by 8cvor6j844qw_d6
2/7/2026 at 1:15:29 AM
The layers themselves are basically perceptrons, not really any different to a generalized linear model.
The ‘secret sauce’ in a deep network is the hidden layer with a non-linear activation function. Without that you could simplify all the layers to a linear model.
by mr_toad
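The collapse-to-linear point above can be sketched in a few lines of Python (a toy example, not from any commenter — the weights are made up for illustration):

```python
# Two stacked *linear* layers collapse into a single linear layer,
# so depth adds nothing without a non-linearity in between.

def linear(w, b):
    return lambda x: w * x + b

layer1 = linear(2.0, 1.0)   # y = 2x + 1
layer2 = linear(3.0, -1.0)  # y = 3x - 1

# Composition: 3*(2x + 1) - 1 = 6x + 2, i.e. one linear layer (w=6, b=2).
collapsed = linear(6.0, 2.0)
assert all(layer2(layer1(x)) == collapsed(x) for x in [-2.0, 0.0, 5.0])

# With a ReLU between the layers, the composition is no longer linear:
relu = lambda x: max(x, 0.0)
deep = lambda x: layer2(relu(layer1(x)))
assert deep(-2.0) != collapsed(-2.0)  # -1.0 vs -10.0
```

The same argument applies to full matrices: a product of affine maps is still one affine map, which is why the non-linear activation is the "secret sauce".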
2/6/2026 at 11:38:56 PM
A neural network is basically a multilayer perceptron.
by sva_
2/6/2026 at 10:05:02 PM
Yes, vanilla neural networks are just lots of perceptrons.
by adammarples
2/6/2026 at 7:48:29 PM
I love this visual article as well:
by jazzpush2
2/6/2026 at 3:45:31 PM
Great explanation, but the last question is quite simple. You determine the weights via brute force: simply running a large amount of data where you have the input as well as the correct output (handwriting to text in this case).
by 4fterd4rk
2/6/2026 at 4:35:44 PM
"Brute force" would be trying random weights and keeping the best performing model. Backpropagation is compute-intensive, but I wouldn't call it "brute force".
by ggambetta
2/6/2026 at 4:58:11 PM
"Brute force" here is about the amount of data you're ingesting. It's no AlphaZero, which will learn from scratch.
by Ygg2
2/6/2026 at 7:49:43 PM
What? Either option requires sufficient data. Brute force implies iterating over all combinations until you find the best weights. Back-prop is an optimization technique.
by jazzpush2
2/7/2026 at 4:43:45 AM
In context of the grandparent's post:
> You determine the weights via brute force. Simply running a large amount of data where you have the input as well as the correct output
Brute force just means guessing all possible combinations. A dataset containing most human knowledge is about as brute force as you can get.
I'm fairly sure that AlphaZero's data is generated by AlphaZero. But it's not an LLM.
by Ygg2
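The distinction the thread is arguing over can be made concrete with a toy one-weight example (assumed for illustration, not from any commenter): gradient descent follows the slope of the loss, while "brute force" just samples random weights and keeps the best.

```python
import random

# Toy data: the true relationship is y = 3x, so the best weight is 3.0.
data = [(x, 3.0 * x) for x in range(1, 6)]

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Gradient descent: step against the slope of the loss. (Backprop is
# how a real network computes this slope layer by layer.)
w = 0.0
for _ in range(100):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad
assert abs(w - 3.0) < 1e-3

# Brute force: sample random weights, keep the lowest-loss one.
random.seed(0)
best = min((random.uniform(-10, 10) for _ in range(1000)), key=loss)
assert abs(best - 3.0) < 1.0  # workable for 1 weight, hopeless for billions
```

With one weight both approaches land near 3.0, but random search scales exponentially with the number of weights, which is why training uses gradients.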
2/6/2026 at 8:56:37 PM
Spent 10 minutes on the site and I think this is where I'll start my day from next week! I just love visual-based learning.
by jetfire_1711
2/6/2026 at 6:08:28 PM
This visualization reminds me of the 3blue1brown videos.
by cwt137
2/6/2026 at 6:12:02 PM
I was thinking the same thing. It's at least the same description.
by giancarlostoro
2/6/2026 at 10:22:43 PM
As someone who does not use Twitter, I suggest adding RSS to your site.
by shrekmas
2/7/2026 at 2:53:14 AM
Nice work
by atultw
2/6/2026 at 7:15:51 PM
I get 3 fps on my Chrome, most likely due to disabled HW acceleration.
by artemonster
2/6/2026 at 7:27:17 PM
High FPS on Safari, M2 MBP.
by nerdsniper
2/6/2026 at 6:53:44 PM
Nice visuals, but misses the mark. Neural networks transform vector spaces and collect points into bins. This visualization shows the structure of the computation. This is akin to displaying a matrix-vector multiplication in Wx + b notation, except W, x, and b have more exciting displays.
It completely misses the mark on what it means to 'weight' (linearly transform), bias (affine transform), and then non-linearly transform (i.e., 'collect') points into bins.
by anon291
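The Wx + b view the comment above describes can be sketched directly (the weights and threshold here are made-up values for illustration):

```python
# A layer as an affine map Wx + b followed by a non-linearity
# that "collects" points into bins.

def affine(W, b, x):
    # W: rows of weights, b: one bias per output, x: input vector.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -1.0]

z = affine(W, b, [2.0, 1.0])             # linear transform + bias
binned = [1 if v > 0 else 0 for v in z]  # hard threshold bins the point
assert z == [1.0, 0.5]
assert binned == [1, 1]
```

Real networks use smooth non-linearities (ReLU, sigmoid) instead of a hard threshold, but the geometric picture — affine transform, then a non-linear squashing that groups inputs — is the same.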
2/6/2026 at 7:28:00 PM
> but misses the mark
It doesn't match the pictures in your head, but it nevertheless does present a mental representation the author (and presumably some readers) find useful.
Instead of nitpicking, perhaps pointing to a better visualization (like maybe this video: https://www.youtube.com/watch?v=ChfEO8l-fas) could help others learn. Otherwise it's just frustrating to read comments like this.
by titzer
2/6/2026 at 6:37:41 PM
Great visualization!
by pks016
2/6/2026 at 5:45:49 PM
very cool stuff
by javaskrrt