alt.hn

1/22/2026 at 4:31:47 PM

Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)

https://huggingface.co/collections/Linum-AI/linum-v2-2b-text-to-video

by schopra909

1/23/2026 at 6:31:06 PM

Very cool, especially given that it’s a two-person team. I will be checking this out on the weekend.

Also, I’m super curious how you’re attempting to get more realistic physics with post-training.

by tariqshams

1/22/2026 at 11:22:26 PM

Great work. How many GPU hours to train?

by WhitneyLand

1/23/2026 at 6:24:37 AM

That’s amazing effort - I am impressed.

Awesome to see more small teams making impressive leaps.

by convivialdingo

1/23/2026 at 8:25:33 AM

I want to build my own video model, just for learning purposes. Is there any course that teaches it end to end?

by taherchhabra

1/23/2026 at 2:44:55 PM

I think YC just released a video on the basics of diffusion, but honestly I don’t have a good end-to-end guide.

We’re going to write up going 0->1 on a video model (all the steps) over the coming months. But it likely won’t be a class or anything like that.

https://www.linum.ai/field-notes

We want to share our learnings with folks who are curious about the space - but don’t have time to make it a full class experience.

Hopefully karpathy does that with his courses in the future!

by schopra909

1/23/2026 at 7:45:55 PM

> I want to build my own video model, just for learning purposes

Sorry, it might sound like a cliche, but try that as a prompt to a deep-thinking/reasoning model and see what comes out.

An expensive option: Look at Project #5 at https://bytebyteai.com/

by mandeepj

1/23/2026 at 5:49:11 AM

Incredibly impressive, dudes. Well done.

by popalchemist

1/23/2026 at 1:36:30 PM

> We kept a “lab notebook” of all our experiments in Notion

Couldn't find a link to this, is this public?

by whywhywhywhy

1/23/2026 at 2:40:39 PM

Not public yet — we’re going to clean it up so it’s readable and release it as blog posts. The first one will be everything you need to know about building a VAE for image and video. Should be out in a few weeks. We’re figuring out the right balance between spending time writing and all the work we have on our plate for the next model.

If you’re interested in this stuff, keep an eye on field notes (our blog).

by schopra909

1/23/2026 at 4:55:06 AM

How much compute was ultimately required to get this done?

by throwaway314155

1/22/2026 at 9:16:06 PM

Post it on r/StableDiffusion

by E-Reverance

1/24/2026 at 10:42:00 AM

Nice work. Are you guys on X?

by glohbalrob

1/23/2026 at 6:00:06 AM

[dead]

by Jack_a11y

1/22/2026 at 5:13:29 PM

Rad! huggingface link gives 404 on my side though.

by streamer45

1/22/2026 at 5:16:00 PM

Oh damn! Thanks for catching that -- going to ping the HF folks to see what they can do to fix the collection link.

In the meantime here's the individual links to the models:

https://huggingface.co/Linum-AI/linum-v2-720p

https://huggingface.co/Linum-AI/linum-v2-360p

by schopra909

1/22/2026 at 5:35:04 PM

Looks like 20GB VRAM isn't enough for the 360p demo :( need to bump my specs :sweat_smile:

by streamer45

1/22/2026 at 5:18:34 PM

Should be fixed now! Thanks again for the heads up

by schopra909

1/22/2026 at 5:20:23 PM

All good, cheers!

by streamer45

1/22/2026 at 5:45:52 PM

Per the RAM comment, you may be able to get it running locally with two tweaks:

https://github.com/Linum-AI/linum-v2/blob/298b1bb9186b5b9ff6...

1) Free up the T5 as soon as the text is encoded, so you reclaim GPU RAM

2) Manual layer offloading: move layers off the GPU once they're done being used, to free up space for the remaining layers + activations
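A minimal sketch of those two tweaks in plain Python — stand-in objects rather than the repo's actual torch modules, and all names here are hypothetical (in real code you'd call `.to("cpu")` and `torch.cuda.empty_cache()`):

```python
class FakeModule:
    """Stand-in for a torch module; just tracks which 'device' it lives on."""
    def __init__(self, name, device="cpu"):
        self.name, self.device = name, device

    def to(self, device):
        self.device = device
        return self

def generate(prompt, n_layers=4):
    # Tweak 1: load the T5, encode the prompt, then free it immediately
    # so its weights don't occupy GPU RAM during the diffusion loop.
    t5 = FakeModule("t5").to("cuda")
    text_emb = f"emb({prompt})"  # placeholder for t5(prompt)
    del t5                       # real code: offload to CPU / empty the CUDA cache

    # Tweak 2: manual layer offloading -- keep only the active transformer
    # layer on the GPU, pushing each back to CPU once it has run.
    layers = [FakeModule(f"layer{i}") for i in range(n_layers)]
    for layer in layers:
        layer.to("cuda")
        # ... run the layer on the activations here ...
        layer.to("cpu")          # reclaim VRAM for the next layer
    return text_emb, [layer.device for layer in layers]

emb, devices = generate("a cat surfing")
# every layer ends back on 'cpu'; peak VRAM holds roughly one layer at a time
```

The trade-off is extra host-to-device transfer time per layer, but it caps peak VRAM at roughly one layer's weights plus activations.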

by schopra909

1/22/2026 at 11:27:54 PM

Any idea on the minimum VRAM footprint with those tweaks? 20GB seems high for a 2B model. I guess the T5 encoder is responsible for that.

by dsrtslnd23

1/23/2026 at 1:15:26 AM

The T5 encoder is ~5B parameters, so back of the envelope that's ~10 GB of VRAM (it's in bfloat16). So 360p should take ~15 GB of VRAM (+/- a few GB based on the duration of the video generated).

We can update the code over the next day or two to provide an option to delete the T5 after the text encoding is computed (to save VRAM), and then report back the GB consumed for 360p and 720p at 2-5 seconds on GitHub so there are more accurate numbers.

Beyond the 10 GB from the T5, there's just a lot of VRAM taken up by the context window of 720p video (even though the model itself is 2B parameters).
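The arithmetic behind those figures is just parameters × bytes-per-parameter (2 bytes in bfloat16); a quick sanity check, with the ~1 GB activation allowance being a rough assumption, not a measured number:

```python
# Back-of-envelope VRAM for bfloat16 weights: params * 2 bytes.
def weight_gb(n_params_billion, bytes_per_param=2):
    return n_params_billion * 1e9 * bytes_per_param / 1e9  # GB

t5_gb = weight_gb(5)     # ~5B-param T5 encoder -> ~10 GB
model_gb = weight_gb(2)  # 2B-param video model -> ~4 GB

# ~15 GB for 360p once a rough ~1 GB of activations is added on top
# (the real activation figure varies with video duration).
total_360p_gb = t5_gb + model_gb + 1
```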

by schopra909

1/23/2026 at 7:48:47 AM

The 5B text encoder feels disproportionate for a 2B video model. If the text portion is dominating your VRAM usage it really hurts the inference economics.

Have you tried quantizing the T5? In my experience you can usually run these encoders in 8-bit or even 4-bit with negligible quality loss. Dropping that memory footprint would make this much more viable for consumer hardware.
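Rough arithmetic for what quantizing the ~5B-param encoder could save, ignoring quantization overhead such as scales and zero-points (in practice this is commonly done with libraries like bitsandbytes, but this sketch is just the memory math):

```python
# Encoder weight footprint at different precisions (overhead ignored).
def encoder_gb(n_params=5e9, bits=16):
    return n_params * bits / 8 / 1e9  # GB

bf16_gb = encoder_gb(bits=16)  # ~10 GB as shipped
int8_gb = encoder_gb(bits=8)   # ~5 GB in 8-bit
int4_gb = encoder_gb(bits=4)   # ~2.5 GB in 4-bit
```

So 8-bit alone would cut the text-encoder footprint in half, which is the bulk of the 20 GB people are hitting.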

by storystarling

1/23/2026 at 2:49:31 PM

That all being said, you can just delete the T5 from memory after encoding the text to save on memory.

The 2B parameters will take up 4 GB of memory, but activations will be a lot more given the size of the context window for video.

A 720p, 5-second video is roughly 100K tokens of context.
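The ~100K figure is plausible under typical latent-video tokenization, though the exact factors below (8x spatial VAE downsampling, 4x temporal compression, 2x2 patches, 24 fps) are assumptions, not confirmed details of this model:

```python
# Rough token count for a latent video transformer, assumed factors:
#   8x spatial VAE downsampling, 4x temporal compression, 2x2 patchify.
def video_tokens(w, h, seconds, fps=24, spatial=8, temporal=4, patch=2):
    latent_w = w // spatial // patch       # 1280 -> 80
    latent_h = h // spatial // patch       # 720  -> 45
    latent_frames = seconds * fps // temporal  # 5s @ 24fps -> 30
    return latent_w * latent_h * latent_frames

tokens = video_tokens(1280, 720, 5)  # 108,000 -- in the ~100K ballpark
```

Attention activations scale with that context length, which is why a 2B model can still need far more VRAM than its 4 GB of weights.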

by schopra909

1/23/2026 at 2:47:08 PM

Great idea! We haven’t tried it but def interested to see if that works as well.

When we started down this path, T5 was the standard (back in 2024).

Likely won’t be the text encoder for subsequent models, given its size (per your point) and age.

by schopra909
