alt.hn

1/13/2025 at 8:51:29 AM

Sky-T1: Train your own O1 preview model within $450

https://novasky-ai.github.io/posts/sky-t1/

by fofoz

1/13/2025 at 2:46:09 PM

> We initially trained a 32B model using 3–4K math problems from the Numina dataset (provided by STILL-2), achieving a significant improvement in AIME24 accuracy from 16.7% to 43.3%. However, when we incorporated coding data generated from the APPs dataset into the training process, AIME24 accuracy dropped to 36.7%. We hypothesize that this decline is due to the distinct reasoning approaches required for math and coding tasks.
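For readers who want to see what a data-mixture SFT run of this kind looks like, here is a minimal sketch assuming HuggingFace TRL's SFTTrainer; the file names, the "text" column, and the hyperparameters are illustrative placeholders, not the Sky-T1 authors' actual recipe.

```python
# Illustrative sketch of a data-mixture supervised fine-tuning run like the
# one the quote describes -- NOT the authors' pipeline. Assumes TRL's
# SFTTrainer and JSONL files whose "text" column holds problem + reasoning
# traces; paths and hyperparameters below are made up for the example.
from datasets import load_dataset, concatenate_datasets
from trl import SFTConfig, SFTTrainer

math_ds = load_dataset("json", data_files="numina_math_traces.jsonl", split="train")  # hypothetical file
code_ds = load_dataset("json", data_files="apps_code_traces.jsonl", split="train")    # hypothetical file

# Math-only mixture first; the post reports that adding the coding traces
# lowered AIME24 accuracy until the mixture was rebalanced.
train_ds = concatenate_datasets([math_ds, code_ds])

config = SFTConfig(
    output_dir="sky-t1-sft",
    dataset_text_field="text",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    learning_rate=1e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",  # base model named in the Sky-T1 post
    train_dataset=train_ds,
    args=config,
)
trainer.train()
```

The math-only versus math+coding comparison in the quote amounts to running this once on math_ds alone and once on the concatenated mixture, then evaluating both checkpoints on AIME24.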

This is interesting, especially compared to large models that were trained on much more data. I wonder if o1 is trained in a different way than GPT-4o. Do they rely only on synthetic data (plus some hand-crafted datasets)? But then how would o1 know as many facts as GPT-4o, which suggests those facts were in its training data?

Can someone with more understanding and knowledge weigh in on this?

by elashri

1/13/2025 at 7:54:44 PM

They fine-tuned QwQ to perform well on a benchmark. For the past two years there has been a constant stream of "X fine-tune beats Y closed model on Z benchmark". This isn't interesting and has never been interesting; see Goodhart's Law. If you're actually using local models day to day, you will quickly find that fine-tunes are almost universally a waste of time. Even when it comes to something like smutty roleplay, gaslighting a model can often lead to more interesting and consistent results, because fine-tunes are basically always overfit to their training data.

by thot_experiment

1/13/2025 at 5:09:31 PM

They fine-tuned an existing model. Training such a model from scratch that cheaply would've been nuts.

by zamadatix

1/13/2025 at 7:19:19 PM

For math only.

by ipsum2