alt.hn

3/27/2026 at 12:11:12 PM

TinyLoRA – Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118

by sorenjan

4/1/2026 at 5:35:35 AM

Not sure if I buy it. First, computing the SVD to obtain U, Σ, V is computationally expensive, so this would only work if we are not fine-tuning very big models.

But my real concern is the results. The "13 parameters" claim looks like bait, because it is a single result from fine-tuning a model on a very simple math benchmark, grade-school math (GSM8K), which is already heavily saturated on every model. Besides, it seems to happen only for the Qwen family of models... It looks like GSM8K was part of Qwen's training set, and this TinyLoRA fine-tuning made the last adjustments needed to fully recover that overtraining.

by dollo_7

4/1/2026 at 6:01:26 AM

Fair points, especially on GSM8K saturation and Qwen possibly already sitting close to the solution. That said, even if this is mostly "last-mile alignment", the fact that it can be done with such a tiny signal is still interesting: it suggests the gap between capability and behavior might be much smaller (and cheaper to bridge) than we assume.

by sachaa

4/1/2026 at 3:06:42 PM

> the gap between capability and behavior might be much smaller

Can you elaborate a bit on what you mean by the gap?

by endofreach

4/2/2026 at 3:51:34 AM

[dead]

by romerocruzsa

4/1/2026 at 3:16:16 PM

I've done a lot of exploratory work with Stable Diffusion LoRAs, and I actually do buy that there's some juice here, though it's almost certainly not nearly as good as other techniques can be. In particular, this technique will likely avoid the intruder dimension problem which plagues naive LoRA. SVD is expensive, but you only have to do it once at the beginning of training.

I haven't done much research lately, but when I was working on it, I was having substantial success training an adapter of the form U_k @ P @ A, where U_k was the top k left singular vectors of the underlying weight, and then P and A were your typical LoRA projection matrices.
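That adapter shape can be sketched in a few lines. This is an illustrative NumPy sketch, with shapes and initialization chosen by me, not taken from the paper or my actual training code:

```python
import numpy as np

# Sketch of an adapter of the form U_k @ P @ A: U_k is the frozen top-k
# left-singular basis of the pretrained weight; P and A are the trainable
# LoRA-style factors. All sizes here are arbitrary illustrations.
rng = np.random.default_rng(0)
d_out, d_in, k, r = 64, 48, 8, 4

W = rng.normal(size=(d_out, d_in))             # pretrained weight
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_k = U[:, :k]                                 # frozen (d_out, k)

P = rng.normal(size=(k, r)) * 0.01             # trainable (k, r)
A = np.zeros((r, d_in))                        # trainable, zero-init (r, d_in)

delta_W = U_k @ P @ A                          # (d_out, d_in), zero at init
W_adapted = W + delta_W

print("trainable params:", P.size + A.size)    # 224, vs 64*48 = 3072 full
```

Because delta_W lives in the span of U_k, the update stays inside the weight's existing top singular directions, which is one way to avoid introducing intruder dimensions.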

The 13 parameters are kind of misleading here; the real juice is going to be in the P_i fixed random matrices. My suspicion is that they are overfitting to the benchmark, but they almost certainly are observing a real gain in model capacity that is largely due to avoiding the intruder dimension problem.

by cheald

4/1/2026 at 12:41:31 PM

They're using the truncated SVD, not the full decomposition, which is computationally cheaper.
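For reference, a rank-k truncated SVD can be computed without forming the full decomposition, e.g. with SciPy's iterative `svds` solver (sizes here are arbitrary):

```python
import numpy as np
from scipy.sparse.linalg import svds

# Truncated SVD sketch: svds computes only the top-k singular triplets
# with an iterative solver, instead of the full decomposition.
rng = np.random.default_rng(0)
W = rng.normal(size=(300, 200))

k = 8
U_k, S_k, Vt_k = svds(W, k=k)  # note: singular values come back in ascending order

print(U_k.shape, S_k.shape, Vt_k.shape)  # (300, 8) (8,) (8, 200)
```

For small k this is far cheaper than a full SVD, and it only needs to be done once before training.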

by sorenjan

4/1/2026 at 5:57:30 AM

Yeah, my big problem with the paper is it just might be an artifact of Qwen's training process.

by robrenaud

4/1/2026 at 10:39:27 AM

In all fairness, most of the unique stuff I can do is probably an artifact of my training process, so it seems unfair to deny an LLM the same accommodation.

by taneq

4/1/2026 at 1:25:42 PM

How much did your training cost society?

by nativeit

4/1/2026 at 4:24:06 PM

This got me thinking, and it might actually be a comparable amount. Let's estimate that 12 years of schooling runs at minimum $100,000 per student, at least in the US [1]. Then add whatever comes after that: a bunch more money for paid (college) or "unpaid" (self-taught skills and improvements) education, plus the likely biggest, yet hardest-to-quantify, portion for white-collar workers: the experience and "value" that professional work equips one with.

Now divide an average SOTA LLM's training cost (or a guess, since these numbers aren't always published, as far as I'm aware) by the number of users, or, if you wanted to be stricter, the number of people it has proven useful for (what else would training be for?), and it might not be so far off anymore.

Of course, whether it makes sense to divide and spread out the LLMs' costs across users in order to calculate an "average utility" is debatable.

[1] https://www.publicschoolreview.com/average-spending-student-...

by msdz

4/1/2026 at 6:28:34 AM

Is it an April Fools' publication?

by MASNeo

4/1/2026 at 10:50:25 AM

This hit me too hard.

by darkxanthos

4/2/2026 at 1:59:31 AM

It's not "13 parameters to reason": they just rotated the full 8B-parameter space in 13 dimensions and found a rotation that was still able to reason.

Depending on the latent structure, a nice rotation might exist that would be perfect for some specific problem, but you still have to search for it, and it's not guaranteed to exist.

But it's a nice step towards LLM parameter-space interpretability.

by 5555watch

4/1/2026 at 5:34:48 AM

>One theory is that the knowledge required to solve the task is already stored in the parameters of the model, and only the style has to change for task success

>In particular, learning to generate longer outputs may be possible in few parameters

Reminded me of: https://arxiv.org/abs/2501.19393

>we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps

Maybe, indeed, the model simply learns to insert the EOS token (or similar) later, and the capability is already in the base model

by kgeist

4/1/2026 at 1:49:24 PM

This is interesting and all, but “LoRA” is painfully close to “LoRa” (which is related to radio networking, not AI) when just scanning a list of topics. We’re never going to beat the Shannon limit on acronyms and initialisms.

I’m glad the rest of the anchor text gave some context.

by cestith

4/1/2026 at 2:05:49 PM

A version of this comment is posted in all submissions about Low Rank Adapters. I don't see how "Learning to reason in 13 parameters" would apply to low power radio communication, so it's even less relevant this time.

> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

https://news.ycombinator.com/newsguidelines.html

by sorenjan

4/1/2026 at 2:26:51 PM

> I’m glad the rest of the anchor text gave some context.

I’m sorry if that reads like a complaint.

by cestith

4/1/2026 at 2:01:39 PM

I see this comment on every single LoRA post despite the vast majority of posts being about LoRA not LoRa. Can we please stop beating this dead horse?

by jorlow

4/1/2026 at 2:10:37 PM

Never heard of the radio thing. I suspect LoRA has already eclipsed LoRa in general usage. It's probably more appropriate to complain on a LoRa post that it's too close to LoRA.

by ticulatedspline

4/1/2026 at 3:37:54 PM

And I'm the exact opposite. I never heard about LoRA, but I have used LoRa and was curious to see what it had to do with reasoning.

It's just an unfortunate name collision: disambiguating by use of capitals only works with computers.

by HeyLaughingBoy

4/1/2026 at 4:53:22 PM

Likely the reasoning is part of the original model. It is well known that it is not possible to get a 1B-parameter model to reason, even with RL.

by ashater

4/1/2026 at 6:30:32 AM

If I understand it correctly, the analogy could be:

Let's say we have an expert low-level programmer and we try to teach him algebra. Either we:

  - (SFT): give him an algebra book with new nomenclature, definitions, and syntax
  - (RL): let him learn algebra using C syntax

by Xx_crazy420_xX

4/1/2026 at 11:47:47 AM

I don't think so.

Fine-tuning works on an input/output basis: you are rewarded for producing a plausible output _now_.

RL rewards you later for the output you produce now. So you have to learn to generate a lot of activity, but you are only rewarded if you end up in the right place.

In SFT you are rewarded for generating tokens plausible given the proof, one token at a time. In RL you are expected to generate an entire proof, and then you are rewarded or punished only when the proof is done.
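The contrast can be sketched numerically. The probabilities, the proof checker, and all numbers below are made up purely for illustration:

```python
import math

# SFT: a dense, per-token signal -- cross-entropy on each gold token.
target_probs = [0.9, 0.7, 0.95, 0.8]  # model's prob of each gold token
sft_loss = -sum(math.log(p) for p in target_probs) / len(target_probs)
print(f"SFT loss (every token graded): {sft_loss:.3f}")

# RL: a sparse, trajectory-level signal -- one reward for the whole proof.
def verify_proof(proof_tokens):
    # Toy verifier: the proof only counts if it reaches a conclusion.
    return 1.0 if proof_tokens[-1] == "QED" else 0.0

trajectory = ["step1", "step2", "QED"]
reward = verify_proof(trajectory)  # graded only once, at the end
print(f"RL reward (whole proof graded at once): {reward}")
```

In the SFT case every token contributes to the loss immediately; in the RL case the intermediate steps receive no direct signal at all.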

by nathan_compton

4/1/2026 at 2:01:15 AM

The quality of custom models trained on proper reasoning datasets [0], even at small parameter counts (3-7B is the sweet spot), is incredible now.

[0]: cartesien.io or Salesforce's WebscaleRL

by a-t-c-g

4/1/2026 at 2:45:57 AM

What are you basing how good they are on? Personal experience or some benchmarks?

by objektif

4/1/2026 at 3:55:36 AM

Benchmarks; we have internal ones testing reasoning-fine-tuned models vs. frontier models + prompts.

For some use cases it's parity performance at 1/20th the cost, up to exceeding frontier performance at 1/10th the cost. The trade-off is, of course, narrow applicability.

by a-t-c-g

4/1/2026 at 12:51:15 PM

How can I learn more about these models? Are they open source?

by objektif

4/1/2026 at 5:25:59 PM

There are plenty of OSS fine-tuned models and base models around. If you're looking to do this on your own dataset, it's worth getting in touch with cartesien.io or wiring up https://github.com/SalesforceAIResearch/PretrainRL-pipeline

by a-t-c-g

4/1/2026 at 7:13:18 PM

Thank you.

by objektif

4/1/2026 at 1:08:22 AM

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk so there is still room for improvement.

by measurablefunc

4/1/2026 at 1:20:38 AM

Except learning to reason is a far cry from curve fitting. Our brains have more than five parameters.

by esafak

4/1/2026 at 2:11:46 AM

After a quick browse of the content, my understanding is that it's more like this: with a very compressed diff vector applied to a multi-billion-parameter model, the model could be 'retrained' to reason (score) better on a specific topic, e.g. math, as used in the paper.

by voxelghost

4/1/2026 at 4:08:49 AM

It's the statistics equivalent of 'no one needs more than 640KB of RAM'

by sdenton4

4/1/2026 at 1:29:28 PM

My very first PC was a Packard Bell with 640KB of RAM. If I’d known, I’d have saved all my RAM for retirement…

by nativeit

4/1/2026 at 2:10:32 AM

speak for yourself!

by ekuck

4/1/2026 at 1:54:37 AM

Reasoning capability might just be some specific combination of mirror neurons.

Even some advanced math usually involves applying patterns found elsewhere to new topics.

by est

4/1/2026 at 2:30:23 AM

I agree, I don't think gradient descent is going to work in the long run for the kind of luxurious & automated communist utopia the technocrats are promising everyone.

by measurablefunc

4/1/2026 at 11:24:12 AM

Most data in the training set of most reasoning models is crap, I guess.

by vasco

4/1/2026 at 6:59:35 PM

Can a model that small dynamically grow? In other words, can it train itself AS it progresses through the network?

by nekusar

4/1/2026 at 3:20:48 AM

[dead]

by Sim-In-Silico

4/1/2026 at 10:03:33 AM

[dead]

by evermore611

4/1/2026 at 5:01:31 AM

[dead]

by ValveFan6969

4/1/2026 at 1:56:39 AM

[dead]

by ValveFan6969

4/1/2026 at 1:16:14 PM

[flagged]

by volume_tech

4/2/2026 at 2:56:04 AM

That's a wonderful explanation (and roughly the conclusion I arrived at after browsing the paper); I just wish it had been in the original post.

by voxelghost

4/1/2026 at 3:54:29 AM

Such low dimensionality of the LoRA vector must surely result in a close-to-linear modification to the KV calculation. This seems to me to imply that what we call "reasoning" is latent within the model. It's pretty clear I didn't read the paper; I'm sure the authors address this.

by matt123456789

4/1/2026 at 3:58:47 AM

Yes - some degree of reasoning appears to be latent in the structure of language itself. But models trained explicitly on reasoning-focused data still perform better than models trained only on general corpora.*

*At least up to 300B parameters, based on the models we’ve tested.

by a-t-c-g

4/1/2026 at 3:07:50 PM

I wonder what the relationships are between the grammar of a language, what it can compute, how it encodes, and what the minimal parameters/structure for reasoning look like...

by crawfordcomeaux

4/1/2026 at 5:53:09 AM

If 13 parameters can unlock better reasoning, then we will not be "training" models, we'll be steering them. Most of the capability is already there.

The real unlock isn't TinyLoRA; it's what this implies: ultra-cheap, continuous adaptation. The bottleneck shifts from compute to having a good reward signal.

by sachaa