alt.hn

4/19/2025 at 1:47:15 PM

Inferring the Phylogeny of Large Language Models

https://arxiv.org/abs/2404.04671

by weinzierl

4/19/2025 at 3:59:02 PM

An intuitive and expected result (except perhaps for the performance prediction). I'm glad somebody did the hard work of proving it.

Though if the signal is so clearly visible, how come AI detectors perform so badly?

by PunchTornado

4/19/2025 at 5:29:33 PM

This experiment involves each LLM responding to 128 or 256 prompts. AI detection is generally focused on determining the writer of a single document, not comparing two analogous sets of 128 documents and determining whether the same person/tool wrote both. Totally different problem.

by Calavar

4/19/2025 at 4:38:29 PM

It might be because detecting if output is AI generated and mapping output which is known to be from an LLM to a specific LLM or class of LLMs are different problems.

by haltingproblem

4/19/2025 at 7:07:31 PM

They're discovering the wrong thing. And the analogy with biology doesn't hold.

They're sensitive not to architecture but to training data. That's like grouping animals by what environment they lived in, so lions and alligators are closer to one another than lions and cats.

The real trick is to infer the underlying architecture and show the relationships between architectures.

That's not something you can tell easily by just looking at the name of the model. And that would actually be useful. This is pretty useless.

by light_hue_1

4/23/2025 at 7:39:57 AM

You are the one making the wrong biological analogy. Architecture isn't comparable to genes any more than training data is comparable to genes, and training data isn't comparable to environment. Drawing these kinds of analogies brings you nothing but false confidence and misunderstanding.

What they do in the paper, on the other hand, is apply the methods of biology and obtain a result akin to a phylogeny, derived not from a biological analogy but from a biologically inspired method.
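To make "biologically inspired method" concrete, here's a toy sketch of the general idea (not the paper's actual pipeline; the model names and outputs are invented): compare each pair of models by a distance between their output token distributions, then build a tree with UPGMA, a standard clustering algorithm from phylogenetics.

```python
# Toy sketch: a crude "phylogeny" from model outputs.
# Models, outputs, and distances here are invented for illustration.
from collections import Counter
import math

def profile(texts):
    """Token-frequency distribution over a set of model outputs."""
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_distance(p, q):
    """Jensen-Shannon distance (in bits) between two distributions."""
    def kl(a, b):
        return sum(a[t] * math.log2(a[t] / b[t]) for t in a if a[t] > 0)
    toks = set(p) | set(q)
    pa = {t: p.get(t, 1e-12) for t in toks}  # smooth missing tokens
    qa = {t: q.get(t, 1e-12) for t in toks}
    m = {t: (pa[t] + qa[t]) / 2 for t in toks}
    return math.sqrt((kl(pa, m) + kl(qa, m)) / 2)

def upgma(names, d):
    """Naive UPGMA: repeatedly merge the closest pair of clusters,
    averaging distances weighted by cluster size. Returns a Newick-ish
    string describing the tree."""
    clusters = {n: 1 for n in names}  # cluster label -> size
    d = dict(d)                       # keys: sorted (label, label) pairs
    while len(clusters) > 1:
        a, b = min(((x, y) for x in clusters for y in clusters if x < y),
                   key=lambda pair: d[pair])
        merged = f"({a},{b})"
        for c in list(clusters):
            if c in (a, b):
                continue
            da = d[tuple(sorted((a, c)))]
            db = d[tuple(sorted((b, c)))]
            d[tuple(sorted((merged, c)))] = (
                clusters[a] * da + clusters[b] * db
            ) / (clusters[a] + clusters[b])
        clusters[merged] = clusters.pop(a) + clusters.pop(b)
    return next(iter(clusters))

# Pretend "tuned" is a fine-tune of "base", while "other" is unrelated.
outputs = {
    "base":  ["the cat sat on the mat"] * 4,
    "tuned": ["the cat sat on the rug"] * 4,
    "other": ["quantum flux capacitor engaged"] * 4,
}
profiles = {n: profile(ts) for n, ts in outputs.items()}
names = sorted(outputs)
dist = {tuple(sorted((a, b))): js_distance(profiles[a], profiles[b])
        for a in names for b in names if a < b}
tree = upgma(names, dist)  # groups "base" with "tuned" before "other"
```

The point of the sketch: nothing in it looks at weights or architecture, only at output statistics, which is exactly why the recovered tree reflects training-data lineage rather than architectural family.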

by littlestymaar

4/19/2025 at 9:09:08 PM

This is provocative, but only by being off-base: why would we need to work backwards to determine architecture?

Similarly, "you can tell easily by just looking at the name of the model" is an unfounded assertion. No, you can't. It's perfectly cromulent, accepted, and quite regular to have a fine-tuned model with nothing in its name indicating what it was fine-tuned on. (We can observe the effects of this even without being familiar enough with the domain to know it firsthand, e.g. Meta making it a requirement for Llama 4 derivatives to carry the name.)

by refulgentis