alt.hn

2/23/2026 at 6:07:25 PM

Detecting and Preventing Distillation Attacks

https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks

by meetpateltech

2/23/2026 at 6:46:33 PM

Claiming they have the unrestricted right to scrape whatever information they want off the internet, but complaining when others do it to them and playing the 'China bad' card. Just ironic.

by cherryteastain

2/23/2026 at 6:48:14 PM

It's not about rights, it's about capabilities, just like in any other adversarial scenario between non-lawyers.

by direwolf20

2/23/2026 at 10:50:27 PM

This violates the ToS, but I don't think it's distillation. Distillation requires knowing the logits, which the current API does not provide. This is just synthetic data generation. Anthropic definitely knows the difference.

by WiSaGaN

2/23/2026 at 11:32:57 PM

Yes, it is annoying that companies keep calling it “distillation” when it’s really imitation learning. In fact, the closest analogy is probably “scraping,” which is pretty ironic.

by janalsncm
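
For readers unfamiliar with the distinction the two comments above draw, here is a minimal sketch in plain Python (toy logit values, not any lab's actual training code): true distillation matches the teacher's full output distribution via a KL term over logits, while training on API text outputs only gives the student a hard label per token.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale by temperature (higher T softens the distribution), then normalize.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.
    Needs the teacher's full logit vector for the token, which a
    text-only API does not expose."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def imitation_loss(teacher_logits, student_logits):
    """Cross-entropy against only the teacher's emitted token (argmax here
    as a stand-in for sampling) -- i.e. what training on scraped API
    outputs actually amounts to."""
    label = max(range(len(teacher_logits)), key=lambda i: teacher_logits[i])
    q = softmax(student_logits)
    return -math.log(q[label])
```

The KL loss is zero only when the student reproduces the teacher's whole distribution; the imitation loss ignores everything except which token the teacher happened to emit, which is why the comment calls it synthetic data generation rather than distillation.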

2/23/2026 at 9:31:20 PM

With OAI and Gemini having had anti-distillation measures in place for quite a while now, I thought Anthropic was purposefully letting Chinese labs distill, in the hope that it would improve their safety and alignment by default (at least closer to Claude's level).

Apparently not. (Or not anymore.)

It's not like they can actually prevent distillation anyway, even by hiding the thinking output: you can just turn extended thinking off, and all current Claude models will switch to thinking in the open (in non-reasoning output) whenever they encounter a hard agentic task. So all it takes for distillation to continue is for some real users to sell a competing AI lab their real usage trajectory data, which is by definition undetectable, and many people would probably be glad to do it.

by 2001zhaozhao

2/23/2026 at 10:26:40 PM

Whatever you think of the ethics of doing this, it does hurt the reputation of the follower labs in my mind. If their capabilities can't exist without the work of the frontier labs, they're less equal competitors and more the guys trying to sell you a shoddy knockoff. Not that there's no use case for shoddy knockoffs.

by resfirestar

2/23/2026 at 11:50:23 PM

It’s not that capabilities could not exist without the original work. It’s more that the shortest path between A and B isn’t repeating all of the same work.

Further, although the media like to depict Chinese labs as “just copying,” I think there’s a ton of hubris involved. First of all, American labs are filled with Chinese researchers trained at the very same schools as the staff of Chinese labs. Second, if you look at the contributions from Chinese labs, many have pushed the state of the art.

Zooming out, data is kind of an arbitrary line to draw. Anthropic didn’t invent the neural network, backpropagation, or the transformer. They didn’t invent all of the post-training techniques they are using. They might even be using some pretrained open models during pre-training and data prep. They got all of those for free because those things are shared openly.

by janalsncm

2/23/2026 at 7:12:38 PM

I find this extremely concerning: "Countermeasures. We are developing Product, API and model-level safeguards designed to reduce the efficacy of model outputs for illicit distillation, without degrading the experience for legitimate customers."

I often ask Claude to reason out loud, and this indicates that instead of explicitly blocking flagged requests, the model's output will be purposely degraded.

by k1musab1

2/23/2026 at 8:47:58 PM

Oh the hypocrisy.

by noravux