alt.hn

2/12/2026 at 9:16:33 PM

Google identifies over 100k prompts used in distillation attacks

https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use

by carterpeterson

2/12/2026 at 9:26:09 PM

> Google DeepMind and GTIG have identified an increase in model extraction attempts or "distillation attacks," a method of intellectual property theft that violates Google's terms of service.

That’s rich considering the source of training data for these models.

Maybe that’s the outcome of the IP theft lawsuits currently in play. If you trained on stolen data, then anyone can distill your model.

I doubt it will play out that way though.

by bronco21016

2/13/2026 at 2:33:34 AM

"Distillation attack" feels like a loaded term for what is essentially the same kind of scraping these models are built on in the first place.

by RestartKernel

2/13/2026 at 2:54:56 PM

It's hard to see this as being as important as the article suggests. Just another "shoe on the other foot" situation.

by p3p2o