2/12/2026 at 9:26:09 PM
> Google DeepMind and GTIG have identified an increase in model extraction attempts or "distillation attacks," a method of intellectual property theft that violates Google's terms of service.

That's rich considering the source of training data for these models.
Maybe that’s the outcome of the IP theft lawsuits currently in play. If you trained on stolen data, then anyone can distill your model.
I doubt it will play out that way though.
by bronco21016