Google identifies over 100k prompts used in distillation attacks

(cloud.google.com)

7 points | by carterpeterson 17 hours ago

2 comments

  • bronco21016 17 hours ago

    > Google DeepMind and GTIG have identified an increase in model extraction attempts or "distillation attacks," a method of intellectual property theft that violates Google's terms of service.

    That’s rich considering the source of training data for these models.

    Maybe that’s the outcome of the IP theft lawsuits currently in play. If you trained on stolen data, then anyone can distill your model.

    I doubt it will play out that way though.

  • RestartKernel 12 hours ago

    "Distillation attack" feels like a loaded term for what is essentially the same kind of scraping these models are built on in the first place.