Abstract
Trained machine learning models are core components of proprietary products. Such products are either delivered as a software package (containing the trained model) or deployed in the cloud behind a restricted API. In this ML-as-a-Service (MLaaS) model, users are charged per query or per hour, generating revenue for the model owners. Models deployed in the cloud, however, are vulnerable to model duplication attacks: by making repeated requests to the API, an attacker can clone the functionality of the black-box model hidden behind it. In the worst case, attackers can sell the cloned model or use it in their own products.
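The following is a minimal, self-contained sketch of the generic extraction loop described above, with the "victim" simulated locally and treated strictly as a black box (labels only). All names, models, and parameters here are illustrative assumptions, not the speakers' technique or any specific MLaaS API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# --- Victim side (hidden behind the API in a real attack) ------------------
X_train, y_train = make_classification(n_samples=5000, n_features=20, random_state=0)
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def query_api(x: np.ndarray) -> np.ndarray:
    """Stand-in for the per-query billed prediction endpoint."""
    return victim.predict(x)

# --- Attacker side ----------------------------------------------------------
N_QUERIES = 10_000                                   # attack budget
X_attack = np.random.normal(size=(N_QUERIES, 20))    # attacker-chosen inputs
y_attack = query_api(X_attack)                       # labels obtained via the API

# Train a local surrogate on the stolen input/label pairs.
clone = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
clone.fit(X_attack, y_attack)

# Measure how closely the clone mimics the victim on fresh inputs.
X_test, _ = make_classification(n_samples=1000, n_features=20, random_state=1)
agreement = (clone.predict(X_test) == victim.predict(X_test)).mean()
print(f"clone/victim agreement: {agreement:.2%}")
```

The attacker never sees the victim's architecture, weights, or training data; the clone is built entirely from query/response pairs, which is what makes per-query API access an extraction surface.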
The speakers will present a modification to the traditional approach attackers commonly use to train their cloned models. Their research highlights the pressing need to revise current countermeasures for MLaaS, an important and interesting area for future work.