Dataless Black-Box Model Comparison



Abstract

At a time when training new machine learning models is extremely time-consuming and resource-intensive, and selling these models or access to them is more popular than ever, it is important to consider ways of protecting them against theft. In this paper, we present a method for estimating the similarity or distance between two black-box models. Our approach does not depend on knowledge of the specific training data and can therefore be used to identify copies of machine learning models or stolen models. It can also be applied to detect license violations regarding the use of datasets. We validate the proposed method empirically on the CIFAR-10 and MNIST datasets using convolutional neural networks, generative adversarial networks, and support vector machines, and show that it clearly distinguishes between models trained on different datasets. We also give theoretical foundations for our work.
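To make the setting concrete, the following is a minimal sketch of what a dataless, black-box comparison of two models could look like: both models are treated purely as query oracles, and the fraction of randomly generated probe inputs on which their predictions disagree serves as a crude distance proxy. This is only an illustration of the problem setting, not the method proposed in the paper; the function name, the uniform probe distribution, and the CIFAR-10-shaped inputs are assumptions made for this example.

# Sketch only: disagreement rate on random probes as a black-box distance
# proxy. Not the authors' method; probe shape and distribution are assumed.
import numpy as np

def disagreement_distance(model_a, model_b, input_shape=(3, 32, 32),
                          num_probes=10_000, seed=0):
    """Estimate a distance between two black-box classifiers.

    model_a, model_b: callables that take a batch of inputs with shape
        (n, *input_shape) and return predicted class labels of shape (n,).
    Returns the fraction of probe inputs on which the models disagree
    (0.0 = identical behaviour on the probes, 1.0 = total disagreement).
    """
    rng = np.random.default_rng(seed)
    probes = rng.uniform(0.0, 1.0, size=(num_probes, *input_shape)).astype(np.float32)
    preds_a = np.asarray(model_a(probes))
    preds_b = np.asarray(model_b(probes))
    return float(np.mean(preds_a != preds_b))

if __name__ == "__main__":
    # Two toy "black boxes" standing in for deployed models, queried only
    # through their prediction interface.
    model_a = lambda x: (x.reshape(len(x), -1).mean(axis=1) > 0.5).astype(int)
    model_b = lambda x: (x.reshape(len(x), -1)[:, 0] > 0.5).astype(int)
    print(f"estimated distance: {disagreement_distance(model_a, model_b):.3f}")

Under such a proxy, a suspected copy of a model would be expected to yield a distance close to zero, while independently trained models would typically disagree on a much larger fraction of probes.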

About the authors

C. Theiss

Computer Vision Group

Corresponding author
Email: christoph.theiss@uni-jena.de
Jena, Germany

C. Brust

Computer Vision Group

Jena, Germany

J. Denzler

Computer Vision Group

Jena, Germany


Copyright © Pleiades Publishing, Ltd., 2018