Biblio
Filters: Author is He, Can
Active DNN IP Protection: A Novel User Fingerprint Management and DNN Authorization Control Technique. 2020. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). pp. 975-982.
The training process of a deep learning model is costly. As such, a deep learning model can be treated as intellectual property (IP) of the model creator. However, a pirate can illegally copy, redistribute, or abuse the model without permission. In recent years, a few Deep Neural Network (DNN) IP protection works have been proposed. However, most existing works passively verify the copyright of the model after piracy occurs and lack user identity management, so they cannot provide commercial copyright management functions. In this paper, a novel user fingerprint management and DNN authorization control technique based on backdoors is proposed to provide active DNN IP protection. The proposed method can not only verify the ownership of the model but also authenticate and manage each user's unique identity, providing a commercially applicable DNN IP management mechanism. Experimental results on the CIFAR-10, CIFAR-100, and Fashion-MNIST datasets show that the proposed method achieves a high detection rate for user authentication (up to 100% on all three datasets). Illegal users with forged fingerprints cannot pass authentication, as the detection rates are 0% on all three datasets. The model owner can verify ownership by triggering the backdoor with high confidence. In addition, the accuracy drops are only 0.52%, 1.61%, and -0.65% on CIFAR-10, CIFAR-100, and Fashion-MNIST, respectively, which indicates that the proposed method does not affect the performance of the DNN models. The proposed method is also robust to model fine-tuning and pruning attacks. The detection rates for owner verification on CIFAR-10, CIFAR-100, and Fashion-MNIST are all 100% after a model pruning attack, and are 90%, 83%, and 93%, respectively, after a model fine-tuning attack, on the premise that the attacker wants to preserve the accuracy of the model.
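The abstract describes authentication by checking whether a user-specific backdoor trigger steers the model to a designated label. The following is a minimal sketch of that general idea only, not the paper's implementation; it assumes a PyTorch image classifier, and the names (stamp_trigger, detection_rate, target_label) and the 4x4 patch trigger are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): backdoor-trigger-based user authentication.
# A legitimate fingerprint trigger should drive the model to the designated label on
# (nearly) all inputs; a forged trigger should not.
import torch
import torch.nn as nn


def stamp_trigger(images: torch.Tensor, trigger: torch.Tensor, top: int, left: int) -> torch.Tensor:
    """Overlay a small user-specific trigger patch onto a batch of images (N, C, H, W)."""
    stamped = images.clone()
    h, w = trigger.shape[-2:]
    stamped[:, :, top:top + h, left:left + w] = trigger
    return stamped


@torch.no_grad()
def detection_rate(model: nn.Module, images: torch.Tensor, trigger: torch.Tensor, target_label: int) -> float:
    """Fraction of triggered inputs classified as the designated backdoor label."""
    model.eval()
    preds = model(stamp_trigger(images, trigger, top=0, left=0)).argmax(dim=1)
    return (preds == target_label).float().mean().item()


if __name__ == "__main__":
    # Toy usage with random data and an untrained stand-in classifier, just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for a CIFAR-10 model
    images = torch.rand(8, 3, 32, 32)                                # batch of CIFAR-sized inputs
    trigger = torch.rand(3, 4, 4)                                    # hypothetical 4x4 fingerprint patch
    print(f"detection rate: {detection_rate(model, images, trigger, target_label=3):.2f}")
```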