Biblio

Filters: Author is Liu, Weiqiang
2021-08-11
Xue, Mingfu, Wu, Zhiyu, He, Can, Wang, Jian, Liu, Weiqiang.  2020.  Active DNN IP Protection: A Novel User Fingerprint Management and DNN Authorization Control Technique. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :975–982.
The training process of a deep learning model is costly. As such, a deep learning model can be treated as intellectual property (IP) of the model creator. However, a pirate can illegally copy, redistribute, or abuse the model without permission. In recent years, a few Deep Neural Network (DNN) IP protection works have been proposed. However, most existing works passively verify the copyright of the model after piracy occurs and lack user identity management, and thus cannot provide commercial copyright management functions. In this paper, a novel user fingerprint management and DNN authorization control technique based on backdoors is proposed to provide active DNN IP protection. The proposed method can not only verify the ownership of the model, but can also authenticate and manage each user's unique identity, so as to provide a commercially applicable DNN IP management mechanism. Experimental results on the CIFAR-10, CIFAR-100, and Fashion-MNIST datasets show that the proposed method achieves a high detection rate for user authentication (up to 100% on all three datasets). Illegal users with forged fingerprints cannot pass authentication, as the detection rates are all 0% on the three datasets. The model owner can verify his ownership, since he can trigger the backdoor with high confidence. In addition, the accuracy drops are only 0.52%, 1.61%, and -0.65% on CIFAR-10, CIFAR-100, and Fashion-MNIST, respectively, which indicates that the proposed method does not affect the performance of the DNN models. The proposed method is also robust to model fine-tuning and pruning attacks. The detection rates for owner verification on CIFAR-10, CIFAR-100, and Fashion-MNIST are all 100% after a model pruning attack, and are 90%, 83%, and 93%, respectively, after a model fine-tuning attack, on the premise that the attacker wants to preserve the accuracy of the model.
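
A minimal sketch of the backdoor-based fingerprinting idea summarized in this abstract, assuming each user is enrolled with a trigger pattern and an assigned label; all names here (stamp_fingerprint, detection_rate, the toy corner trigger) are hypothetical illustrations, not the authors' implementation:

```python
# Hedged sketch: authenticate a user by checking whether the model maps
# inputs stamped with that user's trigger pattern to the label assigned
# at enrollment. A forged fingerprint should yield a near-zero rate.
import numpy as np

def stamp_fingerprint(images, trigger, mask):
    """Overlay a user-specific trigger pattern onto input images.

    images:  (N, H, W, C) array in [0, 1]
    trigger: (H, W, C) pattern carrying the user's fingerprint pixels
    mask:    (H, W, 1) binary map marking which pixels the trigger owns
    """
    return images * (1 - mask) + trigger * mask

def detection_rate(predict, images, trigger, mask, assigned_label):
    """Fraction of trigger-stamped inputs classified as the enrolled
    label; a high rate authenticates the user, a low rate rejects."""
    stamped = stamp_fingerprint(images, trigger, mask)
    preds = predict(stamped)  # model's predicted class ids, shape (N,)
    return float(np.mean(preds == assigned_label))

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Toy stand-in for a backdoored DNN: it has "learned" to emit label 7
    # whenever the bright 3x3 corner trigger is present.
    def toy_predict(x):
        has_trigger = x[:, :3, :3, :].mean(axis=(1, 2, 3)) > 0.9
        return np.where(has_trigger, 7, 0)

    images = rng.random((16, 32, 32, 3))
    mask = np.zeros((32, 32, 1)); mask[:3, :3, 0] = 1.0
    trigger = np.ones((32, 32, 3))       # legitimate user's fingerprint
    forged = np.full((32, 32, 3), 0.2)   # forged fingerprint

    print(detection_rate(toy_predict, images, trigger, mask, 7))  # ~1.0
    print(detection_rate(toy_predict, images, forged, mask, 7))   # ~0.0
```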
2020-06-19
Gu, Chongyan, Chang, Chip Hong, Liu, Weiqiang, Yu, Shichao, Ma, Qingqing, O'Neill, Maire.  2019.  A Modeling Attack Resistant Deception Technique for Securing PUF based Authentication. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1–6.

Due to practical constraints in preventing phishing through public networks or insecure communication channels, a simple physical unclonable function (PUF)-based authentication protocol with unrestricted queries and transparent responses is vulnerable to modeling and replay attacks. In this paper, we present a PUF-based authentication method to mitigate the practical limitations in applications where a resource-rich server authenticates a device, with no strong restriction imposed on the type of PUF design or any additional protection on the binary channel used for the authentication. Our scheme uses an active deception protocol to prevent machine learning (ML) attacks on a device. The monolithic system makes collection of challenge-response pairs (CRPs) easy for model building during enrollment, but prohibitively time-consuming upon device deployment. A genuine server can perform a mutual authentication with the device at any time with a combined fresh challenge contributed by both the server and the device. The messages exchanged in the clear do not expose the authentic CRPs. The false PUF multiplexing is fortified against prediction of waiting time by doubling the time penalty for every unsuccessful authentication.
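
A minimal sketch of the deterrence mechanism this abstract describes, assuming a combined fresh challenge from both parties and a response delay that doubles after every failed authentication; all names (DeceptiveDevice, base_delay_s, the XOR challenge mixing, the keyed-hash stand-in for a PUF) are hypothetical illustrations, not the authors' protocol messages:

```python
# Hedged sketch: the device doubles its time penalty after each failed
# authentication, so bulk CRP harvesting after deployment becomes
# prohibitively slow while a genuine server is barely delayed.
import hashlib
import os
import time

class DeceptiveDevice:
    def __init__(self, secret: bytes, base_delay_s: float = 0.01):
        self.secret = secret          # stands in for the PUF's physics
        self.failures = 0
        self.base_delay_s = base_delay_s

    def _puf(self, challenge: bytes) -> bytes:
        # Keyed hash as a stand-in for a real PUF's challenge-response map.
        return hashlib.sha256(self.secret + challenge).digest()

    def respond(self, server_nonce: bytes) -> tuple[bytes, bytes]:
        # Wait 2^failures * base delay before answering: each failed
        # attempt doubles the penalty, defeating waiting-time prediction.
        time.sleep(self.base_delay_s * (2 ** self.failures))
        device_nonce = os.urandom(16)
        # Combined fresh challenge contributed by both parties, so the
        # authentic CRP is never exposed in the clear.
        challenge = bytes(a ^ b for a, b in zip(server_nonce, device_nonce))
        return device_nonce, self._puf(challenge)

    def record_result(self, success: bool) -> None:
        # Successful mutual authentication resets the penalty clock.
        self.failures = 0 if success else self.failures + 1
```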