Biblio

Found 789 results

Filters: Keyword is learning (artificial intelligence)
2020-08-17
Regol, Florence, Pal, Soumyasundar, Coates, Mark.  2019.  Node Copying for Protection Against Graph Neural Network Topology Attacks. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). :709–713.
Adversarial attacks can affect the performance of existing deep learning models. With the increased interest in graph-based machine learning techniques, there have been investigations which suggest that these models are also vulnerable to attacks. In particular, corruptions of the graph topology can severely degrade the performance of graph-based learning algorithms. This is because the prediction capability of these algorithms relies mostly on the similarity structure imposed by the graph connectivity. Therefore, detecting the location of the corruption and correcting the induced errors becomes crucial. There has been some recent work that tackles the detection problem; however, these methods do not address the effect of the attack on the downstream learning task. In this work, we propose an algorithm that uses node copying to mitigate the degradation in classification caused by adversarial attacks. The proposed methodology is applied only after the model for the downstream task is trained, and the added computation cost scales well for large graphs. Experimental results show the effectiveness of our approach on several real-world datasets.
Yao, Yepeng, Su, Liya, Lu, Zhigang, Liu, Baoxu.  2019.  STDeepGraph: Spatial-Temporal Deep Learning on Communication Graphs for Long-Term Network Attack Detection. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :120–127.
Network communication data are high-dimensional and spatiotemporal, and their information content is often degraded by common traffic analysis methods. For long-term network attack detection based on network flows, it is important to extract a discriminative, high-dimensional intrinsic representation of such flows. This work focuses on a hybrid deep neural network design that combines a convolutional neural network (CNN) and long short-term memory (LSTM) with graph similarity measures to learn high-dimensional representations from the network traffic. In particular, given a set of network flows, we first construct a temporal communication graph and then compute graph kernel matrices. Having obtained the kernel matrices, we use the kernel values between graphs to calculate a characterization vector for each graph via graph signal processing. This vector can be regarded as a kernel-based similarity embedding of the graph that integrates structural similarity information and leverages an efficient graph kernel based on the graph Laplacian matrix. Our approach exploits graph structures as additional prior information, the graph Laplacian matrix for feature extraction, and hybrid deep learning models for long-term information learning on communication graphs. Experiments on two real-world network attack datasets show that our approach can extract more discriminative representations, leading to improved accuracy in a supervised classification task. The experimental results show that our method increases the overall accuracy by approximately 10%-15%.
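The abstract's pipeline (communication graph, Laplacian-based kernel, similarity embedding) can be illustrated in miniature. The numpy sketch below is not the paper's implementation: it substitutes a simple spectral signature (the smallest eigenvalues of the normalized Laplacian) and an RBF kernel for the paper's graph kernels, and assumes each graph arrives as a dense adjacency matrix.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def graph_signature(A, k=8):
    """Use the k smallest Laplacian eigenvalues as a simple spectral signature."""
    eigvals = np.linalg.eigvalsh(normalized_laplacian(A))
    sig = np.zeros(k)
    sig[:min(k, len(eigvals))] = eigvals[:k]
    return sig

def kernel_embedding(graphs, gamma=1.0):
    """RBF kernel matrix over spectral signatures; row i serves as the
    kernel-based similarity embedding of graph i."""
    sigs = np.stack([graph_signature(A) for A in graphs])
    sq_dists = ((sigs[:, None, :] - sigs[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)
```

In the paper's design, embeddings of this kind are what feed the downstream CNN-LSTM model.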
2020-08-13
Zola, Francesco, Eguimendia, Maria, Bruse, Jan Lukas, Orduna Urrutia, Raul.  2019.  Cascading Machine Learning to Attack Bitcoin Anonymity. 2019 IEEE International Conference on Blockchain (Blockchain). :10–17.
Bitcoin is a decentralized, pseudonymous cryptocurrency that is one of the most used digital assets to date. Its unregulated nature and the inherent anonymity of its users have led to a dramatic increase in its use for illicit activities. This calls for the development of novel methods capable of characterizing different entities in the Bitcoin network. In this paper, a method to attack Bitcoin anonymity is presented, leveraging a novel cascading machine learning approach that requires only a few features directly extracted from Bitcoin blockchain data. Cascading, used to enrich entity information with data from previous classifications, led to considerably improved multi-class classification performance, with excellent precision values close to 1.0 for each considered class. Final models were implemented and compared using different machine learning algorithms and showed significantly higher accuracy than their baseline implementations. Our approach can contribute to the development of effective tools for Bitcoin entity characterization, which may assist in uncovering illegal activities.
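As a rough illustration of the cascading idea (not the paper's exact feature set or models), the sketch below appends a first-stage classifier's class probabilities to each entity's features before training a second stage; the random arrays are placeholders for features extracted from blockchain data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: per-entity features extracted from blockchain data; y: entity class labels.
X, y = np.random.rand(1000, 6), np.random.randint(0, 4, 1000)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: a baseline classifier over the raw features.
stage1 = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Stage 2 ("cascading"): enrich each entity's features with the class
# probabilities predicted by stage 1, then retrain.
X_tr2 = np.hstack([X_tr, stage1.predict_proba(X_tr)])
X_te2 = np.hstack([X_te, stage1.predict_proba(X_te)])
stage2 = RandomForestClassifier(random_state=0).fit(X_tr2, y_tr)
print("cascaded accuracy:", stage2.score(X_te2, y_te))
```

In practice the stage-1 probabilities used for training would be produced out-of-fold to avoid leaking training labels into the second stage.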

Shao, Sicong, Tunc, Cihan, Al-Shawi, Amany, Hariri, Salim.  2019.  One-Class Classification with Deep Autoencoder Neural Networks for Author Verification in Internet Relay Chat. 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA). :1–8.
Social networks are highly preferred for expressing opinions, sharing information, and communicating with others on arbitrary topics. The downside, however, is that many cybercriminals leverage social networks for cyber-crime. Internet Relay Chat (IRC) is an important social network that can grant anonymity to users by allowing them to connect to channels without a sign-up process. As a result, IRC has been the playground of hackers and anonymous users for operations such as hacking, cracking, and carding. Hence, it is urgent to study effective methods that can identify the authors behind IRC messages. In this paper, we design an autonomic IRC monitoring system that performs recursive deep learning to classify the threat levels of messages, and we develop a novel author verification approach based on one-class classification with deep autoencoder neural networks. The experimental results show that our approach can successfully perform effective author verification for IRC users.
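A minimal sketch of the one-class verification idea, assuming hypothetical 64-dimensional stylometric feature vectors from a single author's messages: train a deep autoencoder on that author alone and accept a new message only if its reconstruction error stays below a threshold fit on the training data. The architecture and the 95th-percentile threshold are illustrative choices, not the paper's.

```python
import numpy as np
import tensorflow as tf

# x_train: stylometric feature vectors from one author's IRC messages.
x_train = np.random.rand(500, 64).astype("float32")  # placeholder data

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(8, activation="relu"),    # bottleneck
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(64, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=20, batch_size=32, verbose=0)

# One-class decision: accept a message as the claimed author's only if its
# reconstruction error falls below a threshold fit on the training data.
errs = np.mean((autoencoder.predict(x_train) - x_train) ** 2, axis=1)
threshold = np.percentile(errs, 95)

def verify(x):
    err = np.mean((autoencoder.predict(x[None, :]) - x) ** 2)
    return err < threshold
```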
Zhang, Yueqian, Kantarci, Burak.  2019.  Invited Paper: AI-Based Security Design of Mobile Crowdsensing Systems: Review, Challenges and Case Studies. 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE). :17–1709.
Mobile crowdsensing (MCS) is a distributed sensing paradigm that uses a variety of built-in sensors in smart mobile devices to enable ubiquitous acquisition of sensory data from surroundings. However, the non-dedicated nature of MCS leaves it vulnerable to malicious participants who can compromise the availability of MCS components, particularly the servers and participants' devices. In this paper, we focus on Denial of Service attacks in MCS, where malicious participants submit illegitimate task requests to the MCS platform to keep MCS servers busy while having sensing devices expend energy needlessly. After reviewing Artificial Intelligence-based security solutions for MCS systems, we focus on a typical location-based and energy-oriented DoS attack and present a security solution that applies ensemble machine learning techniques to identify illegitimate tasks and prevent personal devices from pointless energy consumption, so as to improve the availability of the whole system. Through simulations, we show that ensemble techniques are capable of identifying illegitimate and legitimate tasks, with gradient boosting appearing to be a preferable solution with an AUC performance higher than 0.88 in the precision-recall curve. We also investigate the impact of environmental settings on the detection performance to provide a clearer understanding of the model. Our performance results show that MCS task legitimacy decisions with high F-scores are possible for both illegitimate and legitimate tasks.
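A minimal scikit-learn sketch of the reported setup: train a gradient boosting classifier to separate legitimate from illegitimate tasks and score it by the area under the precision-recall curve. The five-feature random data is a placeholder for real task attributes.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# X: task features (e.g., location, requested duration); y: 1 = illegitimate.
X, y = np.random.rand(2000, 5), np.random.randint(0, 2, 2000)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
# Area under the precision-recall curve, the metric the paper reports.
print("PR-AUC:", average_precision_score(y_te, scores))
```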
Sadeghi, Koosha, Banerjee, Ayan, Gupta, Sandeep K. S..  2019.  An Analytical Framework for Security-Tuning of Artificial Intelligence Applications Under Attack. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :111–118.
Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications, such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or algorithms in AI applications can lead to misclassification or misprediction, which can fail the applications. For each dataset, the parameters of ML algorithms should be tuned separately to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time-consuming and does not guarantee the optimal result. To address this, some research suggests an analytical approach to tune the ML parameters for maximum accuracy. However, none of these works consider the ML performance under attack in their tuning process. This paper proposes an analytical framework for tuning the ML parameters to be secure against attacks while keeping accuracy high. The framework finds the optimal set of parameters by defining a novel objective function which takes into account the test results of both ML accuracy and its security against attacks. To validate the framework, an AI application is implemented to recognize whether a subject's eyes are open or closed by applying the k-Nearest Neighbors (kNN) algorithm to electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, as the two main parameters of kNN, are chosen for tuning. The input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application. An exhaustive search approach is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal configuration of kNN for the EEG dataset, which leads to 83.75% classification accuracy and reduces the attack success rate to 5.21%.
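A sketch of the security-tuning loop under stated assumptions: the joint objective (a weighted difference of accuracy and attack success rate, with weight alpha) and the random-noise perturbation attack are simplifications standing in for the paper's objective function and attack model.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def attack_success_rate(clf, X, y, eps=0.1, rng=np.random.default_rng(0)):
    """Fraction of correctly classified samples flipped by a random
    input-perturbation attack (a stand-in for the paper's attack model)."""
    X_adv = X + eps * rng.standard_normal(X.shape)
    clean_ok = clf.predict(X) == y
    if not clean_ok.any():
        return 0.0
    return np.mean(clf.predict(X_adv)[clean_ok] != y[clean_ok])

def tune(X_tr, y_tr, X_te, y_te, alpha=0.5):
    """Exhaustive search over (k, metric) maximizing a joint objective of
    accuracy and robustness against the perturbation attack."""
    best, best_score = None, -np.inf
    for k in range(1, 60, 2):
        for metric in ("euclidean", "manhattan", "cosine"):
            clf = KNeighborsClassifier(n_neighbors=k, metric=metric).fit(X_tr, y_tr)
            acc = clf.score(X_te, y_te)
            asr = attack_success_rate(clf, X_te, y_te)
            score = alpha * acc - (1 - alpha) * asr
            if score > best_score:
                best, best_score = (k, metric), score
    return best
```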
2020-08-10
Wasi, Sarwar, Shams, Sarmad, Nasim, Shahzad, Shafiq, Arham.  2019.  Intrusion Detection Using Deep Learning and Statistical Data Analysis. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1–5.
Innovation and creativity have played an important role in the development of every field of life, but they have also created several problems. Intrusion detection is one of those problems, and it has become more difficult with the advancement of computer networks; multiple researchers with multiple techniques have come forward to solve this crucial issue, but network security is still a challenge. In our research, we detect intrusions using a deep learning algorithm combined with statistical analysis of the KDD Cup 99 dataset. First, we apply statistical analysis to the given dataset to generate a simplified form of the data, so that a less complex binary classification model based on an artificial neural network can be applied for data classification. Our approach decreases the complexity of the system and improves its response time.
Kwon, Hyun, Yoon, Hyunsoo, Park, Ki-Woong.  2019.  Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :136–139.
Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, a poisoning attack is a serious threat to a DNN's security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, an attacker who wants only nuclear facilities to go unrecognized may need to intentionally prevent a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only a chosen class in the model. The proposed method reduces the accuracy of the chosen class by training on malicious data corresponding to that class, while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% in MNIST and 55.3% in CIFAR10, while maintaining the accuracy of the remaining classes.
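A minimal numpy sketch of the selective-poisoning idea: corrupt training labels only for the chosen class, leaving all other classes untouched. The paper injects crafted malicious training data; label corruption confined to the target class, at a hypothetical 20% rate, is used here as a simple stand-in.

```python
import numpy as np

def selective_poison(x_train, y_train, target_class, rate=0.2,
                     rng=np.random.default_rng(0)):
    """Return a poisoned copy of the training labels in which a fraction of
    the target class is re-labelled to random other classes, leaving the
    remaining classes intact."""
    y_poison = y_train.copy()
    idx = np.flatnonzero(y_train == target_class)
    chosen = rng.choice(idx, size=int(rate * len(idx)), replace=False)
    classes = np.unique(y_train)
    for i in chosen:
        y_poison[i] = rng.choice(classes[classes != target_class])
    return x_train, y_poison
```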

Hajdu, Gergo, Minoso, Yaclaudes, Lopez, Rafael, Acosta, Miguel, Elleithy, Abdelrahman.  2019.  Use of Artificial Neural Networks to Identify Fake Profiles. 2019 IEEE Long Island Systems, Applications and Technology Conference (LISAT). :1–4.
In this paper, we use machine learning, namely an artificial neural network, to determine the chances that a Facebook friend request is authentic. We also outline the classes and libraries involved. Furthermore, we discuss the sigmoid function and how the weights are determined and used. Finally, we consider the parameters of the social network page that are of utmost importance in the provided solution.
2020-08-07
De Abreu, Sergio.  2019.  A Feasibility Study on Machine Learning Techniques for APT Detection and Protection in VANETs. 2019 IEEE 12th International Conference on Global Security, Safety and Sustainability (ICGS3). :212–212.
It is estimated that by 2030, 1 in 4 vehicles on the road will be driverless, with adoption rates increasing this figure substantially over the next few decades.
Hasan, Kamrul, Shetty, Sachin, Ullah, Sharif.  2019.  Artificial Intelligence Empowered Cyber Threat Detection and Protection for Power Utilities. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :354–359.
Cyber threats have increased extensively during the last decade, especially in smart grids, and cybercriminals have become more sophisticated. Current security controls are not enough to defend networks from highly skilled cybercriminals, who have learned how to evade the most sophisticated tools, such as Intrusion Detection and Prevention Systems (IDPS), while Advanced Persistent Threats (APTs) are almost invisible to current tools. Fortunately, the application of Artificial Intelligence (AI) may increase the detection rate of IDPS systems, and Machine Learning (ML) techniques can mine data to detect the different attack stages of APTs. However, the implementation of AI may bring other risks, and cybersecurity experts need to find a balance between the risks and the benefits.
Torkzadehmahani, Reihaneh, Kairouz, Peter, Paten, Benedict.  2019.  DP-CGAN: Differentially Private Synthetic Data and Label Generation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :98–104.
Generative Adversarial Networks (GANs) are among the best-known models for generating synthetic data, including images, especially for research communities that cannot use original sensitive datasets because they are not publicly accessible. One of the main challenges in this area is to preserve the privacy of individuals who participate in the training of the GAN models. To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy, which improves the performance of the model while preserving the privacy of the training dataset. DP-CGAN generates both synthetic data and corresponding labels and leverages the recently introduced Rényi differential privacy accountant to track the spent privacy budget. The experimental results show that DP-CGAN can generate visually and empirically promising results on the MNIST dataset with a single-digit epsilon parameter in differential privacy.
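The core clip-and-perturb step can be shown on a much simpler model than a conditional GAN. The sketch below applies per-example gradient clipping plus Gaussian noise to logistic regression; DP-CGAN applies the analogous treatment to discriminator gradients and tracks the budget with the Rényi accountant, which is omitted here.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip=1.0, sigma=1.0,
                rng=np.random.default_rng(0)):
    """One differentially private step for logistic regression: clip each
    per-example gradient to norm `clip`, sum, then add Gaussian noise."""
    grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))
        g = (p - y) * x                                     # per-example gradient
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clipping
        grads.append(g)
    noisy_sum = np.sum(grads, axis=0) + rng.normal(0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(X_batch)
```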
Ramezanian, Sara, Niemi, Valtteri.  2019.  Privacy Preserving Cyberbullying Prevention with AI Methods in 5G Networks. 2019 25th Conference of Open Innovations Association (FRUCT). :265–271.
Children and teenagers who have been victims of bullying can suffer its psychological effects for a lifetime. With the growth of online social media, cyberbullying incidents have increased as well. In this paper we discuss how we can detect cyberbullying with AI techniques, using term frequency-inverse document frequency (TF-IDF). We label messages as benign or bully. We want our method of cyberbullying detection to be privacy-preserving, so that subscribers' benign messages are not revealed to the operator. Moreover, the operator labels subscribers as normal, bully, or victim, and utilizes policy control in 5G networks to protect victims of cyberbullying from harmful traffic.
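A minimal scikit-learn sketch of the TF-IDF classification step (without the privacy-preserving machinery the paper adds): vectorize messages with TF-IDF and fit a classifier to label them benign or bully. The four example messages and their labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages: 1 = bully, 0 = benign.
messages = ["you are awesome", "nobody likes you",
            "see you at lunch", "go away loser"]
labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)
print(clf.predict(["everyone hates you"]))
```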
Chen, Huili, Cammarota, Rosario, Valencia, Felipe, Regazzoni, Francesco.  2019.  PlaidML-HE: Acceleration of Deep Learning Kernels to Compute on Encrypted Data. 2019 IEEE 37th International Conference on Computer Design (ICCD). :333–336.
Machine Learning as a Service (MLaaS) is becoming a popular practice where Service Consumers, e.g., end-users, send their data to a ML Service and receive the prediction outputs. However, the emerging usage of MLaaS has raised severe privacy concerns about users' proprietary data. Privacy-Preserving Machine Learning (PPML) techniques aim to incorporate cryptographic primitives such as Homomorphic Encryption (HE) and Multi-Party Computation (MPC) into ML services to address privacy concerns from a technology standpoint. Existing PPML solutions have not been widely adopted in practice due to their assumed high overhead and integration difficulty within various ML front-end frameworks as well as hardware backends. In this work, we propose PlaidML-HE, the first end-to-end HE compiler for PPML inference. Leveraging the capability of Domain-Specific Languages, PlaidML-HE enables automated generation of HE kernels across diverse types of devices. We evaluate the performance of PlaidML-HE on different ML kernels and demonstrate that PlaidML-HE greatly reduces the overhead of the HE primitive compared to the existing implementations.

Zhu, Tianqing, Yu, Philip S..  2019.  Applying Differential Privacy Mechanism in Artificial Intelligence. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1601–1609.
Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, several new problems, such as privacy violations, security issues, and questions of effectiveness, have emerged. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Therefore, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, showing that current AI can benefit from differential privacy mechanisms. In addition, the previous usage of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models is discussed, suggesting several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver an initial idea of how to integrate AI with differential privacy mechanisms and to explore more possibilities for improving AI's performance.
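As a concrete reminder of the basic building block such mechanisms compose, here is the Laplace mechanism: adding noise scaled to sensitivity/epsilon yields epsilon-differential privacy for a numeric query. This is standard differential privacy, not a mechanism specific to the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon,
                      rng=np.random.default_rng(0)):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# Example: privately release a count query (sensitivity 1).
print(laplace_mechanism(42, sensitivity=1.0, epsilon=0.5))
```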
Lou, Xin, Tran, Cuong, Yau, David K.Y., Tan, Rui, Ng, Hongwei, Fu, Tom Zhengjia, Winslett, Marianne.  2019.  Learning-Based Time Delay Attack Characterization for Cyber-Physical Systems. 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1–6.
Cyber-physical systems (CPSes) rely on computing and control techniques to achieve system safety and reliability. However, recent attacks show that these techniques are vulnerable once cyber-attackers have bypassed air gaps, and the attacks may cause service disruptions or even physical damage. This paper designs a built-in attack characterization scheme for one general type of cyber-attack in CPS, which we call the time delay attack, that delays the transmission of system control commands. We use recurrent neural networks in deep learning to estimate the delay values from the input trace. Specifically, to deal with long time-sequence data, we design the deep learning model using stacked bidirectional long short-term memory (LSTM) units. The proposed approach is tested using data generated from a power plant control system. The results show that the LSTM-based deep learning approach works well based on data traces from three sensor measurements, i.e., temperature, pressure, and power generation, in the power plant control system. Moreover, we show that the proposed approach outperforms a baseline approach based on k-nearest neighbors.
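A minimal tf.keras sketch of the stacked bidirectional LSTM regressor described above, assuming hypothetical input windows of 200 time steps over the three sensor measurements; the layer widths are illustrative, not the paper's.

```python
import tensorflow as tf

# Input: windows of (timesteps, 3) sensor readings -- temperature, pressure,
# power generation; output: estimated delay value (regression).
model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True),
        input_shape=(200, 3)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),  # stacked BiLSTM
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```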
2020-08-03
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
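A simplified monitor in the spirit of PRADA (the paper's actual detector differs in its details): track the distance from each incoming query to its nearest previous query and raise an alarm when that distance distribution stops looking normal, since benign query streams tend to produce naturally spread distances while synthetic extraction queries do not.

```python
import numpy as np
from scipy.stats import shapiro

class QueryDistributionDetector:
    """Simplified PRADA-style monitor over a client's query stream."""
    def __init__(self, min_queries=30, alpha=0.05):
        self.queries, self.min_dists = [], []
        self.min_queries, self.alpha = min_queries, alpha

    def observe(self, q):
        """Record query q; return True when an alarm should be raised."""
        if self.queries:
            d = min(np.linalg.norm(q - p) for p in self.queries)
            self.min_dists.append(d)
        self.queries.append(q)
        if len(self.min_dists) >= self.min_queries:
            _, pvalue = shapiro(np.array(self.min_dists))
            return pvalue < self.alpha   # distribution deviates from normal
        return False
```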
Nakayama, Kiyoshi, Muralidhar, Nikhil, Jin, Chenrui, Sharma, Ratnesh.  2019.  Detection of False Data Injection Attacks in Cyber-Physical Systems using Dynamic Invariants. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1023–1030.
Modern cyber-physical systems are increasingly complex and vulnerable to attacks, such as false data injection, aimed at destabilizing and confusing the systems. We develop and evaluate an attack-detection framework aimed at learning a dynamic invariant network, i.e., data-driven temporal causal relationships between components of cyber-physical systems, and we evaluate its attack-detection performance relative to traditional anomaly detection approaches. In this paper, we introduce the Granger Causality based Kalman Filter with Adaptive Robust Thresholding (G-KART) as a framework for anomaly detection based on data-driven functional relationships between components in cyber-physical systems. In particular, we select power systems as a critical infrastructure with complex cyber-physical systems whose protection is an essential facet of national security. The system presented is capable of learning, with or without network topology, to detect false data injection attacks in power systems. Kalman filters are used to learn and update the dynamic state of each component in the power system and, in turn, monitor the component for malicious activity. The ego network of each node in the invariant graph is treated as an ensemble model of Kalman filters, each of which captures a subset of the node's interactions with other parts of the network. Finally, we also introduce an alerting mechanism to surface alerts about compromised nodes.
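The Kalman-residual core of such a detector can be sketched for a single scalar sensor stream: flag time steps whose innovation exceeds a multiple of its predicted standard deviation. G-KART's Granger-causality-selected ego networks and adaptive robust thresholding are omitted; q, r, and the 3-sigma rule here are illustrative settings.

```python
import numpy as np

def kalman_residual_alerts(z, q=1e-4, r=0.1, k_sigma=3.0):
    """Track a sensor stream z with a 1-D Kalman filter and flag samples
    whose innovation (residual) exceeds k_sigma predicted std deviations."""
    x, p = z[0], 1.0            # state estimate and its variance
    alerts = []
    for t, meas in enumerate(z[1:], start=1):
        p += q                  # predict (identity dynamics)
        innov = meas - x        # innovation / residual
        s = p + r               # innovation variance
        if abs(innov) > k_sigma * np.sqrt(s):
            alerts.append(t)    # possible injected value
        k = p / s               # Kalman gain, then update
        x += k * innov
        p *= (1 - k)
    return alerts
```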

Prasad, Mahendra, Tripathi, Sachin, Dahal, Keshav.  2019.  Wormhole Attack Detection in Ad Hoc Network Using Machine Learning Technique. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
In this paper, we explore the use of a machine learning technique for wormhole attack detection in ad hoc networks. This work is organized into three major tasks. The first task is the simulation of a wormhole attack in an ad hoc network environment with multiple wormhole tunnels. The next task is the characterization of packet attributes that leads to feature selection; accordingly, we perform data generation and data collection operations that provide a large-volume dataset. The final task applies a machine learning technique to wormhole attack detection. Previously, wormhole attacks have been detected using traditional approaches, among which Multirate-DelPHI shows the best results, with a detection rate of 90% and a false alarm rate of 20%. We conduct experiments and show that our method performs better across all statistical parameters, with a detection rate of 93.12% and a false alarm rate of 5.3%. Furthermore, we also report results on various statistical parameters such as precision, F-measure, MCC, and accuracy.

Al-Emadi, Sara, Al-Ali, Abdulla, Mohammad, Amr, Al-Ali, Abdulaziz.  2019.  Audio Based Drone Detection and Identification using Deep Learning. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :459–464.
In recent years, unmanned aerial vehicles (UAVs) have become increasingly accessible to the public due to their high availability, affordable prices, and improving technology. However, this raises great concern from both the cyber and physical security perspectives, since UAVs can be utilized for malicious activities that exploit vulnerabilities, such as spying on private properties or critical areas, or carrying dangerous objects such as explosives, which makes them a great threat to society. Drone identification is considered the first step in a multi-procedural process of securing physical infrastructure against this threat. In this paper, we present drone detection and identification methods using deep learning techniques such as the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Convolutional Recurrent Neural Network (CRNN). These algorithms are utilized to exploit the unique acoustic fingerprints of flying drones in order to detect and identify them. We compare the performance of the different neural networks on our dataset, which features recorded audio samples of drone activities. The major contribution of our work is to validate the usage of these drone detection and identification methodologies in real-life scenarios and to provide a robust comparison of the performance of different deep neural network algorithms for this application. In addition, we are releasing the dataset of drone audio clips to the research community for further analysis.
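A minimal sketch of the acoustic pipeline, assuming librosa for feature extraction: convert each clip to an MFCC "image" and feed it to a small CNN. The clip length, 40-coefficient MFCCs, three output classes, and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path, sr=22050, n_mfcc=40, duration=1.0):
    """Load a short audio clip and return a fixed-size MFCC array."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = np.pad(y, (0, max(0, int(sr * duration) - len(y))))  # pad short clips
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

# A small CNN over MFCC arrays (add a trailing channel axis before feeding);
# output classes might be e.g. {no drone, drone type A, drone type B}.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(40, 44, 1)),  # 40 MFCCs x ~44 frames
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```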
2020-07-30
He, Yongzhong, Zhao, Xiaojuan, Wang, Chao.  2019.  Privacy Mining of Large-scale Mobile Usage Data. 2019 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :81–86.
While enjoying the convenience brought by mobile phones, users have been exposed to a high risk of private information leakage. It is known that many applications on mobile devices read private data and send them to remote servers. However, how, when, and at what scale private data are leaked has not been investigated systematically in real-world scenarios. In this paper, a framework is proposed to analyze the usage data from mobile devices and the traffic data from the mobile network, and to perform comprehensive privacy leakage detection and privacy inference mining on a large scale of real-world mobile data. First, this paper sets up a training dataset and trains a privacy detection model on mobile traffic data. Then, classical machine learning tools are used to discover private usage patterns. Based on our experiments and data analysis, it is found that i) a large amount of private information is transmitted in plaintext, and some applications even transmit passwords in plaintext; ii) more privacy types are leaked in Android than in iOS, while GPS location is the most frequently leaked item on both Android and iOS; and iii) usage patterns are related to mobile device price. Through our experiments and analysis, it can be concluded that mobile privacy leakage is pervasive and serious.
Shey, James, Karimi, Naghmeh, Robucci, Ryan, Patel, Chintan.  2018.  Design-Based Fingerprinting Using Side-Channel Power Analysis for Protection Against IC Piracy. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :614–619.
Intellectual property (IP) and integrated circuit (IC) piracy are of increasing concern to IP/IC providers because of the globalization of IC design flows and supply chains. Such globalization is driven by the costs associated with the design, fabrication, and testing of integrated circuits, and it opens avenues for piracy. To protect designs against IC piracy, we propose a fingerprinting scheme based on side-channel power analysis and machine learning methods. The proposed method distinguishes ICs which realize a modified netlist, yet the same functionality, and it does not impose any hardware overhead. We specifically focus on the ability to detect minimal design variations, as quantified by the number of logic gates changed. The accuracy of the proposed scheme is greater than 96 percent, and typically 99 percent, in detecting one or more gate-level netlist changes. Additionally, the effect of temperature has been investigated as part of this work. Results show 95.4 percent accuracy in detecting the exact number of gate changes when the data and classifier use the same temperature, while training with different temperatures results in 33.6 percent accuracy. This shows the effectiveness of building temperature-dependent classifiers from simulations at known operating temperatures.

Cammarota, Rosario, Banerjee, Indranil, Rosenberg, Ofer.  2018.  Machine Learning IP Protection. 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). :1–3.
Machine learning, and specifically deep learning, is becoming a key technology component in application domains such as identity management, finance, automotive, and healthcare, to name a few. Proprietary machine learning models - Machine Learning IP - are developed and deployed at the network edge, on end devices, and in the cloud to maximize user experience. With the proliferation of applications embedding Machine Learning IPs, machine learning models and hyper-parameters become attractive to attackers and require protection. Major players in the semiconductor industry provide on-device mechanisms to protect the IP at rest and during execution from being copied, altered, reverse engineered, and abused by attackers. In this work we explore system security architecture mechanisms and their applications to Machine Learning IP protection.

Wang, Tianhao, Kerschbaum, Florian.  2019.  Attacks on Digital Watermarks for Deep Neural Networks. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2622–2626.
Training deep neural networks is a computationally expensive task. Furthermore, models are often derived from proprietary datasets that have been carefully prepared and labelled. Hence, creators of deep learning models want to protect their models against intellectual property theft. However, this is not always possible, since the model may, e.g., be embedded in a mobile app for fast response times. As a countermeasure, watermarks for deep neural networks have been developed that embed secret information into the model. This information can later be retrieved by the creator to prove ownership. Uchida et al. proposed the first such watermarking method. The advantage of their scheme is that it does not compromise the accuracy of the model prediction. However, in this paper we show that their technique modifies the statistical distribution of the model. Using this modification, we can not only detect the presence of a watermark, but even derive its embedding length and use this information to remove the watermark by overwriting it. We show analytically that our detection algorithm follows directly from their embedding algorithm, and we propose a possible countermeasure. Our findings should help to refine the definition of undetectability of watermarks for deep neural networks.
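For context, Uchida-style watermark bits are recovered by projecting a layer's flattened weights through a secret matrix and thresholding a sigmoid. The sketch below shows that extraction, plus a crude distribution-based check in the spirit of the paper's observation that embedding changes the weight distribution; the projection matrix X, the threshold factor, and the std heuristic are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def extract_watermark(weights, X):
    """Uchida-style extraction: project the flattened layer weights through
    a secret matrix X and threshold the sigmoid at 0.5 to recover bits."""
    w = weights.ravel()
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)

def looks_watermarked(weights, clean_std, factor=1.5):
    """Crude detector: embedding tends to widen the weight distribution,
    so flag layers whose std deviates strongly from a clean reference."""
    return weights.std() > factor * clean_std
```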
Deeba, Farah, Tefera, Getenet, Kun, She, Memon, Hira.  2019.  Protecting the Intellectual Properties of Digital Watermark Using Deep Neural Network. 2019 4th International Conference on Information Systems Engineering (ICISE). :91–95.
Recent advances in Artificial Intelligence, Machine Learning, and Deep Neural Networks (DNNs) have driven robust applications in image processing, speech recognition, and natural language processing; in particular, trained DNN models have made it easy for researchers to produce state-of-the-art results. However, sharing these trained models remains a challenging task with respect to security and protection. We performed extensive experiments to present an analysis of watermarking in DNNs. We propose a DNN model for digital watermarking that addresses the intellectual property of deep neural networks, watermark embedding, and owner verification. This model can generate watermarks that withstand possible attacks (fine-tuning and training to embed). The approach is tested on a standard dataset, and the model is robust to the above counter-watermark attacks. Our model accurately and instantly verifies the ownership of all remotely deployed deep learning models without affecting model accuracy on standard data.