Biblio

Found 236 results

Filters: Keyword is Robustness
2021-01-20
Zarazaga, P. P., Bäckström, T., Sigg, S..  2020.  Acoustic Fingerprints for Access Management in Ad-Hoc Sensor Networks. IEEE Access. 8:166083—166094.

Voice user interfaces can offer intuitive interaction with our devices, but the usability and audio quality could be further improved if multiple devices could collaborate to provide a distributed voice user interface. To ensure that users' voices are not shared with unauthorized devices, it is however necessary to design an access management system that adapts to the users' needs. Prior work has demonstrated that a combination of audio fingerprinting and fuzzy cryptography yields a robust pairing of devices without sharing the information that they record. However, the robustness of these systems is partially based on the extensive duration of the recordings that are required to obtain the fingerprint. This paper analyzes methods for the robust generation of acoustic fingerprints over short periods of time, enabling responsive pairing of devices as the acoustic scene changes, in a way that can be integrated with other typical speech processing tools.
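
As a rough illustration of the pairing decision only (the paper replaces the raw comparison with fuzzy cryptography, so fingerprints are never exchanged), two devices could accept a pairing when their binary fingerprints of the shared acoustic scene differ in at most a few bits. A minimal Python sketch with hypothetical sizes and threshold:

    import numpy as np

    def hamming_distance(fp_a: np.ndarray, fp_b: np.ndarray) -> int:
        """Count differing bits between two equal-length binary fingerprints."""
        return int(np.count_nonzero(fp_a != fp_b))

    def devices_may_pair(fp_a: np.ndarray, fp_b: np.ndarray, tolerance: int = 8) -> bool:
        """Accept pairing when fingerprints of the shared scene are close enough.
        In the actual scheme this comparison is realised via fuzzy cryptography."""
        return hamming_distance(fp_a, fp_b) <= tolerance

    # Toy usage: two noisy observations of the same 32-bit fingerprint.
    rng = np.random.default_rng(0)
    fp = rng.integers(0, 2, 32)
    noisy = fp.copy()
    noisy[rng.choice(32, 3, replace=False)] ^= 1   # 3 bit errors from noise
    print(devices_may_pair(fp, noisy))             # True: below tolerance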

2021-01-15
Akhtar, Z., Dasgupta, D..  2019.  A Comparative Evaluation of Local Feature Descriptors for DeepFakes Detection. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1—5.
The global proliferation of affordable photographing devices and readily-available face image and video editing software has caused a remarkable rise in face manipulations, e.g., altering face skin color using FaceApp. Such synthetic manipulations are becoming a very perilous problem, as altered faces not only can fool human experts but also have detrimental consequences on automated face identification systems (AFIS). Thus, it is vital to formulate techniques to improve the robustness of AFIS against digital face manipulations. The most prominent countermeasure is face manipulation detection, which aims at discriminating genuine samples from manipulated ones. Over the years, analysis of microtextural features using local image descriptors has been successfully used in various applications owing to their flexibility, computational simplicity, and performance. Therefore, in this paper, we study the possibility of identifying manipulated faces via local feature descriptors. A comparative experimental investigation of ten local feature descriptors on a new and publicly available DeepfakeTIMIT database is reported.
2021-01-11
Khadka, A., Argyriou, V., Remagnino, P..  2020.  Accurate Deep Net Crowd Counting for Smart IoT Video acquisition devices. 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). :260—264.

A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it strongly depends on the deployed camera characteristics and, above all, the scene perspective. Crowd counting is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. The complexity of a scene varies greatly, and a medium- to large-scale security system based on IoT cameras must cater for changes in perspective and how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module to provide long-range contextual information and enlarge the receptive field. Experiments were run on three major crowd counting datasets to test our proposed method. Results demonstrate that our method surpasses the performance of state-of-the-art methods.

2020-12-28
Raju, R. S., Lipasti, M..  2020.  BlurNet: Defense by Filtering the Feature Maps. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :38—46.

Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary by obtaining access to the model parameters, such as gradient information, to alter the input or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first layer feature maps of the network on the LISA dataset, which shows that high frequency noise is introduced into the input image by the RP2 algorithm. To remove the high frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a blackbox transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this lowpass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
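
The core of the described defense, a fixed low-pass (averaging) depthwise convolution applied to the first-layer feature maps, can be sketched as follows in PyTorch; the layer sizes and the surrounding toy classifier are assumptions, not the authors' architecture:

    import torch
    import torch.nn as nn

    class DepthwiseBlur(nn.Module):
        """Fixed KxK averaging blur applied independently to each feature map."""
        def __init__(self, channels: int, kernel_size: int = 3):
            super().__init__()
            self.blur = nn.Conv2d(channels, channels, kernel_size,
                                  padding=kernel_size // 2, groups=channels, bias=False)
            weight = torch.full((channels, 1, kernel_size, kernel_size),
                                1.0 / (kernel_size * kernel_size))
            self.blur.weight = nn.Parameter(weight, requires_grad=False)

        def forward(self, x):
            return self.blur(x)

    # Hypothetical stand-in for a traffic-sign classifier: conv1 -> blur -> rest of net.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1),    # first layer whose feature maps are filtered
        DepthwiseBlur(32, kernel_size=3),  # low-pass filtering of the feature maps
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 47),                 # number of sign classes is assumed
    )
    logits = model(torch.randn(1, 3, 32, 32))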

2020-12-11
Huang, Y., Wang, Y..  2019.  Multi-format speech perception hashing based on time-frequency parameter fusion of energy zero ratio and frequency band variance. 2019 3rd International Conference on Electronic Information Technology and Computer Engineering (EITCE). :243—251.

To address the shortcomings of existing speech content authentication algorithms, such as support for only a single format, lack of generality, low security, and low accuracy of tamper detection and localization at small scales, a multi-format speech perception hashing based on time-frequency parameter fusion of the energy-to-zero ratio and frequency band variance is proposed. First, the algorithm preprocesses the input speech signal and calculates the short-time logarithmic energy, zero-crossing rate, and frequency band variance of each speech fragment. It then calculates the energy-to-zero ratio of each frame, fuses the time-frequency features by mean filtering, and constructs the hash from the fused time-frequency parameters using a difference hashing method. Finally, the hash sequence is scrambled, preserving its length, with a logistic chaotic map, so as to improve the security of the hash sequence during transmission. Experiments show that the proposed algorithm achieves good robustness, discrimination, and key dependence.
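
A simplified reconstruction of the described pipeline (framing, short-time log energy, zero-crossing rate, band variance, mean-filter fusion, difference hashing, and logistic-map scrambling) is sketched below; frame sizes, the fusion rule, and the map parameters are illustrative assumptions rather than the paper's exact values:

    import numpy as np

    def frame_signal(x, frame_len=512, hop=256):
        n = 1 + max(0, (len(x) - frame_len) // hop)
        return np.stack([x[i*hop : i*hop + frame_len] for i in range(n)])

    def perceptual_hash(x, key=0.4, mu=3.99):
        frames = frame_signal(np.asarray(x, dtype=float))
        log_energy = np.log(np.sum(frames**2, axis=1) + 1e-12)                # short-time log energy
        zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) / 2, axis=1)   # zero-crossing rate
        spec = np.abs(np.fft.rfft(frames, axis=1))
        band_var = np.var(spec, axis=1)                                       # frequency-band variance
        ezr = log_energy / (zcr + 1e-12)                                      # energy-to-zero ratio
        fused = np.convolve(ezr * band_var, np.ones(3) / 3, mode="same")      # mean-filter fusion
        bits = (np.diff(fused) > 0).astype(np.uint8)                          # difference hashing
        # Equal-length scrambling of the hash with a logistic chaotic map keyed by `key`.
        s, perm_key = key, np.empty(len(bits))
        for i in range(len(bits)):
            s = mu * s * (1 - s)
            perm_key[i] = s
        return bits[np.argsort(perm_key)]

    hash_bits = perceptual_hash(np.random.randn(16000))   # ~1 s of 16 kHz audio (toy input)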

Wu, Y., Li, X., Zou, D., Yang, W., Zhang, X., Jin, H..  2019.  MalScan: Fast Market-Wide Mobile Malware Scanning by Social-Network Centrality Analysis. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). :139—150.

Malware scanning of an app market is expected to be scalable and effective. However, existing approaches use either syntax-based features, which can be evaded by transformation attacks, or semantic-based features, which are usually extracted by performing expensive program analysis. Therefore, in this paper, we propose a lightweight graph-based approach to perform Android malware detection. Instead of traditional heavyweight static analysis, we treat function call graphs of apps as social networks and perform social-network-based centrality analysis to represent the semantic features of the graphs. Our key insight is that centrality provides a succinct and fault-tolerant representation of graph semantics, especially for graphs with a certain amount of inaccurate information (e.g., inaccurate call graphs). We implement a prototype system, MalScan, and evaluate it on datasets of 15,285 benign samples and 15,430 malicious samples. Experimental results show that MalScan is capable of detecting Android malware with up to 98% accuracy in under one second, which is more than 100 times faster than two state-of-the-art approaches, namely MaMaDroid and Drebin. We also demonstrate the feasibility of MalScan on market-wide malware scanning by performing a statistical study on over 3 million apps. Finally, in a dataset collected from the Google Play app market, MalScan is able to identify 18 zero-day malware samples, including samples that can evade detection by existing tools.
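
The central idea, representing an app's function call graph by centrality values of selected nodes, can be illustrated with networkx; the toy graph, the chosen centrality measure, and the sensitive-API list are placeholders, not MalScan's actual feature definition:

    import networkx as nx

    def centrality_features(call_graph: nx.DiGraph, sensitive_apis: list) -> list:
        """One feature per sensitive API: its centrality in the call graph (0 if absent)."""
        centrality = nx.degree_centrality(call_graph)   # other centralities (closeness, etc.) work the same way
        return [centrality.get(api, 0.0) for api in sensitive_apis]

    # Toy call graph of an app (nodes are method signatures, edges are calls).
    g = nx.DiGraph()
    g.add_edges_from([
        ("MainActivity.onCreate", "Helper.readContacts"),
        ("Helper.readContacts", "android.telephony.SmsManager.sendTextMessage"),
        ("MainActivity.onCreate", "java.net.URL.openConnection"),
    ])
    apis = ["android.telephony.SmsManager.sendTextMessage", "java.net.URL.openConnection"]
    features = centrality_features(g, apis)   # feed into any off-the-shelf classifier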

2020-12-02
Mukaidani, H., Saravanakumar, R., Xu, H., Zhuang, W..  2019.  Robust Nash Static Output Feedback Strategy for Uncertain Markov Jump Delay Stochastic Systems. 2019 IEEE 58th Conference on Decision and Control (CDC). :5826—5831.

In this paper, we propose a robust Nash strategy for a class of uncertain Markov jump delay stochastic systems (UMJDSSs) via static output feedback (SOF). After establishing the extended bounded real lemma for UMJDSSs, the conditions for the existence of a robust Nash strategy set are determined by means of cross-coupled stochastic matrix inequalities (CCSMIs). In order to solve the SOF problem, a heuristic algorithm is developed based on algebraic equations and linear matrix inequalities (LMIs). In particular, it is shown that robust convergence is guaranteed under a new convergence condition. Finally, a practical numerical example based on congestion control for active queue management is provided to demonstrate the reliability and usefulness of the proposed design scheme.

Ayar, T., Budzisz, Ł., Rathke, B..  2018.  A Transparent Reordering Robust TCP Proxy To Allow Per-Packet Load Balancing in Core Networks. 2018 9th International Conference on the Network of the Future (NOF). :1—8.

The idea of using multiple paths to transport TCP traffic seems very attractive due to the potential benefits it may offer for both redundancy and better utilization of available resources by load balancing. Fixed and mobile network providers frequently employ load balancers that use multiple paths on either a per-flow or per-destination level, but very seldom on a per-packet level. Despite the benefits of packet-level load balancing mechanisms (e.g., low computational complexity and high bandwidth utilization), network providers can't use them mainly because of TCP packet reorderings that harm TCP performance. Emerging network architectures also support multiple paths, but they face the same obstacle in balancing their load across multiple paths. Indeed, packet-level load balancing research is paralyzed by the reordering vulnerability of TCP. A couple of TCP variants exist that deal with the TCP packet reordering problem, but due to their lack of end-to-end transparency they were not widely deployed and adopted. In this paper, we revisit TCP's packet reordering problem and present a transparent and light-weight algorithm, Out-of-Order Robustness for TCP with Transparent Acknowledgment (ACK) Intervention (ORTA), to deal with out-of-order deliveries. ORTA works as a transparent thin layer below TCP and hides the harmful side-effects of packet-level load balancing. ORTA monitors all TCP flow packets and uses ACK traffic shaping, without any modifications to either the TCP sender or receiver sides. Since it is transparent to TCP end-points, it can be easily deployed on TCP sender end-hosts (EHs), gateway (GW) routers, or access points (APs). ORTA opens a door for network providers to use per-packet load balancing. The proposed ORTA algorithm is implemented and tested in NS-2. The results show that ORTA can prevent TCP performance decrease when per-packet load balancing is used.

2020-12-01
Wang, S., Mei, Y., Park, J., Zhang, M..  2019.  A Two-Stage Genetic Programming Hyper-Heuristic for Uncertain Capacitated Arc Routing Problem. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :1606—1613.

Genetic Programming Hyper-heuristic (GPHH) has been successfully applied to automatically evolve effective routing policies to solve the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies. As a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of the GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) to optimise the performance and size simultaneously. A major issue here is that the size is much easier to optimise than the performance, and the search tends to be biased towards small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH). In the first stage, only the performance is optimised. Then, in the second stage, both objectives are considered (using our new MOGP). The experimental results showed that TS-GPHH could obtain much smaller and more interpretable routing policies than the state-of-the-art single-objective GPHH, without deteriorating the performance. Compared with traditional MOGP, TS-GPHH can obtain a much better and more widespread Pareto front.

Usama, M., Asim, M., Latif, S., Qadir, J., Al-Fuqaha, A..  2019.  Generative Adversarial Networks For Launching and Thwarting Adversarial Attacks on Network Intrusion Detection Systems. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :78—83.

Intrusion detection systems (IDSs) are an essential cog of the network security suite that can defend the network from malicious intrusions and anomalous traffic. Many machine learning (ML)-based IDSs have been proposed in the literature for the detection of malicious network traffic. However, recent works have shown that ML models are vulnerable to adversarial perturbations through which an adversary can cause IDSs to malfunction by introducing a small impracticable perturbation in the network traffic. In this paper, we propose an adversarial ML attack using generative adversarial networks (GANs) that can successfully evade an ML-based IDS. We also show that GANs can be used to inoculate the IDS and make it more robust to adversarial perturbations.

2020-10-29
Vi, Bao Ngoc, Noi Nguyen, Huu, Nguyen, Ngoc Tran, Truong Tran, Cao.  2019.  Adversarial Examples Against Image-based Malware Classification Systems. 2019 11th International Conference on Knowledge and Systems Engineering (KSE). :1—5.

Malicious software, known as malware, has become an urgently serious threat to computer security, so automatic malware classification techniques have received increasing attention. In recent years, deep learning (DL) techniques for computer vision have been successfully applied to malware classification by visualizing malware files and then using DL to classify the visualized images. Although DL-based classification systems have been proven to be much more accurate than conventional ones, these systems have been shown to be vulnerable to adversarial attacks. However, there has been little research considering the danger of adversarial attacks to visualized image-based malware classification systems. This paper proposes a gradient-based adversarial attack method against image-based malware classification systems that introduces perturbations on the resource section of PE files. The experimental results on the Malimg dataset show that, with a small perturbation, the proposed method can successfully attack convolutional neural network malware classifiers.
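
A hedged sketch of the attack's mechanics: an FGSM-style gradient-sign step whose perturbation is masked so that it only touches the pixels assumed to correspond to the resource section of the visualized binary. The model, mask, and step size below are illustrative placeholders, not the paper's setup:

    import torch

    def masked_fgsm(model, image, label, region_mask, epsilon=0.05):
        """One gradient-sign step restricted to `region_mask` (1 = modifiable bytes)."""
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        perturbation = epsilon * image.grad.sign() * region_mask
        return (image + perturbation).clamp(0.0, 1.0).detach()

    # Toy usage with a hypothetical grayscale malware-image classifier.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 25))  # 25 Malimg families
    image = torch.rand(1, 1, 64, 64)          # visualized PE file
    label = torch.tensor([3])
    mask = torch.zeros_like(image)
    mask[..., 48:, :] = 1.0                   # pretend the last rows map to the resource section
    adversarial = masked_fgsm(model, image, label, mask)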

Choi, Seok-Hwan, Shin, Jin-Myeong, Liu, Peng, Choi, Yoon-Ho.  2019.  Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks. 2019 IEEE Conference on Communications and Network Security (CNS). :1—6.

As malware family classification methods, image-based classification methods have attracted much attention. In particular, due to their fast classification speed and high classification accuracy, Convolutional Neural Network (CNN)-based malware family classification methods have been studied. However, previous studies on CNN-based classification methods focused only on improving the classification accuracy of malware families. That is, previous studies did not consider cases where the accuracy of CNN-based malware classification methods can be decreased under adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. While adding imperceptible non-random perturbations to the input image, we measured how the accuracy of the CNN-based malware family classification model can be affected. We also showed the influence of three significant visualization parameters (i.e., the size of the input image, the dimension of the input image, and the conversion color of a special character) on the accuracy variation under adversarial attacks. From the evaluation results using the Microsoft malware dataset, we showed that even an accuracy of over 98% for the CNN-based malware family classification method can be decreased to less than 7%.

2020-10-16
Al-Haj, Ali, Farfoura, Mahmoud.  2019.  Providing Security for E-Government Document Images Using Digital Watermarking in the Frequency Domain. 2019 5th International Conference on Information Management (ICIM). :77—81.

Many countries around the world have realized the benefits of the e-government platform in people's daily lives, and accordingly have already made partial implementations of the key e-government processes. However, before full implementation of all potential services can be made, governments demand the deployment of effective information security measures to ensure the secrecy and privacy of their citizens. In this paper, a robust watermarking algorithm is proposed to provide copyright protection for e-government document images. The proposed algorithm utilizes two transforms: the Discrete Wavelet Transform (DWT) and the Singular Value Decomposition (SVD). Experimental results demonstrate that the proposed e-government document image watermarking algorithm performs considerably well compared to existing relevant algorithms.
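
The embedding step common to DWT+SVD schemes can be sketched with PyWavelets and NumPy; the chosen sub-band, scaling factor, and additive rule are generic illustrations rather than the exact algorithm of this paper:

    import numpy as np
    import pywt

    def embed_watermark(cover: np.ndarray, watermark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        """Embed `watermark` into the singular values of the LL sub-band of `cover`."""
        LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
        U, S, Vt = np.linalg.svd(LL, full_matrices=False)
        wm = np.resize(watermark.astype(float), S.shape)     # match the number of singular values
        S_marked = S + alpha * wm                            # additive embedding in the SVD domain
        LL_marked = (U * S_marked) @ Vt
        return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

    cover = np.random.rand(256, 256)          # stand-in for an e-government document image
    watermark = np.random.randint(0, 2, 128)  # binary ownership mark (hypothetical size)
    marked = embed_watermark(cover, watermark)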

2020-10-06
Baruah, Sanjoy, Burns, Alan.  2019.  Incorporating Robustness and Resilience into Mixed-Criticality Scheduling Theory. 2019 IEEE 22nd International Symposium on Real-Time Distributed Computing (ISORC). :155—162.

Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising different components that need to have their correctness validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms that have been developed tend to exhibit rather poor survivability characteristics during run-time. (E.g., MCSh allows for less-important (“LO-criticality”) workloads to be completely discarded in the event that run-time behavior is not compliant with the assumptions under which the correctness of the LO-criticality workload should be verified.) Here we seek to extend MCSh to incorporate survivability considerations, by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of mixed-criticality scheduling algorithms, and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek to develop scheduling algorithms that possess superior survivability characteristics, thereby obtaining algorithms with better survivability properties than current ones (which, since they have been developed within a survivability-agnostic framework, tend to focus exclusively on pre-runtime verification and ignore survivability issues entirely).

2020-10-05
Mitra, Aritra, Abbas, Waseem, Sundaram, Shreyas.  2018.  On the Impact of Trusted Nodes in Resilient Distributed State Estimation of LTI Systems. 2018 IEEE Conference on Decision and Control (CDC). :4547—4552.

We address the problem of distributed state estimation of a linear dynamical process in an attack-prone environment. A network of sensors, some of which can be compromised by adversaries, aim to estimate the state of the process. In this context, we investigate the impact of making a small subset of the nodes immune to attacks, or “trusted”. Given a set of trusted nodes, we identify separate necessary and sufficient conditions for resilient distributed state estimation. We use such conditions to illustrate how even a small trusted set can achieve a desired degree of robustness (where the robustness metric is specific to the problem under consideration) that could otherwise only be achieved via additional measurement and communication-link augmentation. We then establish that, unfortunately, the problem of selecting trusted nodes is NP-hard. Finally, we develop an attack-resilient, provably-correct distributed state estimation algorithm that appropriately leverages the presence of the trusted nodes.

2020-09-21
Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to the benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has been accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines unsupervised model denoising ensemble with supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates, compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
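
The denoise-and-vote idea can be approximated as below; MODEF's actual denoising autoencoders and verification models are replaced here by simple filters and placeholder classifiers, so this is only a structural sketch:

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def ensemble_predict(models, denoisers, image):
        """Denoise with several filters, query several models, and majority-vote;
        low agreement across the ensemble can additionally flag a suspicious input."""
        votes = [int(np.argmax(m(d(image)))) for d in denoisers for m in models]
        values, counts = np.unique(votes, return_counts=True)
        return values[np.argmax(counts)], counts.max() / len(votes)   # label, agreement ratio

    # Toy usage with hypothetical callable classifiers returning class-score vectors.
    models = [lambda x: np.random.rand(10) for _ in range(3)]
    denoisers = [lambda x: x, lambda x: gaussian_filter(x, 1.0), lambda x: median_filter(x, 3)]
    label, agreement = ensemble_predict(models, denoisers, np.random.rand(28, 28))
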
2020-09-08
Kassim, Sarah, Megherbi, Ouerdia, Hamiche, Hamid, Djennoune, Saïd, Bettayeb, Maamar.  2019.  Speech encryption based on the synchronization of fractional-order chaotic maps. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1–6.
This work presents a new method of encrypting and decrypting speech based on a chaotic key generator. The proposed scheme takes advantage of the best features of chaotic systems. In the proposed method, the input speech signal is converted into an image which is ciphered by an encryption function using a chaotic key matrix generated from a fractional-order chaotic map. Based on a deadbeat observer, exact synchronization of the system used is established, and the decryption is performed. Different analyses are applied to assess the effectiveness of the encryption system. The obtained results confirm that the proposed system offers a higher level of security against various attacks and holds a strong key generation mechanism for satisfactory speech communication.
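
A heavily simplified illustration of the keystream idea, using an integer-order logistic map instead of the paper's fractional-order map and a plain XOR cipher instead of the image-domain encryption with observer-based synchronization:

    import numpy as np

    def logistic_keystream(length: int, x0: float = 0.613, mu: float = 3.99) -> np.ndarray:
        """Generate a byte keystream from a logistic chaotic map seeded by the secret (x0, mu)."""
        x, out = x0, np.empty(length, dtype=np.uint8)
        for i in range(length):
            x = mu * x * (1 - x)
            out[i] = int(x * 256) % 256
        return out

    def xor_cipher(samples_u8: np.ndarray, x0: float, mu: float) -> np.ndarray:
        return samples_u8 ^ logistic_keystream(len(samples_u8), x0, mu)

    speech = np.random.randint(0, 256, 16000, dtype=np.uint8)       # toy 8-bit speech samples
    cipher = xor_cipher(speech, 0.613, 3.99)
    assert np.array_equal(xor_cipher(cipher, 0.613, 3.99), speech)  # decryption = same operation
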
2020-08-24
Noor, Joseph, Ali-Eldin, Ahmed, Garcia, Luis, Rao, Chirag, Dasari, Venkat R., Ganesan, Deepak, Jalaian, Brian, Shenoy, Prashant, Srivastava, Mani.  2019.  The Case for Robust Adaptation: Autonomic Resource Management is a Vulnerability. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :821–826.
Autonomic resource management for distributed edge computing systems provides an effective means of enabling dynamic placement and adaptation in the face of network changes, load dynamics, and failures. However, adaptation in-and-of-itself offers a side channel by which malicious entities can extract valuable information. An attacker can take advantage of autonomic resource management techniques to fool a system into misallocating resources and crippling applications. Using a few scenarios, we outline how attacks can be launched using partial knowledge of the resource management substrate - with as little as a single compromised node. We argue that any system that provides adaptation must consider resource management as an attack surface. As such, we propose ADAPT2, a framework that incorporates concepts taken from Moving-Target Defense and state estimation techniques to ensure correctness and obfuscate resource management, thereby protecting valuable system and application information from leaking.
2020-08-17
Kohnhäuser, Florian, Büscher, Niklas, Katzenbeisser, Stefan.  2019.  A Practical Attestation Protocol for Autonomous Embedded Systems. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :263–278.
With the recent advent of the Internet of Things (IoT), embedded devices increasingly operate collaboratively in autonomous networks. A key technique to guard the secure and safe operation of connected embedded devices is remote attestation. It allows a third party, the verifier, to ensure the integrity of a remote device, the prover. Unfortunately, existing attestation protocols are impractical when applied in autonomous networks of embedded systems due to their limited scalability, performance, robustness, and security guarantees. In this work, we propose PASTA, a novel attestation protocol that is particularly suited for autonomous embedded systems. PASTA is the first that (i) enables many low-end prover devices to attest their integrity towards many potentially untrustworthy low-end verifier devices, (ii) is fully decentralized, thus able to withstand network disruptions and arbitrary device outages, and (iii) is capable of detecting, in addition to software attacks, physical attacks in a much more robust way than any existing protocol. We implemented our protocol, conducted measurements, and simulated large networks. The results show that PASTA is practical on low-end embedded devices, scales to large networks with millions of devices, and improves robustness by multiple orders of magnitude compared with the best existing protocols.
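
The primitive PASTA builds on, a single challenge-response attestation between a verifier and a prover sharing a symmetric key, can be sketched as follows; this is the textbook building block only, not the decentralized multi-device protocol itself:

    import hashlib, hmac, os

    SHARED_KEY = os.urandom(32)            # provisioned on both devices (assumption)

    def prover_attest(firmware: bytes, nonce: bytes) -> bytes:
        """Prover: bind a measurement of its firmware to the verifier's fresh nonce."""
        measurement = hashlib.sha256(firmware).digest()
        return hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()

    def verifier_check(report: bytes, nonce: bytes, expected_firmware: bytes) -> bool:
        expected = hmac.new(SHARED_KEY, nonce + hashlib.sha256(expected_firmware).digest(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(report, expected)

    nonce = os.urandom(16)                              # freshness against replay
    report = prover_attest(b"firmware-image-v1", nonce)
    print(verifier_check(report, nonce, b"firmware-image-v1"))   # True only if firmware is intact
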
2020-08-07
Pawlick, Jeffrey, Nguyen, Thi Thu Hang, Colbert, Edward, Zhu, Quanyan.  2019.  Optimal Timing in Dynamic and Robust Attacker Engagement During Advanced Persistent Threats. 2019 International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT). :1—8.
Advanced persistent threats (APTs) are stealthy attacks which make use of social engineering and deception to give adversaries insider access to networked systems. Against APTs, active defense technologies aim to create and exploit information asymmetry for defenders. In this paper, we study a scenario in which a powerful defender uses honeynets for active defense in order to observe an attacker who has penetrated the network. Rather than immediately eject the attacker, the defender may elect to gather information. We introduce an undiscounted, infinite-horizon Markov decision process on a continuous state space in order to model the defender's problem. We find a threshold of information that the defender should gather about the attacker before ejecting him. Then we study the robustness of this policy using a Stackelberg game. Finally, we simulate the policy for a conceptual network. Our results provide a quantitative foundation for studying optimal timing for attacker engagement in network defense.
2020-08-03
Zarazaga, Pablo Pérez, Bäckström, Tom, Sigg, Stephan.  2019.  Robust and Responsive Acoustic Pairing of Devices Using Decorrelating Time-Frequency Modelling. 2019 27th European Signal Processing Conference (EUSIPCO). :1–5.
Voice user interfaces have increased in popularity, as they enable natural interaction with different applications using one's voice. To improve their usability and audio quality, several devices could interact to provide a unified voice user interface. However, with devices cooperating and sharing voice-related information, user privacy may be at risk. Therefore, access management rules that preserve user privacy are important. State-of-the-art methods for acoustic pairing of devices provide fingerprinting based on the time-frequency representation of the acoustic signal and error-correction. We propose to use such acoustic fingerprinting to authorise devices which are acoustically close. We aim to obtain fingerprints of ambient audio adapted to the requirements of voice user interfaces. Our experiments show that the responsiveness and robustness is improved by combining overlapping windows and decorrelating transforms.
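
A minimal sketch of the extraction pipeline suggested by the abstract (overlapping windows, a log-spectral time-frequency representation, a decorrelating DCT, and sign binarization); window sizes and the exact transform are assumptions:

    import numpy as np
    from scipy.fft import dct

    def acoustic_fingerprint(audio, frame_len=1024, hop=256, n_bits=32):
        """Binary fingerprint of ambient audio from decorrelated time-frequency features."""
        frames = np.stack([audio[i:i + frame_len]
                           for i in range(0, len(audio) - frame_len, hop)])
        spectra = np.log(np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) + 1e-9)
        decorrelated = dct(spectra, axis=1, norm="ortho")[:, 1:n_bits + 1]  # drop DC, keep n_bits coefficients
        return (decorrelated.mean(axis=0) > 0).astype(np.uint8)            # one bit per coefficient

    fp = acoustic_fingerprint(np.random.randn(16000))   # devices in the same room obtain similar bits
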
2020-07-30
Ernawan, Ferda, Kabir, Muhammad Nomani.  2018.  A blind watermarking technique using redundant wavelet transform for copyright protection. 2018 IEEE 14th International Colloquium on Signal Processing Its Applications (CSPA). :221—226.
A digital watermarking technique is an alternative method to protect the intellectual property of digital images. This paper presents a hybrid blind watermarking technique formulated by combining RDWT with SVD considering a trade-off between imperceptibility and robustness. Watermark embedding locations are determined using a modified entropy of the host image. Watermark embedding is employed by examining the orthogonal matrix U obtained from the hybrid scheme RDWT-SVD. In the proposed scheme, the watermark image in binary format is scrambled by Arnold chaotic map to provide extra security. Our scheme is tested under different types of signal processing and geometrical attacks. The test results demonstrate that the proposed scheme provides higher robustness and less distortion than other existing schemes in withstanding JPEG2000 compression, cropping, scaling and other noises.
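
The Arnold (cat-map) scrambling applied to the binary watermark can be illustrated directly: one iteration maps pixel (x, y) of an N×N image to ((x + y) mod N, (x + 2y) mod N), and because the map is periodic, descrambling simply applies the remaining iterations of the period. A small NumPy sketch with an assumed watermark size:

    import numpy as np

    def arnold_scramble(img: np.ndarray, iterations: int) -> np.ndarray:
        """Apply the Arnold cat map `iterations` times to a square image."""
        n = img.shape[0]
        out = img.copy()
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = scrambled
        return out

    wm = np.random.randint(0, 2, (32, 32))   # binary watermark (size assumed)
    scrambled = arnold_scramble(wm, 5)
    # The map's period depends on N; scrambling with k iterations is undone
    # by applying (period - k) further iterations.
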
Xiao, Lijun, Huang, Weihong, Deng, Han, Xiao, Weidong.  2019.  A hardware intellectual property protection scheme based digital compression coding technology. 2019 IEEE International Conference on Smart Cloud (SmartCloud). :75—79.

This paper presents a scheme for intellectual property protection of hardware circuits based on digital compression coding technology. The aim is to solve the problem of the high embedding cost and low resource utilization of IP watermarking. In this scheme, the watermark information is preprocessed by dynamic compression coding around the idle circuits of the FPGA, and the free resources of the surrounding circuit are optimized so that the IP watermark obtains the best compression coding model, while extraction and detection of the IP core watermark are performed by activating the decoding function. The experimental results show that this method not only expands the capacity of the watermark information, but also reduces the watermarking cost and improves the security and robustness of the watermarking algorithm.

2020-07-20
Sima, Mihai, Brisson, André.  2017.  Whitenoise encryption implementation with increased robustness to side-channel attacks. 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1–4.
Two design techniques improve the robustness of the Whitenoise encryption algorithm implementation to side-channel attacks based on dynamic and/or static power consumption. The first technique conceals the power consumption and has linear cost. The second technique randomizes the power consumption and has quadratic cost. These techniques are not mutually exclusive; their synergy provides good robustness to power analysis attacks. Other circuit-level protection can be applied on top of the proposed techniques, opening the avenue for generating very robust implementations.
2020-07-03
Usama, Muhammad, Asim, Muhammad, Qadir, Junaid, Al-Fuqaha, Ala, Imran, Muhammad Ali.  2019.  Adversarial Machine Learning Attack on Modulation Classification. 2019 UK/ China Emerging Technologies (UCET). :1—4.

Modulation classification is an important component of cognitive self-driving networks. Recently many ML-based modulation classification methods have been proposed. We have evaluated the robustness of 9 ML-based modulation classifiers against the powerful Carlini & Wagner (C-W) attack and showed that the current ML-based modulation classifiers do not provide any deterrence against adversarial ML examples. To the best of our knowledge, we are the first to report the results of the application of the C-W attack for creating adversarial examples against various ML models for modulation classification.