Biblio

Found 721 results

Filters: Keyword is Computational modeling
2021-12-20
Khorasgani, Hamidreza Amini, Maji, Hemanta K., Wang, Mingyuan.  2021.  Optimally-secure Coin-tossing against a Byzantine Adversary. 2021 IEEE International Symposium on Information Theory (ISIT). :2858–2863.
Ben-Or and Linial (1985) introduced the full information model for coin-tossing protocols involving n processors with unbounded computational power using a common broadcast channel for all their communications. For most adversarial settings, the characterization of the exact or asymptotically optimal protocols remains open. Furthermore, even for the settings where near-optimal asymptotic constructions are known, the exact constants or poly-logarithmic multiplicative factors involved are not entirely well-understood. This work studies n-processor coin-tossing protocols where every processor broadcasts an arbitrary-length message once. An adaptive Byzantine adversary, based on the messages broadcast so far, can corrupt k=1 processor. A bias-X coin-tossing protocol outputs 1 with probability X; otherwise, it outputs 0 with probability (1-X). A coin-tossing protocol's insecurity is the maximum change in the output distribution (in the statistical distance) that a Byzantine adversary can cause. Our objective is to identify bias-X coin-tossing protocols achieving near-optimal minimum insecurity for every X ∈ [0,1]. Lichtenstein, Linial, and Saks (1989) studied bias-X coin-tossing protocols in this adversarial model where each party broadcasts an independent and uniformly random bit. They proved that the elegant “threshold coin-tossing protocols” are optimal for all n and k. Furthermore, Goldwasser, Kalai, and Park (2015), Kalai, Komargodski, and Raz (2018), and Haitner and Karidi-Heller (2020) prove that k = O(√n · polylog(n)) corruptions suffice to fix the output of any bias-X coin-tossing protocol. These results encompass parties who send arbitrary-length messages, and each processor has multiple turns to reveal its entire message. We use an inductive approach to constructing coin-tossing protocols using a potential function as a proxy for measuring any bias-X coin-tossing protocol's susceptibility to attacks in our adversarial model. Our technique is inherently constructive and yields protocols that minimize the potential function. It is incidentally the case that the threshold protocols minimize the potential function, even for arbitrary-length messages. We demonstrate that these coin-tossing protocols' insecurity is a 2-approximation of the optimal protocol in our adversarial model. For any other X ∈ [0,1] that threshold protocols cannot realize, we prove that an appropriate (convex) combination of the threshold protocols is a 4-approximation of the optimal protocol. Finally, these results entail new (vertex) isoperimetric inequalities for density-X subsets of product spaces of arbitrary-size alphabets.
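To illustrate the single-corruption threshold protocols discussed in this abstract, the following minimal sketch (the parameters n and t, and the restriction to one uniform bit per party, are assumptions chosen for illustration) computes the honest bias X of an n-party threshold protocol and how far a single adaptive bit-fixing adversary can push the output probability upward:

```python
from functools import lru_cache
from math import comb

def p_honest(m, s):
    """Probability that at least s of m remaining uniform bits are 1 (no attack)."""
    if s <= 0:
        return 1.0
    if s > m:
        return 0.0
    return sum(comb(m, i) for i in range(s, m + 1)) / 2 ** m

def max_bias_up(n, t):
    """Max probability of output 1 that a single adaptive bit-fixing adversary can
    force in the n-party, threshold-t protocol (each party broadcasts one uniform bit)."""
    @lru_cache(maxsize=None)
    def v(m, s):
        if s <= 0:
            return 1.0
        if m == 0:
            return 0.0
        corrupt_now = p_honest(m - 1, s - 1)               # fix this party's bit to 1
        wait = 0.5 * v(m - 1, s - 1) + 0.5 * v(m - 1, s)   # keep the corruption in reserve
        return max(corrupt_now, wait)
    return v(n, t)

n, t = 11, 6                        # simple majority protocol
X = p_honest(n, t)                  # honest output bias
print(f"bias X = {X:.4f}, adversary can raise it by {max_bias_up(n, t) - X:.4f}")
```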
Buccafurri, Francesco, De Angelis, Vincenzo, Idone, Maria Francesca, Labrini, Cecilia.  2021.  A Distributed Location Trusted Service Achieving k-Anonymity against the Global Adversary. 2021 22nd IEEE International Conference on Mobile Data Management (MDM). :133–138.
When location-based services (LBS) are delivered, location data, being quasi-identifiers, should be protected against honest-but-curious LBS providers. One of the existing approaches to achieving this goal is location k-anonymity, which leverages the presence of a trusted party, called the location trusted service (LTS), playing the role of anonymizer. A drawback of this approach is that the location trusted service is a single point of failure and traces all the users. Moreover, the protection is completely nullified if a global passive adversary able to monitor the flow of messages is allowed, as the source of the query can be identified despite location k-anonymity. In this paper, we propose a distributed and hierarchical LTS model that overcomes both of the above drawbacks. Moreover, position notification is used as cover traffic to hide queries, and multicast is minimally adopted to hide responses, so as to preserve k-anonymity also against the global adversary, thus enabling the possibility that LBS are delivered within social networks.
Janapriya, N., Anuradha, K., Srilakshmi, V..  2021.  Adversarial Deep Learning Models With Multiple Adversaries. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :522–525.
Adversarial machine learning deals with adversarial example generation, producing bogus data with the ability to fool any machine learning model. As the word implies, an “adversary” is an opponent or rival. In order to strengthen machine learning models, this work discusses the weaknesses of such models and how misclassification occurs during the learning cycle. Existing methods, such as creating adversarial models and devising robust ML computations, frequently ignore semantics and the general skeleton of the ML pipeline. This research work develops an adversarial learning calculation by considering a coordinated representation over all the characteristics, with Convolutional Neural Networks (CNN) explicitly. The approach expresses minimal adjustments to the data distribution over positive and negative class labels, together with a specific subsequent data flow misclassified by the CNN. The final results suggest a combination of game theory and evolutionary computation that is effective in protecting deep learning models against the exploitation of weaknesses, which are reproduced as attack scenarios against various adversaries.
Silva, Douglas Simões, Graczyk, Rafal, Decouchant, Jérémie, Völp, Marcus, Esteves-Verissimo, Paulo.  2021.  Threat Adaptive Byzantine Fault Tolerant State-Machine Replication. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :78–87.
Critical infrastructures have to withstand advanced and persistent threats, which can be addressed using Byzantine fault tolerant state-machine replication (BFT-SMR). In practice, unattended cyberdefense systems rely on threat level detectors that synchronously inform them of changing threat levels. However, to have a BFT-SMR protocol operate unattended, the state of the art is still to configure it to withstand the highest possible number of faulty replicas f it might encounter, which limits its performance, or to make the strong assumption that a trusted external reconfiguration service is available, which introduces a single point of failure. In this work, we present ThreatAdaptive, the first BFT-SMR protocol that is automatically strengthened or optimized by its replicas in reaction to threat level changes. We first determine under which conditions replicas can safely reconfigure a BFT-SMR system, i.e., adapt the number of replicas n and the fault threshold f so as to outpace an adversary. Since replicas typically communicate with each other over an asynchronous network, they cannot rely on consensus to decide how the system should be reconfigured. ThreatAdaptive avoids this pitfall by proactively preparing, while it optimizes its performance, the reconfiguration that may be triggered by an increasing threat. Our evaluation shows that ThreatAdaptive can meet the latency and throughput of BFT baselines configured statically for a particular level of threat, and adapt 30% faster than previous methods, which make stronger assumptions to provide safety.
Liu, Jieling, Wang, Zhiliang, Yang, Jiahai, Wang, Bo, He, Lin, Song, Guanglei, Liu, Xinran.  2021.  Deception Maze: A Stackelberg Game-Theoretic Defense Mechanism for Intranet Threats. ICC 2021 - IEEE International Conference on Communications. :1–6.
The intranets in modern organizations are facing severe data breaches and critical resource misuses. By reusing user credentials from compromised systems, Advanced Persistent Threat (APT) attackers can move laterally within the internal network. A promising new approach called deception technology makes the network administrator (i.e., defender) able to deploy decoys to deceive the attacker in the intranet and trap him into a honeypot. Then the defender ought to reasonably allocate decoys to potentially insecure hosts. Unfortunately, existing APT-related defense resource allocation models are infeasible because of the neglect of many realistic factors. In this paper, we make the decoy deployment strategy feasible by proposing a game-theoretic model called the APT Deception Game to describe interactions between the defender and the attacker. More specifically, we decompose the decoy deployment problem into two subproblems and make the problem solvable. Considering the best response of the attacker who is aware of the defender’s deployment strategy, we provide an elitist reservation genetic algorithm to solve this game. Simulation results demonstrate the effectiveness of our deployment strategy compared with other heuristic strategies.
Najafi, Maryam, Khoukhi, Lyes, Lemercier, Marc.  2021.  A Multidimensional Trust Model for Vehicular Ad-Hoc Networks. 2021 IEEE 46th Conference on Local Computer Networks (LCN). :419–422.
In this paper, we propose a multidimensional trust model for vehicular networks. Our model evaluates the trustworthiness of each vehicle using two main modes: 1) Direct Trust Computation (DTC), related to a direct connection between source and target nodes; 2) Indirect Trust Computation (ITC), related to indirect communication between source and target nodes. The principal characteristics of this model are flexibility and high fault tolerance, thanks to an automatic trust score assessment. In our extensive simulations, we use the Total Cost Rate to confirm the performance of the proposed trust model.
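As a rough illustration of how the two modes might be combined (the weights and the fallback rule below are assumptions for illustration, not the paper's formulas), a direct score can be blended with recommendations weighted by how much the recommenders themselves are trusted:

```python
def aggregate_trust(direct_score, indirect_scores, alpha=0.6):
    """Blend Direct Trust Computation (DTC) with Indirect Trust Computation (ITC).

    direct_score    : trust observed from direct interactions, in [0, 1]
    indirect_scores : list of (recommender_trust, recommended_score) pairs
    alpha           : assumed weight given to direct evidence
    """
    if indirect_scores:
        # weight each recommendation by how much the recommender is trusted
        total_weight = sum(w for w, _ in indirect_scores)
        indirect = sum(w * s for w, s in indirect_scores) / total_weight
    else:
        indirect = direct_score            # no recommenders: rely on direct evidence
    return alpha * direct_score + (1 - alpha) * indirect

# hypothetical example: one direct observation and two recommenders
print(aggregate_trust(0.8, [(0.9, 0.7), (0.4, 0.2)]))
```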
A, Sujan Reddy, Rudra, Bhawana.  2021.  Evaluation of Recurrent Neural Networks for Detecting Injections in API Requests. 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). :0936–0941.
Application programming interfaces (APIs) are a vital part of every online business. APIs are responsible for transferring data across systems within a company or to the users through the web or mobile applications. Security is a concern for any public-facing application. The objective of this study is to analyze incoming requests to a target API and flag any malicious activity. This paper proposes a solution using sequence models to identify whether or not an API request has SQL, XML, JSON, and other types of malicious injections. We also propose a novel heuristic procedure that minimizes the number of false positives. False positives are the valid API requests that are misclassified as malicious by the model.
2021-11-30
Pliatsios, Dimitrios, Sarigiannidis, Panagiotis, Efstathopoulos, Georgios, Sarigiannidis, Antonios, Tsiakalos, Apostolos.  2020.  Trust Management in Smart Grid: A Markov Trust Model. 2020 9th International Conference on Modern Circuits and Systems Technologies (MOCAST). :1–4.
By leveraging the advancements in Information and Communication Technologies (ICT), Smart Grid (SG) aims to modernize the traditional electric power grid towards efficient distribution and reliable management of energy in the electrical domain. The SG Advanced Metering Infrastructure (AMI) contains numerous smart meters, which are deployed throughout the distribution grid. However, these smart meters are susceptible to cyberthreats that aim to disrupt the normal operation of the SG. Cyberattacks can have various consequences in the smart grid, such as incorrect customer billing or equipment destruction. Therefore, these devices should operate on a trusted basis in order to ensure the availability, confidentiality, and integrity of the metering data. In this paper, we propose a Markov chain trust model that determines the Trust Value (TV) for each AMI device based on its behavior. Finally, numerical computations were carried out in order to investigate the reaction of the proposed model to the behavior changes of a device.
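As a sketch of a behaviour-driven Markov trust chain in this spirit (the states, transition matrices, and state weights below are illustrative assumptions, not the paper's model), the Trust Value can be read off a device's state distribution after a sequence of observed behaviours:

```python
import numpy as np

# States: 0 = untrusted, 1 = suspicious, 2 = trusted.
P_good = np.array([[0.2, 0.8, 0.0],     # transitions after a well-behaved report
                   [0.0, 0.3, 0.7],
                   [0.0, 0.1, 0.9]])
P_bad = np.array([[1.0, 0.0, 0.0],      # transitions after an anomalous report
                  [0.6, 0.4, 0.0],
                  [0.1, 0.5, 0.4]])

state = np.array([0.0, 1.0, 0.0])        # device starts as "suspicious"
for behaved_well in [True, True, False, True, True]:
    state = state @ (P_good if behaved_well else P_bad)

trust_value = state @ np.array([0.0, 0.5, 1.0])   # weight the states into a scalar TV
print(f"trust value = {trust_value:.3f}")
```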
Subramanian, Vinod, Pankajakshan, Arjun, Benetos, Emmanouil, Xu, Ning, McDonald, SKoT, Sandler, Mark.  2020.  A Study on the Transferability of Adversarial Attacks in Sound Event Classification. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :301–305.
An adversarial attack is an algorithm that perturbs the input of a machine learning model in an intelligent way in order to change the output of the model. An important property of adversarial attacks is transferability. According to this property, it is possible to generate adversarial perturbations on one model and apply them to the input of a different model to fool its output. Our work focuses on studying the transferability of adversarial attacks in sound event classification. We are able to demonstrate differences in transferability properties from those observed in computer vision. We show that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and we show that techniques such as knowledge distillation do not increase the transferability of attacks.
2021-11-29
Yin, Yifei, Zulkernine, Farhana, Dahan, Samuel.  2020.  Determining Worker Type from Legal Text Data Using Machine Learning. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :444–450.
This project addresses a classic employment law question in Canada and elsewhere using a machine learning approach: how do we know whether a worker is an employee or an independent contractor? This is a central issue for self-represented litigants insofar as these two legal categories entail very different rights and employment protections. In this interdisciplinary research study, we collaborated with the Conflict Analytics Lab to develop machine learning models aimed at determining whether a worker is an employee or an independent contractor. We present a number of supervised learning models, including a neural network model that we implemented using data labeled by law researchers, and we compare the accuracy of the models. Our neural network model achieved an accuracy rate of 91.5%. A critical discussion follows to identify the key features in the data that influence the accuracy of our models and to provide insights about the case outcomes.
Zhang, Qiang, Chai, Bo, Song, Bochuan, Zhao, Jingpeng.  2020.  A Hierarchical Fine-Tuning Based Approach for Multi-Label Text Classification. 2020 IEEE 5th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA). :51–54.
Hierarchical text classification has recently become increasingly challenging with the growing number of classification labels. In this paper, we propose a hierarchical fine-tuning based approach for hierarchical text classification. We use the ordered neurons LSTM (ONLSTM) model, combining the embedding of the text and the parent category, for hierarchical text classification with a large number of categories, which makes full use of the connection between the upper-level and lower-level labels. Extensive experiments show that our model outperforms the state-of-the-art hierarchical model at a lower computation cost.
Somsakul, Supawit, Prom-on, Santitham.  2020.  On the Network and Topological Analyses of Legal Documents Using Text Mining Approach. 2020 1st International Conference on Big Data Analytics and Practices (IBDAP). :1–6.
This paper presents a computational study of Thai legal documents using text mining and a network analytic approach. The Thai legal system relies heavily on existing judicial rulings. Thus, legal documents contain complex relationships and require careful examination. The objective of this study is to use text mining to model relationships between these legal documents and draw useful insights. As a result of the study, a structure of document relationships was found in the form of a network that reflects the meaningful relations between legal documents. This can potentially be developed further into a document retrieval system based on how documents are related in the network.
Zhang, Song, Li, Yang, Li, Gaoyang, Yu, Han, Hao, Baozhong, Song, Jinwei, Fan, Jingang.  2020.  An Improved Data Provenance Framework Integrating Blockchain and PROV Model. 2020 International Conference on Computer Science and Management Technology (ICCSMT). :323–327.
Data tracing is an important topic in the era of the digital economy, when data are considered one of the core factors in economic activities. However, current data traceability systems often fail to obtain public trust due to their centralization and opaqueness. Blockchain possesses natural technical features such as resistance to data tampering, anonymity, encryption security, etc., and shows great potential for improving data tracing credibility. In this paper, we propose a blockchain- and PROV-based multi-center data provenance solution in which the PROV model standardizes data record storage and provenance on the blockchain automatically and intelligently. The solution improves the transparency and credibility of the provenance data, helping the efficient control and open sharing of data assets.
Braun, Sarah, Albrecht, Sebastian, Lucia, Sergio.  2020.  A Hierarchical Attack Identification Method for Nonlinear Systems. 2020 59th IEEE Conference on Decision and Control (CDC). :5035–5042.
Many autonomous control systems are frequently exposed to attacks, so methods for attack identification are crucial for safe operation. To preserve the privacy of the subsystems and achieve scalability in large-scale systems, identification algorithms should not require global model knowledge. We analyze a previously presented method for hierarchical attack identification that is embedded in a distributed control setup for systems of systems with coupled nonlinear dynamics. It is based on the exchange of local sensitivity information and ideas from sparse signal recovery. In this paper, we prove sufficient conditions under which the method is guaranteed to identify all components affected by some unknown attack. Even though a general class of nonlinear dynamic systems is considered, our rigorous theoretical guarantees are applicable to practically relevant examples, which is underlined by numerical experiments with the IEEE 30 bus power system.
Zhang, Lin, Chen, Xin, Kong, Fanxin, Cardenas, Alvaro A..  2020.  Real-Time Attack-Recovery for Cyber-Physical Systems Using Linear Approximations. 2020 IEEE Real-Time Systems Symposium (RTSS). :205–217.
Attack detection and recovery are fundamental elements for the operation of safe and resilient cyber-physical systems. Most of the literature focuses on attack detection, while leaving attack recovery as an open problem. In this paper, we propose novel attack-recovery control for securing cyber-physical systems. Our recovery control consists of new concepts required for a safe response to attacks, which include the removal of poisoned data, the estimation of the current state, a prediction of the reachable states, and the online design of a new controller to recover the system. The synthesis of such recovery controllers for cyber-physical systems has barely been investigated so far. To fill this void, we present a formal-method-based approach to compute online a recovery control sequence that steers a system under an ongoing sensor attack from the current state to a target state such that no unsafe state is reachable on the way. The method solves a reach-avoid problem on a Linear Time-Invariant (LTI) model with the consideration of an error bound ε ≥ 0. The obtained recovery control is guaranteed to work on the original system if the behavioral difference between the LTI model and the system's plant dynamics is not larger than ε. Since a recovery control should be obtained and applied at the runtime of the system, in order to keep its computational cost as low as possible, our approach first builds a linear programming restriction with the accordingly constrained safety and target specifications for the given reach-avoid problem, and then uses a linear programming solver to find a solution. To demonstrate the effectiveness of our method, we provide (a) a comparison to previous work over 5 system models under 3 sensor attack scenarios: modification, delay, and replay; and (b) a scalability analysis based on a scalable model to evaluate the performance of our method on large-scale systems.
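A hedged sketch of the kind of reach-avoid linear program the abstract refers to, with a polytopic safe set {x : Sx ≤ s} and target set {x : Tx ≤ t} both tightened by the error bound ε (these sets and the tightening are generic assumptions about the setup, not the paper's exact restriction):

```latex
\begin{aligned}
\text{find }   & u_0,\dots,u_{N-1} \\
\text{s.t. }\; & x_{k+1} = A x_k + B u_k, && k = 0,\dots,N-1,\\
               & S x_k \le s - \varepsilon \mathbf{1}, && k = 1,\dots,N-1 \quad \text{(remain in the safe polytope, shrunk by } \varepsilon\text{)},\\
               & T x_N \le t - \varepsilon \mathbf{1}, && \text{(reach the target polytope, shrunk by } \varepsilon\text{)},\\
               & u_{\min} \le u_k \le u_{\max}, && k = 0,\dots,N-1.
\end{aligned}
```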
Yilmaz, Ibrahim, Siraj, Ambareen, Ulybyshev, Denis.  2020.  Improving DGA-Based Malicious Domain Classifiers for Malware Defense with Adversarial Machine Learning. 2020 IEEE 4th Conference on Information Communication Technology (CICT). :1–6.
Domain Generation Algorithms (DGAs) are used by adversaries to establish Command and Control (C&C) server communications during cyber attacks. Blacklists of known/identified C&C domains are used as one of the defense mechanisms. However, static blacklists generated by signature-based approaches can neither keep up nor detect never-seen-before malicious domain names. To address this weakness, we applied a DGA-based malicious domain classifier using the Long Short-Term Memory (LSTM) method with a novel feature engineering technique. Our model's performance shows a greater accuracy compared to a previously reported model. Additionally, we propose a new adversarial machine learning-based method to generate never-before-seen malware-related domain families. We augment the training dataset with new samples to make the training of the models more effective in detecting never-before-seen malicious domain names. To protect blacklists of malicious domain names against adversarial access and modifications, we devise secure data containers to store and transfer blacklists.
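As a rough sketch of a character-level LSTM domain classifier in this spirit (the layer sizes, toy encoding, and toy data are placeholders, not the authors' feature engineering or dataset):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

MAX_LEN, VOCAB = 63, 40            # max domain length and assumed character vocabulary size

model = Sequential([
    Embedding(input_dim=VOCAB, output_dim=32),
    LSTM(64),
    Dense(1, activation="sigmoid"),  # 1 = DGA-generated, 0 = benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def encode(domain):
    """Map characters to integer ids and pad to MAX_LEN (toy encoding)."""
    ids = [(ord(c) % (VOCAB - 1)) + 1 for c in domain.lower()[:MAX_LEN]]
    return ids + [0] * (MAX_LEN - len(ids))

X = np.array([encode(d) for d in ["google.com", "xjwqpkzmrtv.biz"]])
y = np.array([0, 1])               # toy labels
model.fit(X, y, epochs=1, verbose=0)
```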
Raich, Philipp, Kastner, Wolfgang.  2021.  A Computational Model for 6LoWPAN Multicast Routing. 2021 17th IEEE International Conference on Factory Communication Systems (WFCS). :143–146.
Reliable group communication is an important cornerstone for various applications in the domain of the Industrial Internet of Things (IIoT). Yet, despite various proposals, state-of-the-art (open) protocol stacks for IPv6-enabled Low Power and Lossy Networks (LLNs) have little to offer regarding standardized or agreed-upon protocols for correct multicast routing, not to mention reliable multicast. We present an informal computational model, which allows us to analyze the respective candidates for multicast routing. Further, we focus on the IEEE 802.15.4/6LoWPAN stack and discuss prominent multicast routing protocols and how they fit into this model.
Sagar, Subhash, Mahmood, Adnan, Sheng, Quan Z., Zhang, Wei Emma.  2020.  Trust Computational Heuristic for Social Internet of Things: A Machine Learning-Based Approach. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
The Internet of Things (IoT) is an evolving network of billions of interconnected physical objects, such as numerous sensors, smartphones, wearables, and embedded devices. These physical objects, generally referred to as smart objects, when deployed in the real world, aggregate useful information from their surrounding environment. Of late, this notion of IoT has been extended to incorporate social networking facets, which have led to the promising paradigm of the `Social Internet of Things' (SIoT). In SIoT, the devices operate as autonomous agents and provide exchange of information and service discovery in an intelligent manner by establishing social relationships among themselves with respect to their owners. Trust plays an important role in establishing trustworthy relationships among the physical objects and reduces probable risks in the decision-making process. In this paper, a trust computational model is proposed to extract individual trust features in a SIoT environment. Furthermore, a machine learning-based heuristic is used to aggregate all the trust features in order to ascertain an aggregate trust score. Simulation results illustrate that the proposed trust-based model isolates the trustworthy and untrustworthy nodes within the network in an efficient manner.
Wen, Guanghui, Lv, Yuezu, Zhou, Jialing, Fu, Junjie.  2020.  Sufficient and Necessary Condition for Resilient Consensus under Time-Varying Topologies. 2020 7th International Conference on Information, Cybernetics, and Computational Social Systems (ICCSS). :84–89.
Although quite a few results on resilient consensus of multi-agent systems with malicious agents and fixed topology have been reported in the literature, no results are known for such a problem in multi-agent systems with time-varying topologies. Herein, we study the resilient consensus problem of time-varying networked systems in the presence of misbehaving nodes. A novel concept of joint (r, s)-robustness is first proposed to characterize the robustness of time-varying topologies. It is further revealed that resilient consensus of multi-agent systems under the F-total malicious network can be reached by the Weighted Mean-Subsequence-Reduced algorithm if and only if the time-varying graph is jointly (F+1, F+1)-robust. Numerical simulations are finally performed to verify the effectiveness of the analytical results.
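For reference, one step of the Weighted Mean-Subsequence-Reduced algorithm at a single node can be sketched as follows (equal weights are assumed here; the jointly (F+1, F+1)-robust, time-varying topology determines each node's neighbours at every step):

```python
def wmsr_update(own_value, neighbor_values, F):
    """One Weighted Mean-Subsequence-Reduced (W-MSR) step for a single node.

    Discards up to F neighbor values strictly above own_value (the largest ones)
    and up to F strictly below it (the smallest ones), then averages what is left
    together with the node's own value.
    """
    below = sorted(v for v in neighbor_values if v < own_value)
    above = sorted(v for v in neighbor_values if v > own_value)
    equal = [v for v in neighbor_values if v == own_value]
    below_kept = below[F:] if len(below) > F else []
    above_kept = above[:len(above) - F] if len(above) > F else []
    kept = below_kept + equal + above_kept + [own_value]
    return sum(kept) / len(kept)

# toy step with F = 1: the adversarial outlier 9.9 (and the lowest value 0.48)
# are trimmed, so the update stays close to the honest values
print(wmsr_update(0.5, [0.48, 0.55, 9.9], F=1))   # -> 0.525
```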
Lata, Kiran, Ahmad, Salim, Kumar, Sanjeev, Singh, Deepali.  2020.  Cloud Agent-Based Encryption Mechanism (CAEM): A Security Framework Model for Improving Adoption, Implementation and Usage of Cloud Computing Technology. 2020 International Conference on Advances in Computing, Communication Materials (ICACCM). :99–104.
The fast growth of Information and Communication Technology (ICT) has led to the innovation of cloud computing, which is considered a key driver of technological innovation; as an IT innovation, cloud computing has added a new dimension to that importance by increasing the use of technology that drives economic development at the national and global levels. The continuing need for larger storage space (applications, files, videos, music, and others) is one of the reasons for adoption and implementation: users and enterprises are gradually changing the way and manner in which data and information are stored. Storing and retrieving data and information traditionally, using standalone computers, is no longer sustainable due to the high cost of peripheral devices; this further recommends organizational innovative adoption with regard to approaches for effectively reducing cost in businesses. Cloud computing provides a lot of prospects to users and organizations; it also exposes security concerns, which lead to low adoption, implementation, and usage. Therefore, the study examines standard ways of improving cloud computing adoption, implementation, and usage by proposing and developing a security model, using a design methodology, that will ensure secure cloud computing and also identify areas where future regularization could be operational.
Mizuta, Takanobu.  2020.  How Many Orders Does a Spoofer Need? - Investigation by Agent-Based Model -. 2020 7th International Conference on Behavioural and Social Computing (BESC). :1–4.
Most financial markets prohibit unfair trades as they reduce efficiency and diminish the integrity of the market. Spoofers place orders they have no intention of trading in order to manipulate market prices and profit illegally. Most financial markets prohibit such spoofing orders; however, further clarification is still needed regarding how many orders a spoofer needs to place in order to manipulate market prices and profit. In this study I built an artificial market model (an agent-based model for financial markets) to show how unbalanced buy and sell orders affect the expected returns, and I implemented the spoofer agent in the model. I then investigated how many orders the spoofer needs to place in order to manipulate market prices and profit illegally. The results indicate that showing more spoofing orders than waiting orders in the order book enables the spoofer to earn illegally, amplifies price fluctuation, and reduces the efficiency of the market.
2021-11-08
Afroz, Sabrina, Ariful Islam, S.M, Nawer Rafa, Samin, Islam, Maheen.  2020.  A Two Layer Machine Learning System for Intrusion Detection Based on Random Forest and Support Vector Machine. 2020 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE). :300–303.
Unauthorized access or intrusion is a massive threatening issue in the modern era. This study focuses on designing a model for an ideal intrusion detection system capable of defending a network by alerting the admins upon detecting any sort of malicious activity. The study proposes a two-layered anomaly-based detection model that uses a filter correlation method for dimensionality reduction along with Random Forest and Support Vector Machine as its classifiers. It achieved a very good detection rate against all sorts of attacks, along with a low rate of false alarms. The contribution of this study is that it could be of major help to computer scientists designing good intrusion detection systems to keep an industry or organization safe from cyber threats, as it achieves the desired qualities of a functional IDS model.
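A minimal sketch of such a two-layer pipeline (the synthetic data, the correlation-based filter, and the hand-off rule between the layers are assumptions for illustration, not the paper's exact design):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)      # toy "attack" label

# layer 0: filter-style feature selection by correlation with the label
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[-5:]                   # keep the 5 most correlated features
Xs = X[:, selected]

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xs, y)   # layer 1
svm = SVC(kernel="rbf").fit(Xs, y)                                         # layer 2

def predict(sample):
    s = sample[selected].reshape(1, -1)
    if rf.predict(s)[0] == 0:          # layer 1 sees benign traffic: pass it through
        return 0
    return int(svm.predict(s)[0])      # layer 2 confirms or overrides the alert

print(predict(X[0]))
```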
Singh, Juhi, Sharmila, V Ceronmani.  2020.  Detecting Trojan Attacks on Deep Neural Networks. 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP). :1–5.
Machine learning and artificial intelligence techniques are among the most widely used techniques. They have given rise to an online model-sharing market, where sharing and adopting models has become popular; this also gives attackers many new opportunities. The deep neural network is the most used approach for artificial intelligence techniques. In this paper we present a proof-of-concept method to detect Trojan attacks on deep neural networks. Deployed trojaned models can be dangerous in everyday human life (in applications such as automated vehicles). The attack first inverses the neural network to create general trojan triggers, and then retrains the model with external datasets to inject the trojan trigger into the model. The malicious behaviors are only activated by inputs carrying the trojan trigger. In the attack, the original datasets are not required to train the model; in practice, datasets are usually not shared due to privacy or copyright concerns. We use five different applications to demonstrate the attack and perform an analysis of the factors that affect the attack. The behavior of a trojan modification can be triggered without affecting the test accuracy for normal input datasets. After generating the trojan trigger and performing the attack, we apply SHAP as a defense against such attacks. SHAP is known for its unique explanations for model predictions.
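As a hedged sketch of how SHAP attributions might be used to flag trigger-dominated inputs (the model, data, and flagging threshold below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))
y_train = (X_train[:, 0] > 0).astype(int)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)

# explain the probability of the "positive" class with a model-agnostic explainer
explainer = shap.KernelExplainer(lambda z: model.predict_proba(z)[:, 1],
                                 shap.sample(X_train, 50))
x = rng.normal(size=(1, 8))
attr = np.abs(np.asarray(explainer.shap_values(x))).ravel()   # per-feature attribution

# flag inputs whose prediction is dominated by one unusually influential feature,
# a behaviour one might expect from a trojan trigger
if attr.max() > 0.8 * attr.sum():
    print("suspicious: prediction dominated by a single feature (possible trigger)")
```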
2021-10-12
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
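For context, the generic Lipschitz margin certificate that such defences build on can be sketched as follows (this is the standard conservative bound, not the tighter guarantee derived in the paper):

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Conservative certified radius around an input for the predicted class.

    Assumes every logit is lipschitz_const-Lipschitz w.r.t. the chosen Lp input
    norm; the difference of two such logits is then at most 2*lipschitz_const-
    Lipschitz, so the prediction cannot flip within margin / (2 * lipschitz_const).
    """
    top_two = np.sort(logits)[-2:]
    margin = top_two[1] - top_two[0]
    return margin / (2.0 * lipschitz_const)

print(certified_radius(np.array([2.3, 0.4, -1.1]), lipschitz_const=1.5))  # ~0.633
```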
Vinarskii, Evgenii, Demakov, Alexey, Kamkin, Alexander, Yevtushenko, Nina.  2020.  Verifying cryptographic protocols by Tamarin Prover. 2020 Ivannikov Memorial Workshop (IVMEM). :69–75.
Cryptographic protocols are utilized for establishing a secure session between “honest” agents which communicate strictly according to the protocol rules, as well as for ensuring the authenticated and confidential transmission of messages. The specification of a cryptographic protocol is usually presented as a set of requirements for the sequences of transmitted messages, including the format of such messages. Note that a protocol can describe several execution scenarios. All these requirements lead to a huge formal specification for a real cryptographic protocol, and therefore it is difficult to verify the security of the whole cryptographic protocol at once. In this paper, to overcome this problem, we suggest verifying the protocol security for its fragments. Namely, we verify the security properties for a special set of so-called traces of the cryptographic protocol. Intuitively, a trace of the cryptographic protocol is a sequence of computations, value checks, and transmissions on the sides of “honest” agents permitted by the protocol. In order to choose such a set of traces, we introduce an adversary model and the notion of a similarity relation for traces. We then verify the security properties of selected traces with Tamarin Prover. Experimental results for the EAP and Noise protocols clearly show that this approach can be promising for automatic verification of large protocols.