Khorasgani, Hamidreza Amini, Maji, Hemanta K., Wang, Mingyuan.
2021.
Optimally-secure Coin-tossing against a Byzantine Adversary. 2021 IEEE International Symposium on Information Theory (ISIT). :2858–2863.
Ben-Or and Linial (1985) introduced the full information model for coin-tossing protocols involving \$n\$ processors with unbounded computational power using a common broadcast channel for all their communications. For most adversarial settings, the characterization of the exact or asymptotically optimal protocols remains open. Furthermore, even for the settings where near-optimal asymptotic constructions are known, the exact constants or poly-logarithmic multiplicative factors involved are not entirely well-understood. This work studies \$n\$-processor coin-tossing protocols where every processor broadcasts an arbitrary-length message once. An adaptive Byzantine adversary, based on the messages broadcast so far, can corrupt \$k=1\$ processor. A bias-\$X\$ coin-tossing protocol outputs 1 with probability \$X\$ and 0 with probability \$(1-X)\$. A coin-tossing protocol's insecurity is the maximum change in the output distribution (in statistical distance) that a Byzantine adversary can cause. Our objective is to identify bias-\$X\$ coin-tossing protocols achieving near-optimal minimum insecurity for every \$X \in [0,1]\$. Lichtenstein, Linial, and Saks (1989) studied bias-\$X\$ coin-tossing protocols in this adversarial model where each party broadcasts an independent and uniformly random bit. They proved that the elegant “threshold coin-tossing protocols” are optimal for all \$n\$ and \$k\$. Furthermore, Goldwasser, Kalai, and Park (2015), Kalai, Komargodski, and Raz (2018), and Haitner and Karidi-Heller (2020) prove that \$k = \mathcal{O}(\sqrt{n} \cdot \mathsf{polylog}(n))\$ corruptions suffice to fix the output of any bias-\$X\$ coin-tossing protocol. These results encompass parties who send arbitrary-length messages, where each processor has multiple turns to reveal its entire message. We use an inductive approach to construct coin-tossing protocols, using a potential function as a proxy for measuring any bias-\$X\$ coin-tossing protocol's susceptibility to attacks in our adversarial model. Our technique is inherently constructive and yields protocols that minimize the potential function. It is incidentally the case that the threshold protocols minimize the potential function, even for arbitrary-length messages. We demonstrate that these coin-tossing protocols' insecurity is a 2-approximation of the optimal protocol in our adversarial model. For any other \$X \in [0,1]\$ that threshold protocols cannot realize, we prove that an appropriate (convex) combination of the threshold protocols is a 4-approximation of the optimal protocol. Finally, these results entail new (vertex) isoperimetric inequalities for density-\$X\$ subsets of product spaces of arbitrary-size alphabets.
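For intuition, the following minimal Python sketch illustrates the threshold coin-tossing protocol the abstract refers to, under the standard Lichtenstein-Linial-Saks construction (every processor broadcasts one uniform bit, and the output is 1 iff at least t bits are 1); the function names and parameters are illustrative, not taken from the paper.

import random
from math import comb

def threshold_bias(n, t):
    # Bias X of the threshold protocol: Pr[Binomial(n, 1/2) >= t].
    return sum(comb(n, i) for i in range(t, n + 1)) / 2 ** n

def run_threshold_protocol(n, t):
    # Every processor broadcasts one uniform bit; output 1 iff >= t ones.
    bits = [random.randint(0, 1) for _ in range(n)]
    return int(sum(bits) >= t)

# Example: n = 5, t = 3 realizes the unbiased coin, X = 0.5.
print(threshold_bias(5, 3))  # 0.5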
Buccafurri, Francesco, De Angelis, Vincenzo, Idone, Maria Francesca, Labrini, Cecilia.
2021.
A Distributed Location Trusted Service Achieving k-Anonymity against the Global Adversary. 2021 22nd IEEE International Conference on Mobile Data Management (MDM). :133–138.
When location-based services (LBS) are delivered, location data should be protected against honest-but-curious LBS providers, since location data are quasi-identifiers. One of the existing approaches to achieving this goal is location k-anonymity, which leverages the presence of a trusted party, called a location trusted service (LTS), playing the role of anonymizer. A drawback of this approach is that the LTS is a single point of failure and traces all the users. Moreover, the protection is completely nullified if a global passive adversary, able to monitor the flow of messages, is considered, since the source of the query can be identified despite location k-anonymity. In this paper, we propose a distributed and hierarchical LTS model that overcomes both the above drawbacks. Moreover, position notification is used as cover traffic to hide queries, and multicast is minimally adopted to hide responses, so that k-anonymity is preserved even against the global adversary, thus enabling the possibility that LBS are delivered within social networks.
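As a rough illustration of the underlying location k-anonymity idea, here is a generic cloaking sketch (a textbook-style mechanism, not the distributed hierarchical protocol proposed in the paper; all names are hypothetical):

def cloak(query_pos, all_positions, k, half=1.0):
    # Grow a square region around the querying user until it covers at
    # least k users; the LBS then sees only the region, never the exact
    # position of the querying user.
    assert len(all_positions) >= k, "need at least k users to anonymize"
    while True:
        inside = [p for p in all_positions
                  if abs(p[0] - query_pos[0]) <= half
                  and abs(p[1] - query_pos[1]) <= half]
        if len(inside) >= k:
            return (query_pos[0] - half, query_pos[1] - half,
                    query_pos[0] + half, query_pos[1] + half)
        half *= 2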
Janapriya, N., Anuradha, K., Srilakshmi, V..
2021.
Adversarial Deep Learning Models With Multiple Adversaries. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :522–525.
Adversarial machine learning algorithms address adversarial example generation, producing bogus input data with the ability to fool any machine learning model. As the word implies, an “adversary” is an opponent. In order to strengthen machine learning models, this paper discusses the weaknesses of machine learning models and how easily misclassification can be induced during the learning cycle. Existing methods, such as crafting adversarial examples and devising robust ML algorithms, frequently ignore the semantics and the overall pipeline surrounding the ML component. This research work develops an adversarial learning algorithm that considers a coordinated representation of all the features, with Convolutional Neural Networks (CNNs) treated explicitly. The algorithm expresses minimal adjustments to the data, represented over positive and negative class labels, such that the resulting data flow is misclassified by the CNN. The final results suggest a combination of game theory and evolutionary computation that proves effective in securing deep learning models against the exploitation of weaknesses, which are reproduced as attack scenarios against various adversaries.
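The “minimal adjustments” idea is easiest to see with a standard gradient-based adversarial example; the sketch below uses FGSM (a common technique, not necessarily the algorithm developed in this paper) against a PyTorch CNN:

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.03):
    # Take one small signed-gradient step on the input: visually
    # negligible, yet often enough to flip the CNN's predicted class.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()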
Silva, Douglas Simões, Graczyk, Rafal, Decouchant, Jérémie, Völp, Marcus, Esteves-Verissimo, Paulo.
2021.
Threat Adaptive Byzantine Fault Tolerant State-Machine Replication. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :78–87.
Critical infrastructures have to withstand advanced and persistent threats, which can be addressed using Byzantine fault tolerant state-machine replication (BFT-SMR). In practice, unattended cyberdefense systems rely on threat level detectors that synchronously inform them of changing threat levels. However, to have a BFT-SMR protocol operate unattended, the state of the art is still to configure it to withstand the highest possible number of faulty replicas \$f\$ it might encounter, which limits its performance, or to make the strong assumption that a trusted external reconfiguration service is available, which introduces a single point of failure. In this work, we present ThreatAdaptive, the first BFT-SMR protocol that is automatically strengthened or optimized by its replicas in reaction to threat level changes. We first determine under which conditions replicas can safely reconfigure a BFT-SMR system, i.e., adapt the number of replicas \$n\$ and the fault threshold \$f\$ so as to outpace an adversary. Since replicas typically communicate with each other over an asynchronous network, they cannot rely on consensus to decide how the system should be reconfigured. ThreatAdaptive avoids this pitfall by proactively preparing, while it optimizes its performance, the reconfiguration that may be triggered by an increasing threat. Our evaluation shows that ThreatAdaptive can meet the latency and throughput of BFT baselines configured statically for a particular threat level, and adapts 30% faster than previous methods that make stronger assumptions to provide safety.
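The arithmetic behind adapting \$n\$ and \$f\$ follows the classical asynchronous BFT bound \$n \geq 3f + 1\$; the sketch below (standard BFT arithmetic, not ThreatAdaptive's actual reconfiguration rules) shows why extra replicas must be prepared proactively before the fault threshold can be raised:

def required_replicas(f):
    # Classical asynchronous BFT-SMR tolerates f Byzantine replicas
    # only when n >= 3f + 1.
    return 3 * f + 1

def can_raise_threshold(current_n, new_f):
    # Raising f is safe only if enough replicas are already running;
    # otherwise new replicas must be prepared ahead of the threat.
    return current_n >= required_replicas(new_f)

print(can_raise_threshold(current_n=4, new_f=2))  # False: need n >= 7
print(can_raise_threshold(current_n=7, new_f=2))  # True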
Liu, Jieling, Wang, Zhiliang, Yang, Jiahai, Wang, Bo, He, Lin, Song, Guanglei, Liu, Xinran.
2021.
Deception Maze: A Stackelberg Game-Theoretic Defense Mechanism for Intranet Threats. ICC 2021 - IEEE International Conference on Communications. :1–6.
The intranets in modern organizations are facing severe data breaches and critical resource misuses. By reusing user credentials from compromised systems, Advanced Persistent Threat (APT) attackers can move laterally within the internal network. A promising new approach called deception technology enables the network administrator (i.e., the defender) to deploy decoys that deceive the attacker in the intranet and trap them in a honeypot. The defender then has to reasonably allocate decoys to potentially insecure hosts. Unfortunately, existing APT-related defense resource allocation models are infeasible because they neglect many realistic factors. In this paper, we make the decoy deployment strategy feasible by proposing a game-theoretic model called the APT Deception Game to describe the interactions between the defender and the attacker. More specifically, we decompose the decoy deployment problem into two subproblems, making the problem solvable. Considering the best response of an attacker who is aware of the defender’s deployment strategy, we provide an elitist reservation genetic algorithm to solve this game. Simulation results demonstrate the effectiveness of our deployment strategy compared with other heuristic strategies.
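To make the “elitist reservation genetic algorithm” concrete, here is a minimal sketch with a hypothetical fitness function (the paper's APT Deception Game utilities are not reproduced here); a chromosome is a 0/1 vector marking which hosts receive a decoy:

import random

def evolve_decoy_plan(num_hosts, budget, fitness, pop=50, gens=200, elite=2):
    def random_plan():
        plan = [0] * num_hosts
        for i in random.sample(range(num_hosts), budget):
            plan[i] = 1
        return plan

    population = [random_plan() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        nxt = population[:elite]                 # elitist reservation
        while len(nxt) < pop:
            a, b = random.sample(population[:pop // 2], 2)
            cut = random.randrange(1, num_hosts)
            child = a[:cut] + b[cut:]            # one-point crossover
            while sum(child) > budget:           # repair to the decoy budget
                child[random.choice([i for i, v in enumerate(child) if v])] = 0
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)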
Najafi, Maryam, Khoukhi, Lyes, Lemercier, Marc.
2021.
A Multidimensional Trust Model for Vehicular Ad-Hoc Networks. 2021 IEEE 46th Conference on Local Computer Networks (LCN). :419–422.
In this paper, we propose a multidimensional trust model for vehicular networks. Our model evaluates the trustworthiness of each vehicle using two main modes: 1) Direct Trust Computation (DTC), based on a direct connection between source and target nodes, and 2) Indirect Trust Computation (ITC), based on indirect communication between source and target nodes. The principal characteristics of this model are flexibility and high fault tolerance, thanks to an automatic trust score assessment. In our extensive simulations, we use the Total Cost Rate to confirm the performance of the proposed trust model.
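A minimal sketch of how DTC and ITC scores might be combined (the weights and formulas here are illustrative assumptions, not the paper's exact model):

def direct_trust(own_interactions):
    # DTC: share of the evaluator's own interactions with the target
    # judged honest; neutral 0.5 when there is no history.
    return sum(own_interactions) / len(own_interactions) if own_interactions else 0.5

def indirect_trust(recommendations, recommender_weights):
    # ITC: neighbours' opinions, weighted by how much we trust each neighbour.
    total = sum(recommender_weights)
    if total == 0:
        return 0.5
    return sum(r * w for r, w in zip(recommendations, recommender_weights)) / total

def trust_score(own_interactions, recommendations, recommender_weights, alpha=0.7):
    # Blend of the two modes (alpha is a hypothetical weight).
    return (alpha * direct_trust(own_interactions)
            + (1 - alpha) * indirect_trust(recommendations, recommender_weights))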
A, Sujan Reddy, Rudra, Bhawana.
2021.
Evaluation of Recurrent Neural Networks for Detecting Injections in API Requests. 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). :0936–0941.
Application programming interfaces (APIs) are a vital part of every online business. APIs are responsible for transferring data across systems within a company or to users through web or mobile applications. Security is a concern for any public-facing application. The objective of this study is to analyze incoming requests to a target API and flag any malicious activity. This paper proposes a solution using sequence models to identify whether or not an API request contains SQL, XML, JSON, or other types of malicious injections. We also propose a novel heuristic procedure that minimizes the number of false positives, i.e., valid API requests that the model misclassifies as malicious.
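A minimal sketch of a character-level sequence model for this task, with a raised decision threshold as one simple false-positive-reducing heuristic (the architecture, sizes, and threshold are assumptions, not the paper's exact model or heuristic):

import numpy as np
import tensorflow as tf

MAX_LEN, VOCAB = 200, 256  # truncate requests; byte-level vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(request is malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def flag_request(request_bytes, threshold=0.9):
    # Flag only high-confidence predictions so that borderline (usually
    # valid) requests are not reported as false positives.
    ids = list(request_bytes[:MAX_LEN])
    ids += [0] * (MAX_LEN - len(ids))  # zero-pad to a fixed length
    x = np.array([ids], dtype="int32")
    return float(model(x)[0, 0]) >= threshold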