Bibliography
Trusted collaboration that satisfies the requirements of (a) adequate transparency and (b) preservation of the privacy of business-sensitive information is a key factor in ensuring the success and adoption of online business-to-business (B2B) collaboration platforms. Our work proposes novel ways of stringing together game-theoretic modeling, blockchain technology, and cryptographic techniques to build such a platform for B2B collaboration involving enterprise buyers and sellers who may be strategic. The B2B platform builds upon three ideas. The first is to use a permissioned blockchain with smart contracts as the technical infrastructure for the platform. The second is that these smart contracts implement deep business logic, derived from a rigorous analysis of a repeated-game model of the strategic interactions between buyers and sellers, to induce honest behavior from both sides. The third is a formal framework that captures the essential requirements for secure and private B2B collaboration; in this direction, we develop cryptographic regulation protocols that, in conjunction with the blockchain, help implement such a framework. We believe our work is an important first step toward a platform that enables B2B collaboration among strategic and competitive agents while maximizing social welfare and addressing the privacy concerns of the agents.
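For intuition only, the following is a minimal sketch of the kind of repeated-game rule that contract logic of this sort could encode; the names, payoffs, and the grim-trigger structure are illustrative assumptions, not the paper's actual mechanism.

```python
# Illustrative sketch (not the paper's protocol): a grim-trigger-style rule a
# smart contract could apply to induce honest behavior in a repeated B2B game.
# All names and the payoff/penalty values below are hypothetical.

def willing_to_trade(history: list) -> bool:
    """Cooperate with a counterparty only if it has never been caught cheating."""
    return all(history)  # grim trigger: one recorded defection ends cooperation

def settle_round(honest: bool, gain_honest=10.0, gain_cheat=15.0, penalty=100.0):
    """Per-round payoff: cheating yields a short-term gain, but once detected
    and recorded on-chain it triggers a penalty that outweighs that gain,
    making honesty the rational long-run strategy."""
    return gain_honest if honest else gain_cheat - penalty
```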
Trust is known to be a key component of human social relationships, and it defines human behavior toward others to a large extent. Generative models have been used extensively in the study of social networks to simulate different characteristics and phenomena of social graphs. In this work, an attempt is made to understand how trust in social graphs can be combined with generative modeling techniques to generate trust-based social graphs. The generated social graphs are then compared with the original social graphs to evaluate how trust helps in generative modeling. Two well-known social network data sets, the soc-Bitcoin and the Wikipedia administrator network data sets, are used in this work. Social graphs are generated from these data sets and compared with the original graphs, alongside other standard generative modeling techniques. While other generative modeling techniques have been available for some time, this investigation with real social graph data sets validates that trust can be an important factor in generative modeling.
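As a rough illustration of what "trust-based generative modeling" could look like, here is a sketch of preferential attachment biased by a per-node trust score; the attachment rule is an assumption for illustration, not the paper's model.

```python
import random
import networkx as nx

def trust_preferential_attachment(n, m, trust):
    """Illustrative sketch: grow a graph where a new node attaches to m
    existing nodes with probability proportional to degree * trust score.
    `trust` maps node -> score in (0, 1]; unknown nodes default to 0.5."""
    G = nx.complete_graph(m)
    for new in range(m, n):
        weights = [(v, (G.degree(v) + 1) * trust.get(v, 0.5)) for v in G.nodes]
        total = sum(w for _, w in weights)
        targets = set()
        while len(targets) < m:          # weighted sampling without replacement
            r, acc = random.uniform(0, total), 0.0
            for v, w in weights:
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        G.add_edges_from((new, t) for t in targets)
    return G
```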
Nowadays, Microblog has become an important online social networking platform through which a large number of users share information. Many malicious users release false news driven by various interests, which seriously affects the usability of the Microblog platform. Evaluating the credibility of Microblog users has therefore become an important research issue. This paper proposes a Microblog user credibility evaluation algorithm based on trust propagation. To address the high cost and low precision caused by malicious users attacking the algorithm through false social relationships, and by manual selection of seed sets, this paper proposes two optimization strategies: a pruning algorithm based on social activity and similarity, and a seed-node selection algorithm based on clustering. The pruning algorithm trims off the attack edges established between malicious users and normal users. The seed-node selection algorithm efficiently selects a highly available set of seed nodes; the user social-relationship graph is then used to perform two-way trust-propagation scoring, so that low-trust users receive lower trust scores and malicious users are thereby identified. Experiments verify the effectiveness of the proposed trust-based algorithm in evaluating the credibility of Microblog users.
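For context, a minimal TrustRank-style propagation from a seed set is sketched below; the paper's two-way propagation, pruning, and clustering-based seed selection are more involved, so this only shows the basic mechanism of trust flowing out of seeds.

```python
import networkx as nx

def propagate_trust(G: nx.DiGraph, seeds: set, alpha=0.85, iters=50):
    """Seed-based trust propagation sketch: trust mass originates at the seed
    set and flows along social edges; users far from (or disconnected from)
    trusted seeds end up with low scores, flagging likely malicious accounts."""
    base = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in G}
    score = dict(base)
    for _ in range(iters):
        nxt = {v: (1 - alpha) * base[v] for v in G}
        for u in G:
            out = G.out_degree(u)
            if out:
                share = alpha * score[u] / out
                for v in G.successors(u):
                    nxt[v] += share
        score = nxt
    return score
```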
We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC) based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
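For reference, the standard Network Lasso objective (Hallac et al.) that this formulation instantiates is

\[
\min_{\{x_i\}} \; \sum_{i \in V} f_i(x_i) \;+\; \lambda \sum_{(j,k) \in E} w_{jk}\, \lVert x_j - x_k \rVert_2 ,
\]

where each vertex $i$ (here, an MEC environment) holds local variables $x_i$ and a local loss $f_i$, and the non-squared $\ell_2$ edge penalty drives $x_j = x_k$ exactly across many edges, which is what yields simultaneous clustering. ADMM decomposes this into per-node and per-edge updates that can run in parallel across the MEC topology; the specific $f_i$ and edge weights used for trust prediction are defined in the paper.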
Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model, where an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a real-time scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash), including its timing guarantees. This framework is the first to allow the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
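The timing pattern behind this task model can be sketched in a few lines; the names and the simple feasibility condition below are illustrative assumptions (the actual analysis in the paper is a full schedulability analysis inside a verified hypervisor).

```python
# Illustrative sketch of the mixed-trust watchdog pattern described above.

def run_mixed_trust_job(untrusted_done_at, activation_time_E, trusted_wcet_C,
                        deadline_D):
    """If the untrusted part has not finished by time E, the hypervisor's
    trusted scheduler activates the trusted fallback, which must still
    complete by the deadline D."""
    if untrusted_done_at is not None and untrusted_done_at <= activation_time_E:
        return "untrusted result", untrusted_done_at
    # Watchdog fires at E; safety requires E + C <= D (checked offline).
    assert activation_time_E + trusted_wcet_C <= deadline_D, "not schedulable"
    return "trusted fallback result", activation_time_E + trusted_wcet_C
```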
Malware is pervasive and poses serious threats to the normal operation of business processes in the cloud. Cloud computing environments typically have hundreds of hosts that are connected to each other, often with high-risk trust assumptions and/or protection mechanisms that are not difficult to break. Malware often exploits such weaknesses, as its immediate goal is usually to spread itself to as many hosts as possible. This propagation is often difficult to detect because the malware may reside in multiple components across the software or hardware stack. In this scenario, it is usually best to contain the malware to the smallest possible number of hosts, and it is also critical for system administrators to resolve the issue in a timely manner. Furthermore, resolution often requires that several participants across different organizational teams scramble together to address the intrusion. In this vision paper, we define this problem in detail. We then present our vision of decentralized malware containment and the challenges and issues associated with this vision. The containment approach involves detection and response using graph analytics coupled with a blockchain framework. We propose the use of dominance frontiers to profile the nodes that must be involved in the containment process. Smart contracts are used to obtain consensus among the involved parties. The paper presents a basic implementation of this proposal, and we also discuss some open problems related to our vision.
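Dominance frontiers are a standard graph construct (Cytron et al.); the sketch below computes them from immediate dominators using networkx, as one plausible way to profile containment nodes. How the paper maps hosts and malware flows onto this graph is not reproduced here.

```python
import networkx as nx

def dominance_frontiers(G: nx.DiGraph, start):
    """Dominance frontiers via the standard Cytron et al. algorithm, built on
    networkx's immediate-dominator computation. DF(u) holds the nodes where
    u's dominance ends, i.e., candidate boundaries for containment."""
    idom = nx.immediate_dominators(G, start)
    df = {v: set() for v in idom}
    for v in idom:
        preds = [p for p in G.predecessors(v) if p in idom]
        if len(preds) >= 2:
            for p in preds:
                runner = p
                while runner != idom[v]:
                    df[runner].add(v)
                    runner = idom[runner]
    return df
```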
It is notably challenging to design an efficient and secure signature scheme based on error-correcting codes. One approach to building such signature schemes is to derive them from an identification protocol through the Fiat-Shamir transform. All such protocols based on codes must be run for several rounds, since each run of the protocol allows a cheating probability of either 2/3 or 1/2. The resulting signature size is proportional to the number of rounds, making the 1/2 cheating-probability version more attractive. We present a signature scheme based on double circulant codes in the rank metric, derived from an identification protocol with cheating probability 2/3. We reduce this probability to almost 1/2, obtaining the smallest signature among code-based signature schemes based on the Fiat-Shamir paradigm: around 22 KBytes for a 128-bit security level. Furthermore, among all code-based signature schemes, our proposal has the lowest combined signature-plus-public-key size, and the smallest secret and public key sizes. We provide a security proof in the Random Oracle Model, implementation performance figures, and a comparison with the parameters of similar signature schemes.
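To see why the cheating probability drives signature size: reaching 2^-128 soundness needs n rounds with p^n <= 2^-128, i.e., 128 rounds at p = 1/2 but about 219 rounds at p = 2/3 (128 / log2(3/2) ≈ 219). The generic Fiat-Shamir skeleton below shows how the rounds and the hash-derived challenges fit together; the `commit` and `respond` callables stand in for the paper's rank-metric identification protocol and are assumptions.

```python
import hashlib

def fiat_shamir_sign(rounds, commit, respond, message: bytes):
    """Generic Fiat-Shamir sketch (not the paper's concrete scheme): the
    verifier's random challenges are replaced by a hash of all commitments
    and the message; the signature is the transcript, so its size grows
    linearly with the number of rounds."""
    commitments = [commit(i) for i in range(rounds)]
    h = hashlib.sha256(b"".join(commitments) + message).digest()
    # Derive one small challenge per round from the hash output ({0,1} here).
    challenges = [h[i % len(h)] % 2 for i in range(rounds)]
    responses = [respond(i, c) for i, c in enumerate(challenges)]
    return commitments, challenges, responses
```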
The proliferation of IoT devices in smart homes, hospitals, and enterprise networks is widespread and continuing to increase in a superlinear manner. The question is: how can one assess the security of an IoT network holistically? In this paper, we explore two dimensions of security assessment: using vulnerability information and attack vectors of IoT devices and their underlying components (compositional security scores), and using SIEM logs captured from the communications and operations of such devices in a network (dynamic activity metrics). We approach threat modeling using attack graphs and, to that end, propose the notion of attack circuits, which are generated from input/output pairs constructed from CVEs using NLP, together with an attack graph composed of these circuits. These measures are used to evaluate the security of individual IoT devices and of the overall IoT network, demonstrating the effectiveness of attack circuits as practical tools for computing security metrics (exploitability, impact, and risk to confidentiality, integrity, and availability) of the network. Our system provides insight into the attack paths an adversary may utilize, ranked by their exploitability, impact, or overall risk. We perform experiments on IoT networks to demonstrate the efficacy of the proposed techniques.
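One way such per-CVE scores can be composed along a path is sketched below; the min/max aggregation rules are assumptions chosen for illustration and are not the paper's exact metrics.

```python
# Hypothetical illustration of scoring one attack path: per-CVE
# (exploitability, impact) subscores, e.g. taken from CVSS, are composed.

def path_risk(cves):
    """cves: list of (exploitability, impact) pairs, each on a 0-10 scale."""
    # Assumption: a chain is only as exploitable as its hardest step (min),
    # while the attacker obtains the worst impact reached along the way (max).
    exploitability = min(e for e, _ in cves)
    impact = max(i for _, i in cves)
    return exploitability * impact / 10.0   # normalized 0-10 risk score

print(path_risk([(8.6, 5.9), (3.9, 9.1)]))  # -> 3.549
```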
Realizing the importance of the concept of the “smart city” and its impact on quality of life, many infrastructures, such as power plants, have begun their digital transformation by leveraging modern computing and advanced communication technologies. Unfortunately, with the increasing number of connections, power plants become more and more vulnerable and an attractive target for cyber-physical attacks. The analysis of interdependencies among system components reveals interdependent connections and facilitates the identification of those components most in need of special protection. In this paper, we review the recent literature that utilizes graph-based and network-based models to study these interdependencies. A comprehensive overview is presented, based on the main features of the systems, including communication direction, control parameters, research target, scalability, security, and safety. We also assess the computational complexity associated with the reviewed approaches and use this metric to gauge their scalability.
The Common Vulnerability Scoring System (CVSS) is an industry standard for assessing the vulnerability of nodes in traditional computer systems. The metrics computed by CVSS are used to determine critical nodes and attack paths. However, traditional IT security models do not fit IoT embedded networks, owing to the distinct nature and unique characteristics of IoT systems. This paper analyses the application of CVSS to IoT embedded systems and proposes an improved vulnerability scoring system based on the CVSS v3 framework. The proposed framework, named CVSSIoT, is applied to a realistic IT supply chain system, and the results are compared with actual vulnerabilities from the National Vulnerability Database. The comparison validates the proposed model. CVSSIoT is not only effective, simple, and capable of vulnerability evaluation for traditional IT systems, but also exploits the unique characteristics of IoT devices.
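For reference, the standard CVSS v3.1 base-score equations that CVSSIoT builds on are straightforward to compute; the sketch below implements them from the published first.org specification (CVSSIoT's IoT-specific modifications are not reproduced here).

```python
import math

def cvss_v3_base(av, ac, pr, ui, c, i, a, scope_changed=False):
    """Standard CVSS v3.1 base score. Inputs are the numeric metric weights
    from the spec (e.g., AV:N = 0.85, C:H = 0.56)."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    raw = (min(1.08 * (impact + exploitability), 10) if scope_changed
           else min(impact + exploitability, 10))
    return math.ceil(raw * 10) / 10   # CVSS "round up to one decimal"

# AV:N/AC:L/PR:N/UI:N with C:H/I:H/A:H, scope unchanged -> 9.8 (Critical)
print(cvss_v3_base(0.85, 0.77, 0.85, 0.85, 0.56, 0.56, 0.56))
```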
This article discusses existing approaches to building regional-scale networks. The authors offer a mathematical model of the network growth process, on the basis of which simulation is performed. The availability characteristic is used as the criterion for measuring optimality. The report describes the mechanism for measuring network availability and proposes changes to the procedure for designing regional networks that can improve their qualitative characteristics. The efficiency of the changes is confirmed by simulation.
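The article's exact availability model is not specified in the abstract; as background, the standard series/parallel availability algebra commonly used in such assessments is sketched below.

```python
# Series/parallel availability algebra (background, not the article's model).

def availability_series(components):
    """All components are needed: availabilities multiply."""
    a = 1.0
    for ai in components:
        a *= ai
    return a

def availability_parallel(paths):
    """Redundant paths: the network fails only if every path fails."""
    u = 1.0
    for ai in paths:
        u *= (1 - ai)
    return 1 - u

# e.g., two redundant two-hop paths, each hop 99.9% / 99.8% available
ring = availability_parallel([availability_series([0.999, 0.998])] * 2)
```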
In this paper, we propose a new method for optimizing the deployment of security solutions within an IoT network. Our approach uses dominating sets and centrality metrics to propose an IoT security framework in which security functions are optimally deployed among devices. An example of such a solution, based on end-to-end-like encryption, is presented. The results reveal overall increased security within the network with minimal impact on traffic.
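A minimal sketch of the placement idea follows: host security functions on a dominating set (so every device is adjacent to one) and prioritize by centrality. The topology and the tie-breaking rule are assumptions; the paper's exact selection criteria may differ.

```python
import networkx as nx

# Stand-in IoT topology; real deployments would use the measured network.
G = nx.random_geometric_graph(50, 0.2, seed=1)

dom = nx.dominating_set(G)                    # every node is in or adjacent to dom
bc = nx.betweenness_centrality(G)
placement = sorted(dom, key=lambda v: bc[v], reverse=True)  # most central first
print(f"{len(placement)} security nodes cover all {G.number_of_nodes()} devices")
```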
In this paper, the cybersecurity of distributed secondary voltage control of AC microgrids is addressed. A resilient approach is proposed to mitigate the negative impacts of cyberthreats on the voltage and reactive power control of Distributed Energy Resources (DERs). The proposed secondary voltage control is inspired by the resilient flocking of mobile robot teams. The approach utilizes a virtual time-varying communication graph in which the quality of the communication links is virtualized and determined based on the synchronization behavior of the DERs. The control protocols on the DERs ensure that the connectivity of the virtual communication graph stays above a specific resilience threshold. Once the resilience threshold is satisfied, the Weighted Mean Subsequence Reduced (WMSR) algorithm is applied to achieve voltage restoration in the presence of malicious adversaries. A typical microgrid test system including 6 DERs is simulated to verify the validity of the proposed resilient control approach.
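The W-MSR update rule itself is simple and well documented in the resilient-consensus literature; a single step is sketched below (uniform weights are an assumption for simplicity).

```python
def wmsr_step(own, neighbor_values, F):
    """One W-MSR update: discard up to F neighbor values strictly above own
    and up to F strictly below own, then average the rest with own value.
    With sufficient graph robustness, F malicious neighbors cannot pull the
    update outside the range spanned by honest values."""
    low = sorted(v for v in neighbor_values if v < own)[F:]          # drop F lowest
    high = sorted(v for v in neighbor_values if v > own)
    high = high[:max(0, len(high) - F)]                              # drop F highest
    same = [v for v in neighbor_values if v == own]
    kept = [own] + low + same + high
    return sum(kept) / len(kept)

print(wmsr_step(1.0, [0.9, 1.1, 1.05, 50.0], F=1))  # the outlier 50.0 is discarded
```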
We propose a method to maintain high resource availability in a networked heterogeneous multi-robot system subject to resource failures. In our model, resources such as sensing and computation are available on robots, and the robots are engaged in a joint task using these pooled resources. When a resource on a particular robot becomes unavailable (e.g., a sensor ceases to function), the system automatically reconfigures so that the robot continues to have access to this resource by communicating with other robots. Specifically, we consider the problem of selecting edges to be modified in the system's communication graph after a resource failure has occurred. We define a metric that allows us to characterize the quality of the resource distribution in the network represented by the communication graph. Upon a resource becoming unavailable due to failure, we reconfigure the network so that the resource distribution is brought as close to the maximal resource distribution as possible without a large change in the number of active inter-robot communication links. Our approach uses mixed integer semi-definite programming to achieve this goal. We employ a simulated annealing method to compute a spatial formation that satisfies the inter-robot distances imposed by the topology, along with other constraints. Our method can compute a communication topology, spatial formation, and formation-change motion plan in a few seconds. We validate our method in simulation and in real-robot experiments with a team of seven quadrotors.
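As a loose illustration of scoring candidate edge modifications, the sketch below uses algebraic connectivity (the Fiedler value) as a stand-in quality metric; note this is only a common proxy for graph connectivity, not the paper's resource-distribution metric, and the brute-force search stands in for the paper's MISDP formulation.

```python
import numpy as np
import networkx as nx

def algebraic_connectivity(G):
    """Second-smallest Laplacian eigenvalue: a standard connectivity proxy."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return float(np.sort(np.linalg.eigvalsh(L))[1])

G = nx.cycle_graph(7)   # seven robots in a ring
# Pick the single added link that best improves connectivity (illustrative).
best = max(nx.non_edges(G),
           key=lambda e: algebraic_connectivity(nx.Graph(list(G.edges) + [e])))
print("add link:", best)
```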
Smart technologies have facilitated the generation and collection of huge volumes of data on a daily basis, involving highly sensitive and diverse data such as personal, organisational, environmental, energy, transport, and economic data. Data analytics provides solutions for various issues faced by smart cities, such as crisis response, disaster resilience, emergency management, and smart traffic management; this requires the distribution of sensitive data among various entities within or outside the smart city. Sharing sensitive data creates a need for the efficient usage of smart city data to provide smart applications and utility to end users in a trustworthy and safe mode. If this shared sensitive data is leaked, it can cause damage and severe risk to the city's resources. Protecting critical data from unauthorized disclosure is the biggest issue for the success of any project. Data leakage detection provides a set of tools and technologies that can efficiently resolve the concerns related to smart city critical data. This paper showcases an approach to detect leakage caused intentionally or unintentionally. The model represents the allotment of data objects between diverse agents using a bigraph. The objective is to secure critical data by revealing the guilty agent who caused the data leakage.
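The underlying intuition of guilty-agent detection can be sketched as follows; this toy scoring (weighting leaked objects by how rarely they were allocated) is an illustration only, and the paper's bigraph-based model is different.

```python
# Toy illustration: an agent looks guiltier the more of the leaked set could
# only have come from data allocated to that agent.

def guilt_scores(allocations: dict, leaked: set):
    scores = {}
    for agent, objs in allocations.items():
        overlap = objs & leaked
        # Weight each leaked object by how rarely it was handed out.
        score = sum(1.0 / sum(o in a for a in allocations.values())
                    for o in overlap)
        scores[agent] = score / len(leaked) if leaked else 0.0
    return scores

print(guilt_scores({"A": {1, 2, 3}, "B": {3, 4}}, leaked={2, 3}))
# -> A scores higher: object 2 was allocated only to A.
```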
With the rapid growth of network size and complexity, network defenders face increasing challenges in protecting networked computers and other devices from acute attacks. Traffic visualization is an essential element of anomaly detection systems for the visual observation and detection of distributed DoS attacks. This paper presents an interactive visualization system called TVis, proposed to detect both low-rate and high-rate DDoS attacks using Heron's triangle-area mapping. TVis allows network defenders to identify and investigate anomalies in internal and external network traffic in both online and offline modes. We model the network traffic as an undirected graph and compute a triangle-area map based on the incidences at each vertex for each 5-second time window. The system triggers an alarm iff the area of the mapped triangle exceeds a dynamic threshold. TVis performs well for both low-rate and high-rate DDoS detection in comparison to its competitors.
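The geometric core is just Heron's formula; the sketch below shows it together with an assumed per-window feature-to-sides mapping (which traffic features form the triangle's sides is the paper's design and is only stood in for here).

```python
import math

def herons_area(a, b, c):
    """Heron's formula: triangle area from its three side lengths."""
    s = (a + b + c) / 2.0
    return math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

def tvis_like_alarm(features, threshold):
    """Illustrative stand-in for the per-vertex mapping: three traffic
    features of a 5-second window (e.g., incidence counts; assumed names)
    become triangle sides; alarm iff the area exceeds the current
    dynamic threshold."""
    a, b, c = features
    return herons_area(a, b, c) > threshold
```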
This article analyzes the concept of resilience in relation to the development of computing. The strategy for reacting to perturbations in this process can be based either on "harsh resistance" or on "smarter elasticity." Our models are descriptive, defining the path of evolutionary development as structuring under the perturbations of the natural order, and they enable the analysis of the relationships among models, structures, and factors of evolution. Among these, two features are critical: parallelism and "fuzziness", which to a large extent determine the rate of change of computing development, especially in critical applications. Both reversible and irreversible development processes, related to elastic and resistant methods of problem solving, are discussed. The sources of perturbations lie near resource boundaries and relate to problem sizes that grow with progress, combined with a lack of computational "checkability" of resources, i.e., data with inadequate models, methodologies, and means. As a case study, the problem of hidden faults caused by growth, the deficit of resources, and the limited checkability of digital circuits in critical applications is analyzed.
Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising components whose correctness must be validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms exhibit rather poor survivability characteristics at run-time. (For example, MCSh allows less-important, LO-criticality workloads to be discarded entirely in the event that run-time behavior does not comply with the assumptions under which the correctness of the LO-criticality workload was verified.) Here we seek to extend MCSh to incorporate survivability considerations by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of these algorithms and to compare different algorithms from the perspective of survivability. We further propose that MCSh seek scheduling algorithms with better survivability properties than current ones, which, having been developed within a survivability-agnostic framework, tend to focus exclusively on pre-runtime verification and ignore survivability issues entirely.
We propose a serverless computing mechanism for distributed computation based on polar codes. Serverless computing is an emerging cloud-based computation model that lets users run their functions on the cloud without provisioning or managing servers. Our proposed approach is a hybrid computing framework that carries out computationally expensive tasks, such as linear algebraic operations involving large-scale data, using serverless computing, and does the rest of the processing locally. We address the limitations and reliability issues of serverless platforms, such as straggling workers, using coding theory, drawing on recent literature on coded computation. The proposed mechanism uses polar codes to ensure straggler resilience in a computationally effective manner. We provide extensive evidence showing that polar codes outperform other coding methods in this setting. We have designed a sequential decoder specifically for polar codes in erasure channels with full-precision inputs and outputs. In addition, we extend the proposed method to the matrix multiplication case where both matrices being multiplied are coded. The proposed coded computation scheme is implemented for AWS Lambda. Experimental results are presented in which the performance of the proposed coded computation technique is tested in optimization via gradient descent. Finally, we introduce the idea of partial polarization, which reduces the computational burden of encoding and decoding at the expense of straggler resilience.
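To make the coding idea concrete, here is a minimal sketch of polar-style encoding over the reals, where a straggling worker corresponds to an erasure of one coded row; the information-index choice is an assumption, and the paper's sequential erasure decoder and partial polarization are not reproduced.

```python
import numpy as np

def polar_generator(n_levels):
    """G = F^{(x)n} with F = [[1, 0], [1, 1]], the kernel behind polar
    encoding (bit-reversal permutation omitted for brevity)."""
    F = np.array([[1.0, 0.0], [1.0, 1.0]])
    G = np.array([[1.0]])
    for _ in range(n_levels):
        G = np.kron(G, F)
    return G

# Encode 4 data blocks into 8 coded tasks for 8 serverless workers; any
# straggler erases one coded row, which the decoder must tolerate.
G = polar_generator(3)                                  # 8 x 8
data = np.random.rand(4, 16)                            # 4 blocks of matrix rows
blocks = np.zeros((8, 16))
blocks[[3, 5, 6, 7]] = data     # information indices (assumed; rest "frozen")
coded = G @ blocks              # one coded task per row
```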
This article presents the experience and practical results of the authors' exploratory research on the problem of ensuring the cyber resilience of critical information infrastructure under previously unknown, heterogeneous mass cyber attacks, based on similarity invariants. Importantly, the results obtained significantly complement the well-known practices and recommendations of ISO 22301 (https://www.iso.org), MITRE PR 15-1334 (www.mitre.org), and NIST SP 800-160 (www.nist.gov) with respect to developing quantitative metrics and measures of cyber resilience. This makes it possible to discover and formally present a general law governing the effectiveness of ensuring the cyber resilience of modern Industry 4.0 systems in the face of growing security threats.
As an emerging paradigm for energy-efficient design, approximate computing can reduce power consumption through the simplification of logic circuits. Although approximate computing introduces calculation errors, their impact on final results can be negligible in error-resilient applications such as Convolutional Neural Networks (CNNs). Approximate computing has therefore been applied to CNNs to reduce their high demand for computing resources and energy. In contrast to traditional methods such as reducing data precision, this paper investigates the effect of approximate computing on the accuracy and power consumption of CNNs. To optimize the approximate computing technology applied to CNNs, we propose a method for quantifying the error resilience of each neuron by theoretical analysis, and we observe that error resilience varies widely across neurons. On the basis of this quantified error resilience, we propose dynamic adaptation of the approximate bit-width together with a corresponding configurable adder to fully exploit the error resilience of CNNs. Experimental results show that the proposed method further improves power efficiency while maintaining high accuracy. By adopting the optimal approximate bit-width for each layer, found by our proposed algorithm, dynamic adaptation of the approximate bit-width reduces power consumption by more than 30% with less than 1% loss of accuracy for LeNet-5.
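As background, one classic configurable approximate adder is the Lower-part OR Adder (LOA), sketched below; the paper's configurable adder may differ in detail, but the idea of a tunable approximate bit-width k is the same.

```python
def loa_add(x: int, y: int, k: int) -> int:
    """LOA-style approximate addition: the k least significant bits are OR-ed
    (no carry logic, so cheaper hardware), the upper bits are added exactly.
    Raising k saves power but increases error, mirroring per-layer bit-width
    adaptation. (The usual LOA carry-in from the lower part is dropped here.)"""
    mask = (1 << k) - 1
    lower = (x | y) & mask                 # cheap approximate lower part
    upper = (x & ~mask) + (y & ~mask)      # exact upper part
    return upper | lower

print(loa_add(7, 1, 3))   # -> 7 (exact sum is 8): small, bounded error
```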
With the popularity of smart devices and the widespread use of Wi-Fi-based indoor localization, edge computing is becoming the mainstream paradigm for processing massive sensing data to provide indoor localization services. However, the data used to train the localization model unintentionally contain sensitive information about users and devices, and releasing them without protection may cause serious privacy leakage. To solve this issue, we propose a lightweight differential privacy-preserving mechanism for the edge computing environment. We extend ε-differential privacy theory to a mature machine-learning localization technology to achieve privacy protection while training the localization model. Experimental results on multiple real-world datasets show that, compared with the original localization technology without privacy preservation, our proposed scheme achieves high indoor localization accuracy while providing a differential privacy guarantee. By regulating the value of ε, the data quality loss of our method can be kept within 8.9%, and the time consumption is almost negligible. Our scheme can therefore be applied efficiently in edge networks and provides guidance on indoor localization privacy protection in edge computing.
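The generic mechanism behind ε-differential privacy of this kind is Laplace noise calibrated to sensitivity/ε; a minimal sketch follows (the fingerprint values are invented, and the paper's training pipeline applies this idea in its own way).

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Laplace mechanism for epsilon-differential privacy: add noise drawn
    from Lap(sensitivity / epsilon). Smaller epsilon gives stronger privacy
    but noisier data, which is the accuracy/privacy trade-off epsilon tunes."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

# e.g., perturb a Wi-Fi RSSI fingerprint (dBm) before it is used for training
fingerprint = np.array([-48.0, -67.0, -71.0, -55.0])
private = laplace_mechanism(fingerprint, sensitivity=1.0, epsilon=0.5)
```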
The Internet of Things (IoT) has been an evolving research area for the last two decades. The integration of the IoT with social networking concepts has produced an interdisciplinary research area called the Social Internet of Things (SIoT). The SIoT is dominant over the traditional IoT because of its structure, implementation, and operational manageability. In the SIoT, devices interact with each other independently to establish social relationships for collective goals. Establishing trustworthy relationships among the devices significantly improves interaction in the SIoT and mitigates risk. The problem is to choose the trustworthy node that is most suitable according to the requesting node's choice parameters. The node selected as best by one node is not necessarily the most suitable for other nodes, since trustworthiness is evaluated independently by each node. We employ theoretical characterizations of soft-set theory to deal with this kind of decision-making problem. In this paper, we develop a weighted trustworthiness ranking model based on soft-set theory to evaluate trustworthiness in the SIoT. The purpose of the proposed research is to reduce the risk of fraudulent transactions by identifying the most trusted nodes.
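The soft-set decision step can be sketched with standard weighted choice values (in the style of Maji et al.); the parameters and weights below are invented examples, and the point is that the "best" node depends on the requester's own weights.

```python
import numpy as np

# Rows: candidate nodes; columns: trust parameters (0/1 membership in the
# soft set, e.g. "responsive", "long relationship", ...; names assumed).
F = np.array([[1, 0, 1, 1],     # node A
              [1, 1, 0, 1],     # node B
              [0, 1, 1, 0]])    # node C
w = np.array([0.4, 0.3, 0.2, 0.1])  # this requester's per-parameter weights

choice = F @ w                   # weighted choice value per candidate
best = int(np.argmax(choice))    # most trustworthy node FOR THIS requester
print(choice, "-> pick node", "ABC"[best])
```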
Software Defined Networking (SDN) provides new functionalities to efficiently manage network traffic, which can be used to enhance networking capabilities and support today's growing communication demands. At the same time, it introduces new attack vectors that can be exploited by attackers. Hence, evaluating and selecting countermeasures to optimize the security of the SDN is of paramount importance; however, one should also take into account the trade-off between the security and the performance of the SDN. In this paper, we present a security optimization approach for the SDN that accounts for this trade-off. We evaluate the security of the SDN using graphical security models and metrics, and use queuing models to measure its performance. Further, we use a genetic algorithm, namely NSGA-II, to select countermeasures that optimize security subject to performance constraints. Our experimental analysis shows that the proposed approach can efficiently compute countermeasures that optimize the security of the SDN while satisfying the performance constraints.
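At the core of NSGA-II is non-dominated sorting over the two objectives; a minimal Pareto-front extraction for this security/performance setting is sketched below (the example scores are invented, and the full algorithm adds crowding distance and genetic operators).

```python
def pareto_front(solutions):
    """Non-dominated set for two objectives: maximize security (first
    component), minimize queuing delay (second component)."""
    def dominates(p, q):
        return (p[0] >= q[0] and p[1] <= q[1]) and (p[0] > q[0] or p[1] < q[1])
    return [p for p in solutions
            if not any(dominates(q, p) for q in solutions if q != p)]

# countermeasure portfolios scored as (security gain, M/M/1 latency in ms)
print(pareto_front([(0.9, 12.0), (0.7, 8.0), (0.6, 9.5), (0.95, 20.0)]))
# -> (0.6, 9.5) is dropped: (0.7, 8.0) is both more secure and faster
```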
An improved algorithm for the Analytic Hierarchy Process (AHP) is proposed in this paper, realized by constructing an improved judgment matrix. Specifically, rough set theory is used to calculate the weights of the network metric data; the nine-point scale of the improved AHP algorithm is then structured, and finally an improved AHP judgment matrix is constructed. By performing the AHP operation on the improved judgment matrix, the improved weights of the network metric data are obtained. If rough set theory alone were applied to process the network index data, objective factors would dominate the whole process; by using the improved AHP algorithm to integrate expert scores into the measurement process, a combination of subjective and objective factors can be realized. Based on this theory, a new network attack metrics system is proposed, which uses a metric structure of "attack type - attack attribute - attack atomic operation - attack metrics", in which the metric process for attack attributes adopts AHP. The metrics of the system are comprehensive, and their judgment of frequent attacks is universal. The approach was verified through an experiment with the common Smurf attack, and the experimental results show the effectiveness and applicability of the proposed measurement system.
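For background, the standard AHP step (which the paper augments with rough-set-derived weights and expert scores) extracts a priority vector from the judgment matrix and checks its consistency; a minimal sketch using the principal-eigenvector method:

```python
import numpy as np

def ahp_weights(J):
    """Priority vector and consistency ratio for an AHP judgment matrix J
    (classic Saaty AHP; how the paper builds J from rough-set weights plus
    expert scores is not reproduced here)."""
    J = np.asarray(J, dtype=float)
    vals, vecs = np.linalg.eig(J)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                  # normalized priority vector
    n = J.shape[0]
    ci = (vals.real[k] - n) / (n - 1)             # consistency index
    ri = {1: 0, 2: 0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return w, (ci / ri if ri else 0.0)            # CR < 0.1 => consistent

w, cr = ahp_weights([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]])
print(w, cr)
```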