Biblio
The principal mission of Multi-Source Multicast (MSM) is to disseminate all messages from all sources in a network to all destinations. MSM is utilized in numerous applications, and in many of them securing the disseminated messages is critical. A common security model considers a network with an eavesdropper able to observe a subset of the network links, and seeks a code which keeps the eavesdropper ignorant of all the messages. While this is solved when all messages are located at a single source, Secure MSM (SMSM) is an open problem, and the required rates are hard to characterize in general. In this paper, we consider individual security, which guarantees that the eavesdropper has zero mutual information with each message individually. We completely characterize the rate region for SMSM under individual security, and show that this security level is achievable at the full capacity of the network, that is, the cut-set bound is the matching converse, as in non-secure MSM. Moreover, we show that the required field size is the same as in non-secure MSM and does not have to grow due to the security constraint.
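In standard information-theoretic notation (the abstract itself does not spell this out), the individual-security guarantee relaxes joint secrecy as follows, with Z the eavesdropper's observation and M_1, ..., M_k the source messages:

```latex
% Joint (perfect) secrecy: nothing leaks about the whole message tuple.
I(Z; M_1, M_2, \ldots, M_k) = 0
% Individual security: nothing leaks about any single message, although the
% eavesdropper may still learn something about mixtures of messages.
I(Z; M_i) = 0 \qquad \text{for every } i \in \{1, \ldots, k\}
```

It is this weaker, per-message guarantee that allows the scheme to run at the full cut-set capacity of the network.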
Distributed storage systems and caching systems are becoming widespread, and this motivates the increasing interest in assessing their achievable performance in terms of reliability for legitimate users and security against malicious users. While the assessment of reliability benefits from the availability of well-established metrics and tools, assessing security is more challenging. The classical cryptographic approach aims at estimating the computational effort for an attacker to break the system, and ensuring that it is far above any feasible amount. This has the limitation of depending on attack algorithms and advances in computing power. The information-theoretic approach instead exploits capacity measures to achieve unconditional security against attackers, but often does not provide practical recipes to reach such a condition. We propose a mixed cryptographic/information-theoretic approach with a twofold goal: estimating the levels of information-theoretic security and defining a practical scheme able to achieve them. In order to find optimal choices of the parameters of the proposed scheme, we exploit an effective probabilistic model checker, which allows us to overcome several limitations of more conventional methods.
As technology develops, security risks increase. This has affected most organizations, irrespective of size, as they depend on increasingly pervasive technology to perform their daily tasks. This dependency on technology has introduced diverse security vulnerabilities in organizations, which require reliable preparedness for the probable forensic investigation of unauthorized incidents. Keystroke dynamics is one of the cost-effective methods for collecting potential digital evidence. This paper presents a keystroke pattern analysis technique suitable for the collection of complementary potential digital evidence for forensic readiness. The proposed technique relies on the extraction of a reliable behavioral signature from user activity. Experimental validation demonstrates the effectiveness of the approach using a multi-scheme classifier. The overall goal is to obtain forensically sound and admissible keystroke evidence that can be presented during a forensic investigation to minimize its cost and duration.
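The abstract does not specify which behavioral features are extracted; a common starting point in keystroke dynamics, shown here as an illustrative sketch only, is dwell time (how long a key is held) and flight time (the gap between consecutive keys):

```python
# Illustrative sketch only: the paper does not publish its feature set. Dwell
# and flight times are the classic keystroke-dynamics features from which a
# behavioral signature could be built.

def keystroke_features(events):
    """events: list of (key, press_time, release_time), ordered by press."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# The word "hi", with a short pause between the two keystrokes:
events = [("h", 0.000, 0.095), ("i", 0.210, 0.290)]
dwell, flight = keystroke_features(events)
print(dwell)   # ≈ [0.095, 0.08]: how long each key was held
print(flight)  # ≈ [0.115]: gap between releasing 'h' and pressing 'i'
```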
The vision of cyber-physical systems (CPSs) considers the Internet as the future communication network for such systems. A challenge in this regard is providing high communication reliability, especially for CPS applications in critical infrastructures. Examples include smart grid applications with reliability requirements between 99% and 99.9999% [2]. Even though the Internet is a cost-effective solution for such applications, the reliability of its end-to-end (e2e) paths is inadequate (often less than 99%). In this paper, we propose a Reliable Multipath Communication Approach for Internet-based CPSs (RC4CPS). RC4CPS is an e2e approach that utilizes the inherent redundancy of the Internet and the multipath (MP) transport protocol concept to improve reliability, measured in terms of availability. It provides online monitoring and MP selection in order to fulfill the application-specific reliability requirement. In addition, our MP selection considers e2e path dependency and unavailability prediction to maximize the reliability gains of MP communication. Our results show that RC4CPS dynamic MP selection satisfies the reliability requirement while selecting e2e paths with low dependency and unavailability probability.
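The reliability gain RC4CPS exploits can be illustrated with standard availability arithmetic (a sketch of the general principle, not the paper's exact model):

```python
# Standard availability arithmetic (not RC4CPS's exact model): with k
# independent end-to-end paths of availability a_i, the multipath connection
# is up whenever at least one path is up.

def mp_availability(avails):
    """Availability of a multipath connection over independent paths."""
    p_all_down = 1.0
    for a in avails:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

# Two Internet paths at 98% each: 1 - 0.02 * 0.02 = 99.96%, meeting e.g. a
# 99.9% requirement that neither path meets alone.
print(mp_availability([0.98, 0.98]))   # ≈ 0.9996

# Dependency erodes the gain: if both paths share a link that is down 1% of
# the time, availability is capped at 99% regardless of the path count,
# which is why RC4CPS selects paths with low dependency.
```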
The terms smart grid, IntelliGrid, and secure smart grid are used today to describe technologies that automatically and rapidly isolate faults, restore power, monitor demand, and maintain and recover stability for more reliable generation, transmission, and distribution of electric power. In general, the terms describe the use of microprocessor-based intelligent electronic devices (IEDs) communicating with one another to accomplish tasks previously done by humans or left undone. These IEDs observe the state of the power system, make informed decisions, and then take action to preserve the stability and performance of the grid. Technology services in the home will allow end users to manage their consumption based on their own preferences. In order to manage their consumption, or the demand placed on the grid, consumers need information and an adaptable power distribution system. The smart grid is a collection of information sources and the automatic control system that manages the distribution of power, understands changes in demand, and reacts to them by managing demand response. Different billing strategies for variable time and type of use, as well as conservation and the use or sale of distributed resources, will become part of smart solutions. The traditional electrical power grid is currently evolving into the smart grid, which integrates the traditional electrical power grid with information and communication technologies (ICT). Such integration empowers electrical utility providers and consumers, and improves the efficiency and availability of the power system while continuously monitoring, controlling, and managing the demands of customers. A smart grid is a very large, complex network composed of millions of devices and entities connected with each other. Such a massive network comes with many security concerns and vulnerabilities. In this paper, we survey the latest work on smart grid security. We highlight the complexity of the smart grid network and discuss the vulnerabilities specific to this large heterogeneous network. We then discuss the challenges of securing the smart grid network and how the current security solutions applied to IT networks are not sufficient to secure smart grid networks. We conclude by reviewing the current and needed security solutions for the smart grid.
There has been a growing spate of cyber attacks targeted at corporate enterprises and systems across the globe. The scope of these attacks spans from small scale (grid and control system manipulation, domestic meter hacking, etc.) to large-scale distributed denial of service attacks (DDoSA) in enterprise networks. The effect of hacking on control systems through distributed control systems (DCS) using communication protocols on vulnerable home area networks (HANs) and neighborhood area networks (NANs) is alarming. To meet current security requirements, a new security network called the Smart Grid Convoluted Network (SGCN) is proposed. With SGCN, the basic activities of data processing, monitoring, and query requests are implemented outside the grid using Fog computing layer-3 devices (gatekeepers). A cyber monitor agent that leverages a reliable end-to-end communication network to secure the system components on the grid is employed. Cyber attacks that affect the computational requirements of SG applications are mitigated using a Fourier predictive cyber monitor (FPCM). The network uses flexible resources with loopback services shared across the network. Serial parallelism and efficient bandwidth provisioning are used by the locally supported Fog nodes within the SG cloud space. For service differentiation, SGCN employs secure communication between its various micro-grids as well as its metering front-ends. With the simulated traffic payload extraction trend (STPET), SGCN promises a hard time for hackers and malware. While the work guarantees security for SGs, reliability remains an open issue due to the complexity of the SG architecture. In conclusion, the future of cyber security in SGs must employ the concepts of the Internet of Everything (IoE), malware predictive analytics, and Fog layers on existing SG prototypes for optimal security benefits.
The implementation of RFID technology in computer systems gives access to quality information on location and object tracking in real time, thereby improving workflow and leading to safer, faster, and better business decisions. This paper discusses quantitative indicators of the quality of a computer system supported by RFID technology, applied to the monitoring of objects (pallets, packages, and people) marked with RFID tags. The results of the analysis of these quantitative quality indicators are presented in tables.
As a consequence of the recent development of situational awareness technologies for smart grids, the gathering and analysis of data from multiple sources offer a significant opportunity for enhanced fault diagnosis. In order to achieve improved accuracy for both fault detection and classification, a novel combined data analytics technique is presented and demonstrated in this paper. The proposed technique is based on a segmented approach to Bayesian modelling that provides probabilistic graphical representations of both electrical power and data communication networks. In this manner, the reliability of both the data communication and electrical power networks is considered in order to improve overall power system transmission line fault diagnosis.
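A minimal toy model (mine, not the paper's) of why folding communication reliability into the Bayesian diagnosis matters: the absence of an alarm report can mean either "no fault" or "report lost in the network", and the posterior differs accordingly:

```python
# Toy illustration (not the paper's model): a tiny Bayesian network in which
# the evidence "no alarm report arrived" is explained either by no fault or
# by a dropped message, so the comms link reliability enters the diagnosis.

P_FAULT = 0.01               # prior probability of a line fault
P_ALARM_GIVEN_FAULT = 0.95   # sensor alarms when a fault is present
P_ALARM_GIVEN_OK = 0.02      # false-alarm rate
P_LINK_UP = 0.90             # probability the alarm report is delivered

def posterior_fault_given_no_report():
    # A report arrives only if the sensor alarms AND the link delivers it.
    p_none_f = (1 - P_ALARM_GIVEN_FAULT) + P_ALARM_GIVEN_FAULT * (1 - P_LINK_UP)
    p_none_ok = (1 - P_ALARM_GIVEN_OK) + P_ALARM_GIVEN_OK * (1 - P_LINK_UP)
    num = P_FAULT * p_none_f
    return num / (num + (1 - P_FAULT) * p_none_ok)

print(posterior_fault_given_no_report())  # ≈ 0.0015, roughly 3x the ≈ 0.0005
                                          # obtained if the link were wrongly
                                          # assumed perfectly reliable
```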
With the accelerated iteration of technological innovation, blockchain has rapidly become one of the hottest Internet technologies in recent years. As a decentralized and distributed data management solution, blockchain has redefined trust through embedded cryptography and consensus mechanisms, thus providing security, anonymity, and data integrity without the need for any third party. But there still exist technical challenges and limitations in blockchain. This paper conducts a systematic study of current blockchain applications in cybersecurity. To address the security issues, the paper analyzes the advantages that blockchain brings to cybersecurity and summarizes current research and applications of blockchain in cybersecurity-related areas. Through in-depth analysis and summary of existing work, the paper identifies four major security issues of blockchain and performs a more granular analysis of each. Adopting an attribute-based encryption method, the paper also puts forward an enhanced access control strategy.
Situational awareness during sophisticated cyber attacks on the power grid is critical for the system operator to perform suitable attack response and recovery functions to ensure grid reliability. The overall theme of this paper is to identify existing practical issues and challenges that utilities face while monitoring substations, and to suggest potential approaches to enhance the situational awareness for the grid operators. In this paper, we provide a broad discussion about the various gaps that exist in the utility industry today in monitoring substations, and how those gaps could be addressed by identifying the various data sources and monitoring tools to improve situational awareness. The paper also briefly describes the advantages of contextualizing and correlating substation monitoring alerts using expert systems at the control center to obtain a holistic systems-level view of potentially malicious cyber activity at the substations before it impacts grid operation.
Underwater acoustic networking is an enabling technology for a range of applications such as mine countermeasures, intelligence, and reconnaissance. Common to these applications is a need for robust information distribution while minimizing energy consumption. In terrestrial wireless networks, topology information is often used to enhance the efficiency of routing, in terms of higher capacity and less overhead. In this paper we assess the effects of topology information on routing in underwater acoustic networks. More specifically, the interplay between long propagation delays, contention-based channel access, and the dissemination of varying degrees of topology information is investigated. The study is based on network simulations of a number of protocols that make use of varying amounts of topology information. The results indicate that, in the considered scenario, relying on local topology information to reduce retransmissions may have adverse effects on reliability. The difficult channel conditions and the contention-based channel access methods create a need for an increased amount of diversity, i.e., more retransmissions. In the scenario considered, an opportunistic flooding approach performs better, in terms of both robustness and energy consumption.
This paper describes an experiment carried out to demonstrate the robustness and trustworthiness of an orchestrated two-layer network test-bed (PROnet). A Robotic Operating System Industrial (ROS-I) distributed application makes use of end-to-end flow services offered by PROnet. The PROnet Orchestrator is used to provision reliable end-to-end Ethernet flows to support the data exchange required by the ROS-I application. For maximum reliability, the Orchestrator provisions network resource redundancy at both layers, i.e., Ethernet and optical. Experimental results show that the robotic application is not interrupted by a fiber outage.
The collaborative recommendation mechanism helps a subject in an open network efficiently find enough referrers who have directly interacted with an object and obtain their trust data. Uncertainty analysis of the collected trust data selects the reliable trust data of trustworthy referrers, and the statistical trust value of any object is then calculated at a given reliability level. The subject can then judge the object's trustworthiness and decide whether to interact, based on a given threshold. The feasibility of this method is verified by three experiments designed to validate the model's ability to resist malicious services and exaggeration and slander attacks. The interaction success rate is significantly improved with the new model, and malicious entities are distinguished more effectively than with the comparative model.
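A hedged sketch of the general recipe described above, filtering out referrers whose trust data is too uncertain, aggregating the rest, and comparing against a threshold; the cutoff and threshold values are illustrative, not the paper's:

```python
# Sketch of the filter-then-aggregate recipe; the paper's exact uncertainty
# analysis is not reproduced here, and all numeric parameters are invented.
from statistics import mean, pstdev

def aggregate_trust(referrer_ratings, max_spread=0.2, threshold=0.7):
    """referrer_ratings: one list of ratings in [0, 1] per referrer."""
    reliable = [r for r in referrer_ratings if pstdev(r) <= max_spread]
    if not reliable:
        return None, False                  # not enough consistent evidence
    trust = mean(mean(r) for r in reliable)
    return trust, trust >= threshold

ratings = [
    [0.8, 0.9, 0.85],   # consistent, positive referrer
    [0.1, 0.95, 0.2],   # erratic -> filtered out as unreliable
    [0.75, 0.8, 0.7],   # consistent, positive referrer
]
print(aggregate_trust(ratings))  # ≈ (0.8, True): interact with the object
```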
This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing on a simulated hardware platform with results from testing on the real platform. This approach addresses a serious practical limitation of a technique known as statistical testing. This limitation will be called the test time expansion problem (or simply the 'time problem'): the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for protection systems, or to demonstrate any (desirable) reliability levels for continuous operation ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
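A standard back-of-the-envelope calculation (not taken from the paper) shows why the time problem bites. If testing for T_test hours reveals zero failures, the classical zero-failure argument gives confidence 1 - α that the failure rate is at most λ only when:

```latex
e^{-\lambda T_{\text{test}}} \le \alpha
\quad\Longrightarrow\quad
T_{\text{test}} \ge \frac{\ln(1/\alpha)}{\lambda}
```

For example, demonstrating λ ≤ 10⁻⁴ per hour at 95% confidence requires T_test ≥ ln(20)/10⁻⁴ ≈ 3 × 10⁴ hours, orders of magnitude more than a typical mission period T, hence the appeal of cheap additional testing on a simulated platform.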
Verifying that hardware design implementations adhere to specifications is a time intensive and sometimes intractable problem due to the massive size of the system's state space. Formal methods techniques can be used to prove certain tractable specification properties; however, they are expensive, and often require subject matter experts to develop and solve. Nonetheless, hardware verification is a critical process to ensure security and safety properties are met, and encapsulates problems associated with trust and reliability. For complex designs where coverage of the entire state space is unattainable, prioritizing regions most vulnerable to security or reliability threats would allow efficient allocation of valuable verification resources. Stackelberg security games model interactions between a defender, whose goal is to assign resources to protect a set of targets, and an attacker, who aims to inflict maximum damage on the targets after first observing the defender's strategy. In equilibrium, the defender has an optimal security deployment strategy, given the attacker's best response. We apply this Stackelberg security framework to synthesized hardware implementations using the design's network structure and logic to inform defender valuations and verification costs. The defender's strategy in equilibrium is thus interpreted as a prioritization of the allocation of verification resources in the presence of an adversary. We demonstrate this technique on several open-source synthesized hardware designs.
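As a concrete (and deliberately simplified) instance of the framework: the sketch below solves a pure-strategy Stackelberg security game by brute force, whereas the paper, like most of the Stackelberg security literature, works with mixed defender strategies. Region names, values, costs, and the budget are invented for illustration:

```python
# Minimal pure-strategy Stackelberg security game. Targets are circuit
# regions; value ~ damage if an unverified region is attacked; cost ~
# verification effort. The defender commits first, the attacker observes
# the commitment and best-responds. All numbers are hypothetical.
from itertools import combinations

targets = {            # region: (damage_if_unverified, verification_cost)
    "alu":        (9.0, 3.0),
    "key_sched":  (8.0, 2.0),
    "debug_port": (6.0, 1.0),
    "io_buffer":  (3.0, 1.0),
}
BUDGET = 3.0

def best_defense():
    best = None
    names = list(targets)
    for k in range(len(names) + 1):
        for cover in combinations(names, k):
            if sum(targets[t][1] for t in cover) > BUDGET:
                continue  # allocation exceeds the verification budget
            # Attacker best response: hit the most damaging unverified region
            # (loss 0 if every region is covered).
            exposed = [t for t in names if t not in cover]
            loss = max((targets[t][0] for t in exposed), default=0.0)
            if best is None or loss < best[0]:
                best = (loss, cover)
    return best

loss, cover = best_defense()
print(cover, loss)  # ('alu',) 8.0 -> verify the highest-value region first;
                    # the residual loss comes from the best remaining target
```

The committed defense in equilibrium reads directly as a verification priority: scarce verification effort goes to the regions whose compromise the attacker would exploit most.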
Security evaluation of diverse SDN frameworks is of significant importance for designing resilient systems and dealing with attacks. Focusing on SDN scenarios, a game-theoretic model is proposed to analyze the security performance of existing SDN architectures. The model can describe specific traits of different structures, represent several types of information held by the players (attacker and defender), and quantitatively calculate a system's reliability. Simulation results illustrate that dynamic SDN structures offer a distinct security improvement over static ones. Moreover, effective dynamic scheduling mechanisms adopted in dynamic systems can further enhance their security.
We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal encoding and decoding complexities, from the EVENODD codes and B codes, which are array codes widely used in the RAID architecture. These schemes optimally tolerate two node failures and two eavesdropping nodes. For more general parameters, we construct efficient systematic secure RAID schemes from Reed-Solomon codes. Our results suggest that building “keyless”, information-theoretic security into the RAID architecture is practical.
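To make "keyless, information-theoretic security" concrete, here is a toy Reed-Solomon-flavored scheme in the spirit of the paper's general-parameter constructions (it is not the paper's XOR-based EVENODD/B-code schemes): three nodes, any two of which recover the data, while any single eavesdropped node learns nothing:

```python
# Toy "keyless" secure storage: 3 nodes, any 2 recover the data (so 1 node
# failure is tolerated), and any single eavesdropped node reveals nothing.
# "Keyless": the randomness lives inside the shares; no separate key is kept.
import secrets

P = 257  # small prime field GF(257)

def encode(m, xs=(1, 2, 3)):
    k = secrets.randbelow(P)                 # fresh uniform pad per symbol
    return [(k + m * x) % P for x in xs]     # node i stores f(x_i), f(x) = k + m*x

def decode(share_i, x_i, share_j, x_j):
    inv = pow((x_i - x_j) % P, P - 2, P)     # field inverse via Fermat
    return ((share_i - share_j) * inv) % P   # slope of the line = m

shares = encode(42)
print(decode(shares[0], 1, shares[2], 3))    # 42, recovered from nodes 1 and 3
# Any single share is uniform on GF(257) whatever m is (k is uniform), so
# observing one node gives zero information about the stored symbol.
```

The design choice mirrored here is the trade the paper studies at scale: spending stored randomness to buy eavesdropping resistance while keeping erasure tolerance, with systematic XOR constructions making the same idea cheap enough for RAID.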
Security challenges are the most important obstacles for the advancement of IT-based on-demand services and cloud computing as an emerging technology. Lack of coincidence in identity management models based on defined policies and various security levels in different cloud servers is one of the most challenging issues in clouds. In this paper, a policy- based user authentication model has been presented to provide a reliable and scalable identity management and to map cloud users' access requests with defined polices of cloud servers. In the proposed schema several components are provided to define access policies by cloud servers, to apply policies based on a structural and reliable ontology, to manage user identities and to semantically map access requests by cloud users with defined polices. Finally, the reliability and efficiency of this policy-based authentication schema have been evaluated by scientific performance, security and competitive analysis. Overall, the results show that this model has met defined demands of the research to enhance the reliability and efficiency of identity management in cloud computing environments.
We prove polarization theorems for arbitrary classical-quantum (cq) channels. The input alphabet is endowed with an arbitrary Abelian group operation and an Arikan-style transformation is applied using this operation. It is shown that as the number of polarization steps becomes large, the synthetic cq-channels polarize to deterministic homomorphism channels that project their input to a quotient group of the input alphabet. This result is used to construct polar codes for arbitrary cq-channels and arbitrary classical-quantum multiple access channels (cq-MAC). The encoder can be implemented in O(N log N) operations, where N is the blocklength of the code. A quantum successive cancellation decoder for the constructed codes is proposed. It is shown that the probability of error of this decoder decays faster than 2^{-N^β} for any β < 1/2.
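For orientation, the classical binary special case of the Arikan-style transform looks as follows; the paper replaces the XOR with an arbitrary Abelian group operation and feeds the outputs into cq-channels. The O(N log N) encoder cost quoted in the abstract is visible as a depth-log₂(N) recursion doing O(N) work per level:

```python
# Classical binary specialization of the Arikan polarization transform
# (the cq construction swaps the XOR for a general Abelian group operation).

def polar_transform(u):
    """Apply the polarization butterfly to a 0/1 list of length N = 2^n."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    combined = [u[2 * i] ^ u[2 * i + 1] for i in range(half)]  # u1 + u2
    passed = [u[2 * i + 1] for i in range(half)]               # u2
    return polar_transform(combined) + polar_transform(passed)

print(polar_transform([0, 0, 0, 1]))  # [1, 1, 1, 1]
# Encoding a polar code: place data on the "good" synthetic channels, freeze
# the remaining inputs to 0, then apply the transform to get the codeword.
```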
In Wyner's wiretap II model of communication, Alice and Bob are connected by a channel that can be eavesdropped by an adversary with unlimited computation who can select a fraction of the communication to view, and the goal is to provide perfect information-theoretic security. Information-theoretic security is increasingly important because of the threat of quantum computers, which can effectively break the algorithms and protocols used in today's public key infrastructure. We consider interactive protocols for the wiretap II channel with an active adversary who can eavesdrop and add adversarial noise to the eavesdropped part of the codeword. These channels capture wireless settings where malicious eavesdroppers at reception distance of the transmitter can eavesdrop the communication and introduce a jamming signal into the channel. We derive a new upper bound R ≤ 1 - ρ on the rate of interactive protocols over the two-way wiretap II channel with active adversaries, and construct a perfectly secure protocol family with achievable rate 1 - 2ρ + ρ². This is strictly higher than the rate of the best one-round protocol, which is 1 - 2ρ, hence showing that interaction improves the rate. We also prove that even with interaction, reliable communication is possible only if ρ < 1/2. An interesting aspect of this work is that our bounds also hold in a network setting where two nodes are connected by n paths, a ρ fraction of which is corrupted by the adversary. We discuss our results, relate them to other works, and propose directions for future work.
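The rates quoted in the abstract line up neatly; in particular the achievable interactive rate factors, which makes the gain from interaction easy to see:

```latex
1 - 2\rho + \rho^{2} = (1-\rho)^{2},
\qquad
(1-\rho)^{2} - (1-2\rho) = \rho^{2} > 0 \quad \text{for } \rho > 0 .
```

For ρ = 0.3, for instance, the best one-round rate is 0.40, the interactive protocol achieves (0.7)² = 0.49, and the upper bound is 0.70, so interaction strictly helps while a gap to the converse remains.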
Data assurance and resilience are crucial security issues in cloud-based IoT applications. With the widespread adoption of drones in IoT scenarios such as warfare, agriculture, and delivery, effective solutions to protect data integrity and the communications between drones and the control system are in urgent demand, to prevent potential vulnerabilities that may cause heavy losses. To secure drone communication during data collection and transmission, as well as preserve the integrity of collected data, we propose a distributed solution utilizing blockchain technology along with a traditional cloud server. Instead of registering the drone itself on the blockchain, we anchor the hashed data records collected from drones to the blockchain network and generate a blockchain receipt for each data record stored in the cloud, reducing the burden on moving drones, with their limited battery and processing capability, while gaining an enhanced security guarantee for the data. This paper presents the idea of securing drone data collection and communication in combination with a public blockchain for provisioning data integrity and cloud auditing. The evaluation shows that our system is a reliable and distributed system for drone data assurance and resilience, with acceptable overhead and scalability for a large number of drones.
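The anchoring pattern the abstract describes can be sketched as follows; the `submit`/`fetch` callables stand in for a real blockchain client and are hypothetical, as is the receipt format:

```python
# Sketch of hash anchoring: only the record's hash goes on-chain, the full
# record stays in the cloud, and auditing checks that cloud and chain agree.
import hashlib
import json
import time

def anchor_record(record, submit):
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    tx_id = submit(digest)                  # anchor just the hash on-chain
    return {"sha256": digest, "tx_id": tx_id, "anchored_at": time.time()}

def verify(record, receipt, fetch):
    payload = json.dumps(record, sort_keys=True).encode()
    return (hashlib.sha256(payload).hexdigest() == receipt["sha256"]
            and fetch(receipt["tx_id"]) == receipt["sha256"])

# Demo with an in-memory stand-in for the chain:
chain = []
def submit(digest):
    chain.append(digest)
    return len(chain) - 1                   # pretend transaction id

record = {"drone": "d-17", "gps": [32.1, 34.8], "reading": 7.3}
receipt = anchor_record(record, submit)
print(verify(record, receipt, lambda tx: chain[tx]))   # True
record["reading"] = 9.9                                # tamper with cloud copy
print(verify(record, receipt, lambda tx: chain[tx]))   # False: audit fails
```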
The past ten years have seen increasing calls to make security research more “scientific”. On the surface, most agree that this is desirable, given universal recognition of “science” as a positive force. However, we find that there is little clarity on what “scientific” means in the context of computer security research, or consensus on what a “Science of Security” should look like. We selectively review work in the history and philosophy of science and more recent work under the label “Science of Security”. We explore what has been done under the theme of relating science and security, put this in context with historical science, and offer observations and insights we hope may motivate further exploration and guidance. Among our findings are that practices on which the rest of science has reached consensus appear little used or recognized in security, and that a pattern of methodological errors continues unaddressed.