Bibliography
The power communication network is an important piece of power system infrastructure. It connects a large number of widely distributed business and communication terminals, and protecting their data is essential to the safe and stable operation of the whole power grid. The challenge is that these many nodes require a large number of keys and must not be left unable to exchange information securely for lack of keys. To solve this problem, this paper proposes a segmentation and combination technique based on quantum keys to extend a limited key supply. The basic idea is to derive a division scheme according to the prevailing conditions, divide a key into several different sub-keys, and then combine these key segments to generate new keys that are distributed to different terminals in the system. A sufficient supply of keys facilitates key updating and effectively strengthens the communication system's resistance to damage and intrusion. Analysis and calculation further verify that this method enables secure transmission of business data for a large number of terminals using a limited number of quantum keys.
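A minimal sketch of the split-and-combine idea under illustrative assumptions (the splitting, the hash-based combination, and all function names are hypothetical, not the paper's scheme): a distributed quantum key is divided into sub-key segments, and different segment combinations yield new keys for different terminal pairs.

```python
import hashlib
from itertools import combinations

def split_key(key: bytes, n_segments: int) -> list[bytes]:
    """Divide a key into n roughly equal sub-key segments."""
    seg_len = len(key) // n_segments
    return [key[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]

def combine_segments(segments: tuple[bytes, ...]) -> bytes:
    """Derive a new key from a combination of segments.
    Hashing the concatenation is an illustrative choice, not the paper's combination rule."""
    return hashlib.sha256(b"".join(segments)).digest()

# Example: one 32-byte quantum key expanded into C(4, 2) = 6 derived keys.
quantum_key = bytes(range(32))          # stand-in for a distributed quantum key
segments = split_key(quantum_key, 4)
derived_keys = [combine_segments(c) for c in combinations(segments, 2)]
print(len(derived_keys), "derived keys for different terminal pairs")
```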
The continuing decrease in the feature size of integrated circuits, together with the increasing complexity and cost of design and fabrication, has led to outsourcing the design and fabrication of integrated circuits to third parties across the globe, which in turn has introduced several security vulnerabilities. Adversaries in the supply chain can pirate integrated circuits, overproduce them, perform reverse engineering, and/or insert hardware Trojans. Developing countermeasures against such security threats is crucial. Accordingly, this paper first develops a learning-based trust verification framework to detect hardware Trojans. To tackle Trojan insertion, IP piracy, and overproduction, logic locking schemes, and in particular stripped-functionality logic locking, are discussed and their resiliency against state-of-the-art attacks is investigated.
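As a rough illustration of what a learning-based trust verification step might look like (the per-net features, data, and classifier choice below are invented for illustration and are not the paper's framework):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-net features: [switching activity, fan-out, controllability, observability].
# Trojan trigger nets are typically rare and show very low switching activity.
X_clean  = rng.normal(loc=[0.5, 4.0, 0.6, 0.6], scale=0.10, size=(500, 4))
X_trojan = rng.normal(loc=[0.05, 1.5, 0.1, 0.1], scale=0.05, size=(25, 4))
X = np.vstack([X_clean, X_trojan])
y = np.array([0] * len(X_clean) + [1] * len(X_trojan))   # 1 = Trojan-suspect net

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```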
The smart grid is a complex cyber-physical system (CPS) that poses challenges related to scale, integration, interoperability, processes, governance, and human elements. The US National Institute of Standards and Technology (NIST) and its government, university, and industry collaborators developed an approach, called the CPS Framework, for reasoning about CPS across multiple levels of concern and competency, including trustworthiness, privacy, reliability, and regulatory compliance. The approach uses ontology and reasoning techniques to achieve a greater understanding of the interdependencies among the elements of the CPS Framework model applied to use cases. This paper demonstrates that the approach extends naturally to automated and manual decision-making for smart grids: we apply it to smart grid use cases and illustrate how it can be used to analyze grid topologies and address concerns about the smart grid. Smart grid stakeholders whose decision making may be assisted by this approach include planners, designers, and operators.
Authorship attribution is the problem of studying an anonymous text and finding its author among a set of candidate authors. In this paper, we propose a method based on an N-gram model for the authorship attribution problem. Several measures are used to assign an anonymous text to an author. The different variants of the proposed method are implemented and validated on the PAN benchmarks. The numerical results are encouraging and demonstrate the benefit of the proposed idea.
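A compact sketch of character N-gram authorship attribution in the spirit of the abstract; the profile construction and the dissimilarity measure below are common illustrative choices, not necessarily the exact measures evaluated on the PAN benchmarks.

```python
from collections import Counter

def ngram_profile(text: str, n: int = 3, top_k: int = 500) -> dict[str, float]:
    """Relative frequencies of the top_k most common character n-grams."""
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.most_common(top_k)}

def dissimilarity(p: dict, q: dict) -> float:
    """Symmetric profile distance (one of several possible measures)."""
    grams = set(p) | set(q)
    return sum((2 * (p.get(g, 0) - q.get(g, 0)) / (p.get(g, 0) + q.get(g, 0) + 1e-12)) ** 2
               for g in grams)

def attribute(anonymous: str, candidates: dict[str, str]) -> str:
    """Return the candidate author whose profile is closest to the anonymous text."""
    anon = ngram_profile(anonymous)
    return min(candidates, key=lambda a: dissimilarity(anon, ngram_profile(candidates[a])))

# Toy usage; a real evaluation would use the PAN benchmark corpora.
authors = {"A": "the quick brown fox jumps over the lazy dog " * 20,
           "B": "to be or not to be that is the question " * 20}
print(attribute("brown fox jumps over the lazy dog", authors))
```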
A searchable symmetric encryption (SSE) scheme allows a data owner to perform search queries over encrypted documents using symmetric cryptography. SSE schemes are useful in cloud storage and data outsourcing. Most SSE schemes in the existing literature have been shown to leak a substantial amount of information that can lead to an inference attack. This paper presents a novel leakage-resilient searchable symmetric encryption with periodic updation (LRSSEPU) scheme that minimizes extra information leakage and prevents an untrusted cloud server from performing document mapping attacks, query recovery attacks, and other inference attacks. In particular, the size of the keyword vector is fixed, and the keywords are periodically permuted and updated to achieve minimum leakage. Furthermore, the proposed LRSSEPU scheme provides authentication of the query messages and prevents an adversary from performing a replay attack, a forged query attack, or a denial-of-service attack. We employ a combination of identity-based cryptography (IBC) and symmetric-key cryptography to reduce the computation cost and communication overhead. Our scheme is lightweight and easy to implement, with very little communication overhead.
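A toy sketch of a fixed-size keyword vector whose keyword-to-slot mapping is periodically permuted, as described above; the PRF, the permutation, and all names are illustrative placeholders rather than the LRSSEPU construction itself.

```python
import hmac, hashlib, random

KEYWORD_SLOTS = 1024  # fixed keyword-vector size, independent of the real vocabulary

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def slot_permutation(secret: bytes, epoch: int) -> list[int]:
    """Deterministic permutation of slots that changes every epoch (periodic update)."""
    perm = list(range(KEYWORD_SLOTS))
    random.Random(prf(secret, f"epoch-{epoch}".encode())).shuffle(perm)
    return perm

def keyword_slot(secret: bytes, epoch: int, word: str) -> int:
    perm = slot_permutation(secret, epoch)
    return perm[int.from_bytes(prf(secret, word.encode())[:4], "big") % KEYWORD_SLOTS]

def build_index(secret: bytes, epoch: int, docs: dict[str, list[str]]) -> dict[int, set[str]]:
    """Server-side index maps slot numbers (not keywords) to document identifiers."""
    index: dict[int, set[str]] = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(keyword_slot(secret, epoch, w), set()).add(doc_id)
    return index

secret = b"owner-secret"
index = build_index(secret, epoch=7, docs={"d1": ["grid", "key"], "d2": ["key"]})
print(index.get(keyword_slot(secret, 7, "key")))   # the server only ever sees slot numbers
```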
Federated learning (FL), recently proposed by Google, is a privacy-preserving method for integrating distributed data trainers. FL is extremely useful because it preserves privacy and offers lower latency, lower power consumption, and smarter models, but it can fail if multiple trainers abort training or send malformed messages to their partners. Such misbehavior is not auditable, and the parameter server may compute incorrectly due to a single point of failure. Furthermore, FL has no incentive mechanism to attract sufficient distributed training data and computation power. In this paper, we propose FLChain to build a decentralized, publicly auditable, and healthy FL ecosystem with trust and incentives. FLChain replaces the traditional FL parameter server, whose computation results must instead reach consensus on-chain. Our work is non-trivial, since providing sufficient incentive and deterrence to distributed trainers is both vital and difficult. We achieve model commercialization by providing a healthy marketplace for collaboratively trained models. Honest trainers can gain a fairly partitioned profit from a well-trained model according to their contribution, while malicious trainers can be detected in a timely manner and heavily punished. To reduce the time cost of misbehavior detection and model queries, we design DDCBF to accelerate queries over blockchain-documented information. Finally, we implement a prototype of our work and measure the cost of various operations.
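The abstract does not detail DDCBF's construction; as a generic, hedged illustration of how a counting Bloom filter can pre-screen queries before an expensive lookup of blockchain-documented records (all names and keys below are hypothetical):

```python
import hashlib

class CountingBloomFilter:
    """Toy counting Bloom filter used to pre-screen queries before touching the chain."""
    def __init__(self, size: int = 4096, hashes: int = 4):
        self.size, self.hashes = size, hashes
        self.counters = [0] * size

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.counters[pos] += 1

    def might_contain(self, item: str) -> bool:
        return all(self.counters[pos] > 0 for pos in self._positions(item))

# Only queries that pass the filter trigger an (expensive) on-chain lookup.
bf = CountingBloomFilter()
bf.add("model:resnet-v1:update:42")
print(bf.might_contain("model:resnet-v1:update:42"), bf.might_contain("model:unknown"))
```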
The core operation of a Software-Defined Network (SDN) depends on the centralized controller, which implements the control plane. With the help of this controller, security threats such as Distributed Denial of Service (DDoS) attacks can be identified easily. A DDoS attack is usually instigated against servers by sending a huge amount of unwanted traffic that exhausts their resources, denying service to genuine users. Earlier research mitigated DDoS attacks at the switch and host levels. Mitigation at the switch level involves identifying the switch that sends a large amount of unwanted traffic into the network and blocking it. This solution is not practical, however, as it also blocks the genuine hosts connected to that switch. Mitigation at the host level was introduced later, wherein compromised hosts are identified and blocked, allowing genuine hosts to continue sending traffic into the network. Though this solution is feasible, it also blocks traffic from the genuine applications on a compromised host. In this paper, we propose a new way to identify and mitigate DDoS attacks at the application level, so that only the application generating the DDoS traffic is blocked and other genuine applications can continue sending traffic into the network normally.
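A schematic sketch of application-level mitigation under simplifying assumptions: flows are keyed by source IP and source port so that only the offending application flow is blocked. The controller hook `on_packet_in`, the threshold, and `install_drop_rule` are hypothetical placeholders, not the paper's mechanism.

```python
import time
from collections import defaultdict

RATE_THRESHOLD = 1000  # packets per second per application flow (illustrative value)

packet_counts = defaultdict(int)   # keyed by (src_ip, src_port): one entry per application flow
window_start = time.time()

def install_drop_rule(src_ip: str, src_port: int) -> None:
    """Placeholder for a controller call (e.g. an OpenFlow flow-mod) that drops only this
    application flow; the host's other applications keep sending traffic normally."""
    print(f"blocking application flow {src_ip}:{src_port}")

def on_packet_in(src_ip: str, src_port: int) -> None:
    """Hypothetical handler invoked by the controller for each packet-in event."""
    global window_start
    if time.time() - window_start > 1.0:       # reset the 1-second measurement window
        packet_counts.clear()
        window_start = time.time()
    packet_counts[(src_ip, src_port)] += 1
    if packet_counts[(src_ip, src_port)] > RATE_THRESHOLD:
        install_drop_rule(src_ip, src_port)

# Simulated attack: one application on 10.0.0.5 floods; another app on the same host is unaffected.
for _ in range(1001):
    on_packet_in("10.0.0.5", 40000)
on_packet_in("10.0.0.5", 40001)
```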
In this paper we present a method based on linear programming that facilitates reliable safety verification of hybrid dynamical systems over the infinite time horizon, subject to perturbation inputs. The verification algorithm applies the probably approximately correct (PAC) learning framework and can consequently be regarded as statistically formal verification, in the sense that it provides formal safety guarantees expressed in terms of error probabilities and confidences. Safety in this framework is verified via so-called PAC barrier certificates, which can be computed by solving a linear program. Following the scenario approach, the linear program is constructed from a family of independent and identically distributed state samples. In this way we can verify hybrid dynamical systems that existing methods are not capable of dealing with. Preliminary experiments demonstrate the performance of our approach.
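For flavor only, the sketch below builds a scenario-style linear program from i.i.d. state samples for a linear barrier template on an invented discrete-time system; it omits the hybrid dynamics, perturbations, and the PAC sample-size and confidence bookkeeping that the paper provides.

```python
import numpy as np
from scipy.optimize import linprog

# Invented example: linear barrier template B(x) = c0 + c1*x1 + c2*x2 for the
# discrete-time system x' = 0.5*x on the domain [0, 3]^2, with all constraints
# imposed only at i.i.d. samples (the scenario approach).
rng = np.random.default_rng(1)
A_dyn = 0.5 * np.eye(2)

X_init   = rng.uniform(0.0, 0.5, size=(200, 2))   # samples from the initial set
X_unsafe = rng.uniform(2.0, 3.0, size=(200, 2))   # samples from the unsafe set
X_domain = rng.uniform(0.0, 3.0, size=(400, 2))   # samples from the state domain

def row(x):      # B(x) written as a linear function of the coefficients c = (c0, c1, c2)
    return np.array([1.0, x[0], x[1]])

eps = 1e-3
A_ub, b_ub = [], []
for x in X_init:      # B(x) <= 0 on initial-set samples
    A_ub.append(row(x));                   b_ub.append(0.0)
for x in X_unsafe:    # B(x) >= eps on unsafe-set samples, i.e. -B(x) <= -eps
    A_ub.append(-row(x));                  b_ub.append(-eps)
for x in X_domain:    # B(x') - B(x) <= 0 along the dynamics, on domain samples
    A_ub.append(row(A_dyn @ x) - row(x));  b_ub.append(0.0)

res = linprog(c=np.zeros(3), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-10, 10)] * 3, method="highs")
print("candidate barrier coefficients:", res.x if res.success else "LP infeasible")
```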
The Internet of Things (IoT) has immense potential for a plethora of applications ranging from healthcare automation to defence networks and the power grid. The security of an IoT network is essentially paramount to the security of the underlying computing and communication infrastructure. However, due to constrained resources and limited computational capabilities, IoT networks are prone to various attacks. Thus, safeguarding the IoT network from adversarial attacks is of vital importance and can be realised through the planning and deployment of effective security controls, one such control being an intrusion detection system. In this paper, we present a novel intrusion detection scheme for IoT networks that classifies traffic flows through the application of deep learning concepts. We adopt a newly published IoT dataset and generate generic features from the field information at the packet level. We develop a feed-forward neural network model for binary and multi-class classification, covering denial of service, distributed denial of service, reconnaissance, and information theft attacks against IoT devices. Results obtained by evaluating the proposed scheme on the processed dataset show high classification accuracy.
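A minimal sketch of a feed-forward classifier for binary and multi-class intrusion detection; the flow features and data below are synthetic stand-ins, not the IoT dataset used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in for packet-level flow features (e.g. duration, bytes, packets, rate).
rng = np.random.default_rng(0)
classes = ["normal", "dos", "ddos", "reconnaissance", "theft"]
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(300, 4)) for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Feed-forward network for multi-class classification (two hidden layers, ReLU).
model = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                      max_iter=500, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("multi-class accuracy:", model.score(scaler.transform(X_te), y_te))

# Binary variant: collapse all attack classes into a single "attack" label.
model_bin = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model_bin.fit(scaler.transform(X_tr), (y_tr > 0).astype(int))
print("binary accuracy:", model_bin.score(scaler.transform(X_te), (y_te > 0).astype(int)))
```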
ASA systems (firewalls, IDS, IPS) are likely to become communication bottlenecks as network bandwidths grow. To alleviate this issue, we suggest an application-aware mechanism based on Deep Packet Inspection (DPI) that routes selected traffic around firewalls. Internet video sharing services have gained importance and expanded their share of the multimedia market. Internet video must meet strict quality of service (QoS) criteria for its delivery to be viable at a level of quality comparable to broadcast television. However, since Internet video relies on packet communication, it is subject to delays, transmission failures, data loss, and bandwidth restrictions that can have a catastrophic effect on multimedia quality.
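As a highly simplified illustration of DPI-driven bypass (the payload signatures and the policy are invented placeholders, not a real DPI engine): flows classified as video are steered around the ASA chain, while everything else is inspected.

```python
# Illustrative placeholders only: crude payload signatures that might indicate video traffic.
VIDEO_SIGNATURES = (b"\x47\x40",      # MPEG-TS-like sync pattern (illustrative)
                    b"RTSP/1.0",
                    b"#EXTM3U")       # HLS playlist marker

def classify_payload(payload: bytes) -> str:
    return "video" if any(sig in payload for sig in VIDEO_SIGNATURES) else "other"

def forwarding_decision(payload: bytes) -> str:
    """Video flows are steered around the firewall/IDS/IPS chain; everything else is inspected."""
    return "bypass-asa" if classify_payload(payload) == "video" else "inspect"

print(forwarding_decision(b"#EXTM3U\n#EXT-X-VERSION:3"))   # -> bypass-asa
print(forwarding_decision(b"GET /index.html HTTP/1.1"))    # -> inspect
```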
Trusted Execution Environments (TEEs) provide hardware support to isolate the execution of sensitive operations on mobile phones for improved security. However, they are not always available to application developers. To provide a consistent user experience to users who have a TEE-enabled device and those who do not, developers can turn to Open-TEE, an open-source GlobalPlatform (GP)-compliant software TEE emulator. However, Open-TEE does not offer any of the security properties that hardware TEEs have. In this paper, we propose WhiteBox-TEE, which integrates white-box cryptography with Open-TEE to provide better security while remaining compliant with the GP TEE specifications. We discuss the architecture, provisioning mechanism, implementation highlights, security properties, and performance issues of WhiteBox-TEE, and propose possible revisions to the TEE specifications to make better use of white-box cryptography in software-only TEEs.
The huge volume, variety, and velocity of big data have empowered Machine Learning (ML) techniques and Artificial Intelligence (AI) systems. However, a vast portion of the data used to train AI systems is sensitive information. Hence, any vulnerability has a potentially disastrous impact on privacy and security. Nevertheless, the increasing demand for high-quality AI from governments and companies requires the use of big data in these systems. Several studies have highlighted the threats that big data poses on different platforms and the countermeasures that reduce the risks caused by attacks. In this paper, we provide an overview of the existing threats to privacy and security inflicted by big data as a primary driving force within the AI/ML workflow. We define an adversarial model to investigate the attacks. Additionally, we analyze and summarize the defense strategies and countermeasures against these attacks. Furthermore, given the impact of AI systems on the market and across the vast majority of business sectors, we also investigate Standards Developing Organizations (SDOs) that are actively involved in providing guidelines to protect privacy and ensure the security of big data and AI systems. Our far-reaching goal is to bridge research and standardization efforts to increase the consistency and efficiency of AI system development, guaranteeing customer satisfaction while conveying a high degree of trustworthiness.