Bibliography
In this paper we consider the threat surface and security of air-gapped wallet schemes for permissioned blockchains as preparation for a Markov-based mathematical model, and we quantify the risk associated with private-key leakage. We identify existing threats to the wallet scheme and prior work on both attacking and securing it. We provide an overview of the proposed model and justify our methods. We close with the next steps in our remaining work and the overarching goals and motivation behind our methods.
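As a sketch of the kind of Markov model this abstract prepares for, the following toy example treats the wallet key as moving between secure, exposed, and compromised states; the state names and transition probabilities are our own illustrative assumptions, not values from the paper.

    # A minimal sketch of a key-leakage Markov chain (all probabilities hypothetical).
    import numpy as np

    # States: 0 = key secure, 1 = key exposed (e.g. crossed the air gap), 2 = key compromised
    P = np.array([
        [0.98, 0.02, 0.00],   # secure: occasionally exposed by operational error
        [0.70, 0.20, 0.10],   # exposed: may be re-secured, stay exposed, or leak
        [0.00, 0.00, 1.00],   # compromised: absorbing state
    ])

    start = np.array([1.0, 0.0, 0.0])                    # begin with a secure key
    after_year = start @ np.linalg.matrix_power(P, 365)  # one step per day for a year
    print(f"P(key compromised within a year) = {after_year[2]:.3f}")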
The rate at which a secure key can be generated in a quantum key distribution (QKD) protocol is limited by the channel loss and the quantum bit-error rate (QBER). Increases in the QBER can stem from detector noise, channel noise, or the presence of an eavesdropper, Eve. Eve can obtain information about the unsecured key by attacking the quantum channel or by listening to all discussion carried out over a noiseless public channel. Conventionally, a QKD protocol performs information reconciliation over the authenticated public channel, revealing the parity bits used to correct any quantum bit errors. In this invited paper, the possibility of limiting the information revealed to Eve during information reconciliation is considered. By using a covert communication channel to transmit the parity bits, secure key rates (SKRs) are possible at much higher QBERs. This is demonstrated through the simulation of a polarization-based QKD system implementing the BB84 protocol, showing significant improvement in SKRs over conventional QKD protocols.
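For intuition on why hiding the reconciliation traffic raises the tolerable QBER, a common asymptotic approximation of the BB84 key-rate fraction subtracts both a privacy-amplification term and an error-correction leakage term; if the parity bits are sent covertly, the leakage term drops out. The sketch below uses that textbook approximation with an assumed reconciliation efficiency, not the paper's simulation model.

    # Illustrative asymptotic BB84 key-rate fractions (not the paper's exact model).
    import math

    def h2(p):
        """Binary entropy in bits."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def skr_conventional(qber, f_ec=1.1):
        # Eve sees the parity bits: subtract privacy amplification
        # and error-correction leakage (f_ec is an assumed efficiency).
        return max(0.0, 1.0 - h2(qber) - f_ec * h2(qber))

    def skr_covert(qber):
        # Parity bits hidden from Eve: only privacy amplification remains.
        return max(0.0, 1.0 - h2(qber))

    for q in (0.05, 0.11, 0.20):
        print(f"QBER={q:.2f}: conventional={skr_conventional(q):.3f}, covert={skr_covert(q):.3f}")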
To be prepared against cyberattacks, most organizations resort to security information and event management systems to monitor their infrastructures. These systems depend on the timeliness and relevance of the latest updates, patches and threats provided by cyberthreat intelligence feeds. Open source intelligence platforms, namely social media networks such as Twitter, are capable of aggregating a vast amount of cybersecurity-related sources. To process such information streams, we require scalable and efficient tools capable of identifying and summarizing relevant information for specified assets. This paper presents the processing pipeline of a novel tool that uses deep neural networks to process cybersecurity information received from Twitter. A convolutional neural network identifies tweets containing security-related information relevant to assets in an IT infrastructure. Then, a bidirectional long short-term memory network extracts named entities from these tweets to form a security alert or to fill an indicator of compromise. The proposed pipeline achieves an average 94% true positive rate and 91% true negative rate for the classification task and an average F1-score of 92% for the named entity recognition task, across three case study infrastructures.
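As a rough sketch of the classification stage, a Kim-style convolutional text classifier over tweet tokens might look as follows in PyTorch; the vocabulary size, filter counts and two-class output are our own assumptions, not the authors' configuration.

    # Hypothetical CNN tweet classifier in the spirit of the pipeline's first stage.
    import torch
    import torch.nn as nn

    class TweetCNN(nn.Module):
        def __init__(self, vocab_size=20000, emb_dim=100, n_filters=64, kernel_sizes=(3, 4, 5)):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.convs = nn.ModuleList(
                [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes]
            )
            self.fc = nn.Linear(n_filters * len(kernel_sizes), 2)  # relevant / not relevant

        def forward(self, token_ids):                  # (batch, seq_len)
            x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
            pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
            return self.fc(torch.cat(pooled, dim=1))   # logits over the two classes

    logits = TweetCNN()(torch.randint(0, 20000, (8, 40)))  # e.g. batch of 8 tweets, 40 tokens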
Trust is known to be a key component of human social relationships; to a large extent, it is trust that defines how humans behave with one another. Generative models have been used extensively in the study of social networks to simulate different characteristics and phenomena of social graphs. In this work, an attempt is made to understand how trust in social graphs can be combined with generative modeling techniques to generate trust-based social graphs, which are then compared with the original social graphs to evaluate how trust helps in generative modeling. Two well-known social network data sets, soc-Bitcoin and the wiki administrator network, are used in this work. Social graphs are generated from these data sets and compared with the original graphs, alongside other standard generative modeling techniques, to assess the value of trust as a modeling component. Generative modeling techniques have been available for some time, but this investigation with real social-graph data sets validates that trust can be an important factor in generative modeling.
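As a toy illustration of combining trust with a generative model, the sketch below biases classic preferential attachment by a per-node trust score; this construction and its parameters are our own, not the generative technique evaluated in the paper.

    # Toy trust-biased preferential attachment (illustrative only).
    import random
    import networkx as nx

    def trust_weighted_graph(n_nodes, m_per_node, trust):
        """trust: dict mapping node id -> score in (0, 1]."""
        g = nx.Graph()
        g.add_nodes_from(range(m_per_node))
        for new in range(m_per_node, n_nodes):
            existing = list(g.nodes)
            # Bias attachment by degree * trust instead of degree alone.
            weights = [(g.degree(v) + 1) * trust.get(v, 0.5) for v in existing]
            targets = random.choices(existing, weights=weights, k=m_per_node)
            g.add_edges_from((new, t) for t in set(targets))
        return g

    trust = {v: random.uniform(0.1, 1.0) for v in range(200)}
    g = trust_weighted_graph(200, 3, trust)
    print(g.number_of_nodes(), g.number_of_edges())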
We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC) based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
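For reference, the Network Lasso objective that such training maps onto is, in its standard form (with per-node losses f_i, edge weights w_jk and regularisation strength lambda):

    \min_{\{x_i\}} \; \sum_{i \in V} f_i(x_i) \;+\; \lambda \sum_{(j,k) \in E} w_{jk} \, \lVert x_j - x_k \rVert_2

ADMM splits this sum so that each node updates its local x_i in parallel while neighbouring nodes exchange only consensus variables; it is this per-edge decomposition that makes the formulation a natural fit for a distributed MEC topology.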
Several efforts are currently active in dealing with scenarios combining fog and cloud computing, a significant proportion of which are devoted to controlling and managing the resulting scenario. Although many challenging aspects must be considered in the design of an efficient management solution, there is no doubt that, whatever the solution is, the quality delivered to users when executing services and the security guarantees provided to them are two key aspects of the whole design. Unfortunately, the two requirements often do not converge, making a solution that suitably addresses both a challenging task. In this paper, we propose a decoupled transversal security strategy, referred to as DCF, as a novel architecture-oriented policy for handling the QoS-security trade-off, particularly designed to be applied to combined fog-to-cloud systems, and we specifically highlight its impact on the delivered QoS.
Fog computing extends cloud computing technology to the edge of the infrastructure to support dynamic computation for IoT applications. Reduced latency and location awareness in accessing objects' data are attained by displacing workloads from the central cloud to edge devices. Doing so reduces raw data transfers from target objects to the central cloud, thus overcoming communication bottlenecks; this is a key step towards the pervasive uptake of next-generation IoT-based services. In this work we study efficient orchestration of applications in fog computing, where a fog application is the cascade of a cloud module and a fog module. The problem results in a mixed-integer nonlinear optimisation involving multiple constraints, due to the computation and communication demands of fog applications and the available infrastructure resources, and it also accounts for the location of target IoT objects. We show that the complexity of the original problem can be reduced with a related placement formulation, which is then solved using a greedy algorithm. This algorithm is the core placement logic of FogAtlas, a fog computing platform based on existing virtualization technologies. Extensive numerical results validate the model and the scalability of the proposed algorithm, showing performance close to the optimal solution with respect to the number of served applications.
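To make the flavour of such a greedy placement concrete, the following sketch serves the most demanding applications first and prefers fog nodes co-located with each application's target objects; the data layout and scoring rule are our own illustration, not the FogAtlas logic.

    # Illustrative greedy fog-module placement (not the FogAtlas implementation).
    def greedy_place(apps, fog_nodes):
        """apps/fog_nodes: dicts with 'name', 'cpu' (demand/capacity) and 'region'."""
        placement = {}
        # Serve the most demanding applications first.
        for app in sorted(apps, key=lambda a: a["cpu"], reverse=True):
            candidates = [n for n in fog_nodes if n["cpu"] >= app["cpu"]]
            if not candidates:
                continue  # application cannot be served
            # Prefer a node co-located with the app's target IoT objects.
            best = min(candidates, key=lambda n: 0 if n["region"] == app["region"] else 1)
            best["cpu"] -= app["cpu"]
            placement[app["name"]] = best["name"]
        return placement

    apps = [{"name": "a1", "cpu": 2, "region": "r1"}, {"name": "a2", "cpu": 1, "region": "r2"}]
    nodes = [{"name": "f1", "cpu": 2, "region": "r1"}, {"name": "f2", "cpu": 2, "region": "r2"}]
    print(greedy_place(apps, nodes))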
The Named Data Network (NDN) is a promising network paradigm for content distribution based on caching. However, it may put consumer privacy at risk, as an adversary can identify the content, its name, and its signature (namely, a certificate) through side-channel timing responses from router caches. The adversary may identify the content name and the consumer node by distinguishing between cached and uncached contents. To mitigate the timing attack, other authors have proposed effective countermeasures such as random caching, random freshness, and probabilistic caching. In this work, we have implemented a timing attack scenario to evaluate the efficiency of these countermeasures and to demonstrate how the adversary can be detected. To this end, a brute-force timing attack scenario based on a real topology was developed, the first brute-force attack model applied in NDN. Results show that adversary nodes can be effectively distinguished from legitimate consumers during the attack period. We also propose a multi-level mechanism to detect an adversary node; through this approach, the impact of the attack on content distribution performance can be mitigated.
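The core of the timing side channel can be illustrated in a few lines: a response that arrives much faster than a producer round trip suggests an edge-cache hit, revealing that some nearby consumer recently requested that name. The simulated face and latency figures below are our own, purely for illustration.

    # Simulated cache-timing probe (the face class and latencies are hypothetical).
    import random

    class SimulatedNDNFace:
        """Stand-in for an NDN face; cached names answer much faster."""
        def __init__(self, cached):
            self.cached = set(cached)
        def rtt(self, name):
            # ~2 ms from the edge cache vs ~20 ms from the producer.
            return random.gauss(0.002, 0.0003) if name in self.cached else random.gauss(0.020, 0.003)

    def probe_cached(face, name, threshold_s=0.008):
        # A response well under the producer round trip implies a cache hit,
        # i.e. a nearby consumer has recently requested `name`.
        return face.rtt(name) < threshold_s

    face = SimulatedNDNFace(cached=["/news/today"])
    print(probe_cached(face, "/news/today"), probe_cached(face, "/news/old"))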
Under the new paradigm of the Cellular Internet of Things, through technologies such as Narrowband IoT (NB-IoT), a massive number of IoT devices will be able to use the mobile network infrastructure for their communications. It would be beneficial for these devices to use the same security mechanisms present in the cellular network architecture, so that their connections to the application layer gain additional security. As a way to approach this, an identity management and provisioning mechanism, as well as an identity federation between an IoT platform and the cellular network, is proposed so that an IoT device can be deemed worthy of using the cellular network and performing its actions.
Naturally, Grover is best at detecting its own fake articles, since in a way the agent knows its own processes. But it can also detect those made by other models, such as OpenAI's GPT-2, with high accuracy.
The Internet of Things is growing faster than ever, and operators are planning or already creating dedicated networks for these devices. There is a need for dedicated solutions for this type of network, especially solutions related to information security. In this article we present a security-aware routing mechanism that takes into account the evaluation of trust in devices and packet flows. We use trust relationships between flows and network nodes to create secure SDN paths, without ignoring QoS and energy criteria. The system uses an SDN infrastructure enriched with Cognitive Packet Network (CPN) mechanisms; routing decisions are made by Random Neural Networks trained with data fetched by Cognitive Packets. The proposed network architecture, implementing the security-by-design concept, was designed and is being implemented within the SerIoT project to demonstrate secure networks for the Internet of Things (IoT).
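As a toy illustration of the kind of goal function such routing minimises, the sketch below scores candidate paths by a weighted mix of distrust, delay and energy; the weights and fields are illustrative and are not taken from the SerIoT implementation.

    # Illustrative security/QoS/energy path scoring (weights are hypothetical).
    def path_cost(path_links, w_sec=0.5, w_qos=0.3, w_energy=0.2):
        """path_links: list of dicts with per-link 'trust' in [0,1], 'delay_ms', 'energy'."""
        return sum(
            w_sec * (1.0 - l["trust"]) + w_qos * l["delay_ms"] / 100.0 + w_energy * l["energy"]
            for l in path_links
        )

    paths = {
        "via_trusted": [{"trust": 0.9, "delay_ms": 20, "energy": 0.3}] * 3,
        "via_short":   [{"trust": 0.4, "delay_ms": 10, "energy": 0.2}] * 2,
    }
    best = min(paths, key=lambda p: path_cost(paths[p]))
    print(best)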
Attacks on cloud-computing services are becoming more prevalent, with recent victims including Tesla, Aviva Insurance and SIM-card manufacturer Gemalto [1]. The risk posed to organisations by malicious insiders is becoming more widely known, and consequently many are now investing in hardware, software and new processes to try to detect these attacks. As with all attack vectors, there will always be those which are not known about and those which are known about but remain exceptionally difficult to detect, particularly in a timely manner. We believe that insider attacks are of particular concern in a cloud-computing environment, and that cloud-service providers should enhance their ability to detect them by means of indirect detection. We propose a combined attack-tree and kill-chain based method for identifying multiple indirect detection measures. Specifically, the use of attack trees enables us to encapsulate all detection opportunities for insider attacks in cloud-service environments. Overlaying the attack tree on top of a kill chain in turn facilitates indirect detection opportunities higher up the tree, as well as allowing the provider to determine how far an attack has progressed once suspicious activity is detected. We demonstrate the method by considering a specific type of insider attack, that of attempting to capture virtual machines in transit within a cloud cluster via a network tap; however, the process discussed here applies equally to all cloud paradigms.
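A minimal sketch of the underlying data structure, with hypothetical node names and a generic kill chain (not the paper's actual tree), shows how a detected step maps to progression along the chain.

    # Attack-tree nodes tagged with kill-chain stages (node names hypothetical).
    from dataclasses import dataclass, field

    KILL_CHAIN = ["recon", "weaponize", "deliver", "exploit", "install", "c2", "act"]

    @dataclass
    class AttackNode:
        name: str
        stage: str                      # kill-chain stage this step belongs to
        children: list = field(default_factory=list)

    def progression(detected_stage):
        """How far along the kill chain an attack has progressed once detected."""
        return KILL_CHAIN.index(detected_stage) + 1, len(KILL_CHAIN)

    root = AttackNode("capture VM in transit", "act", [
        AttackNode("install network tap", "install", [
            AttackNode("gain physical rack access", "deliver"),
        ]),
    ])

    done, total = progression("install")
    print(f"attack detected at stage {done}/{total} of the kill chain")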
In recent years, almost all real-world operations have been transferred to the cyber world, and these computers connect with each other via the Internet. As a result, there is an increasing number of security breaches in networks whose administrators cannot protect them from all types of attacks. Although most of these attacks can be prevented with firewalls, encryption mechanisms, access controls and password protection mechanisms, the emergence of new types of attacks means that a dynamic intrusion detection mechanism is always needed in the information security market. To keep an Intrusion Detection System (IDS) dynamic, it should be updated using a modern learning mechanism. The neural network approach is one of the most preferred algorithms for training such systems; however, with the increasing power of parallel computing and the use of big data for training, deep learning has come to be used in many modern real-world problems. Therefore, in this paper we propose an IDS that uses GPU-powered deep learning algorithms. Experimental results collected on the widely used KDD99 dataset show that GPU use speeds up training time by up to 6.48 times, depending on the number of hidden layers and the nodes within them. Additionally, we compare different optimizers to help researchers select the best one for their ongoing or future research.
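As a sketch of the GPU-versus-CPU setup such experiments rely on, the snippet below builds a small fully connected classifier in PyTorch and runs one training step on whichever device is available; the layer sizes, the 41-feature encoding of KDD99 records and the five output classes are assumptions for illustration.

    # Illustrative GPU-backed IDS classifier (architecture and sizes hypothetical).
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(                # a small fully connected IDS classifier
        nn.Linear(41, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 5),                 # e.g. normal traffic + 4 attack categories
    ).to(device)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # one candidate optimizer
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(256, 41, device=device)              # stand-in minibatch
    y = torch.randint(0, 5, (256,), device=device)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(device, float(loss))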