Biblio
This research-in-progress paper describes the role of cyber security measures undertaken in an ICT system for integrating electric storage technologies into the grid. To do so, it defines security requirements for a communications gateway and gives detailed information and hands-on configuration advice on node and communication line security, data storage, and coping with backend M2M communications protocols, and it examines privacy issues. The presented research paves the way for developing secure smart energy communications devices that help enhance energy efficiency. The described measures are implemented in an actual gateway device within the HORIZON 2020 project STORY, which aims at developing new ways to use storage and demonstrating them at six different demonstration sites.
Named Data Networks provide a clean-slate redesign of the Future Internet for efficient content distribution. Because the Internet of Things is expected to form a significant part of the Future Internet, much of the content will be managed by constrained devices. Such devices are often equipped with limited CPU, memory, bandwidth, and energy supply. However, the current Named Data Networks design neglects the specific requirements of Internet of Things scenarios, and many data structures need to be further optimized. The purpose of this research is to provide an efficient routing strategy for Named Data Networks by constructing a Forwarding Information Base using Iterated Bloom Filters, defined as I(FIB)F. We propose the use of content names based on iterative hashes. This strategy reduces packet overhead. Moreover, the memory and the complexity required in the forwarding strategy are lower than in current solutions. We compare our proposal with solutions based on hierarchical names and standard Bloom filters, and we show how to further optimize I(FIB)F by exploiting the structure information contained in hierarchical content names. Finally, two strategies may be followed to reduce either (i) the overall memory required for routing or (ii) the probability of false positives.
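The following is a minimal, self-contained Python sketch of the underlying idea (not the paper's I(FIB)F construction): hierarchical name components are hashed iteratively, and the resulting digests are inserted into per-face Bloom filters that form a compact FIB. The names, filter sizes, and the longest-prefix lookup loop are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; bit positions are derived by iterated hashing."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        digest = item
        for _ in range(self.k):
            digest = hashlib.sha256(digest).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def iterated_name_digests(name: str):
    """Hash hierarchical name components iteratively: h1 = H(c1), h2 = H(h1 || c2), ..."""
    digest = b""
    for component in name.strip("/").split("/"):
        digest = hashlib.sha256(digest + component.encode()).digest()
        yield digest

# Hypothetical FIB: one Bloom filter per outgoing face.
fib = {"face0": BloomFilter(), "face1": BloomFilter()}
for d in iterated_name_digests("/sensors/building7/temperature"):
    fib["face0"].add(d)

# Lookup: forward via the face whose filter matches the deepest name prefix.
prefix_digests = list(iterated_name_digests("/sensors/building7/temperature/room3"))
best_face, best_depth = None, 0
for face, bf in fib.items():
    depth = 0
    for d in prefix_digests:
        if d not in bf:
            break
        depth += 1
    if depth > best_depth:
        best_face, best_depth = face, depth
print("forward via", best_face)
```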
Symmetric block ciphers, which represent a core element for building cryptographic communication systems and protocols, are used to provide message confidentiality, authentication and integrity. Various limitations in hardware and software resources, especially in terminal devices used in mobile communications, affect the selection of an appropriate cryptosystem and its parameters. In this paper, an implementation of three symmetric ciphers (DES, 3DES, AES) used in different operating modes is analyzed on the Android platform. The cryptosystems' performance is analyzed in different scenarios using several variable parameters: cipher, key size, plaintext size and number of threads. The influence of the parallelization supported by multi-core CPUs on cryptosystem performance is also analyzed. Finally, some conclusions about parameter selection for optimal efficiency are given.
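The paper's measurements were made on Android; the hypothetical Python sketch below (assuming the pycryptodome package) only illustrates the kind of parameter sweep described, timing DES, 3DES and AES in CBC mode over different plaintext sizes. Key sizes and plaintext sizes are illustrative choices.

```python
import time
from Crypto.Cipher import AES, DES, DES3   # pip install pycryptodome
from Crypto.Random import get_random_bytes

def benchmark(name, factory, key_len, block_len, plaintext_size):
    key = get_random_bytes(key_len)
    iv = get_random_bytes(block_len)
    data = get_random_bytes(plaintext_size)          # multiple of the block size, so no padding
    cipher = factory(key, iv)
    start = time.perf_counter()
    cipher.encrypt(data)
    elapsed = time.perf_counter() - start
    print(f"{name:5s} key={key_len * 8:3d} bits "
          f"plaintext={plaintext_size // 1024:5d} KiB: {elapsed * 1000:.2f} ms")

# CBC mode for all three ciphers; sweep over two plaintext sizes.
for size in (64 * 1024, 1024 * 1024):
    benchmark("DES",  lambda k, iv: DES.new(k, DES.MODE_CBC, iv),   8,  8, size)
    benchmark("3DES", lambda k, iv: DES3.new(k, DES3.MODE_CBC, iv), 24, 8, size)
    benchmark("AES",  lambda k, iv: AES.new(k, AES.MODE_CBC, iv),   16, 16, size)
```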
The volume of digital data is increasing at an ever faster rate, and the security of that data is at risk both in transit on a network and at rest. The execution time of full disk encryption on large servers is significant because of the computational complexity associated with disk encryption, so it is necessary to reduce the execution time of full disk encryption from the application point of view. In this work, a full disk encryption algorithm, namely EME2 AES (Encrypt Mix Encrypt V2 Advanced Encryption Standard), is analyzed. The execution time of this algorithm is reduced by means of a multicore-compatible parallel implementation that makes use of the available cores. The parallel implementation is executed on a multicore machine with 8 cores, and the speedup of the multicore implementation is measured. Results show that the multicore implementation of EME2 AES using OpenMP is up to 2.85 times faster than sequential execution for the chosen infrastructure and data range.
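EME2 itself is a wide-block mode whose details are beyond this sketch; assuming the pycryptodome package, the hypothetical Python sketch below only illustrates the parallelization idea: disk sectors are encrypted independently (here with AES-CBC and a per-sector derived IV) and distributed over a pool of worker processes, analogous to the OpenMP threads used in the paper.

```python
import os
from hashlib import sha256
from multiprocessing import Pool
from Crypto.Cipher import AES           # pip install pycryptodome

SECTOR = 4096

def encrypt_sector(args):
    """Encrypt one 4 KiB sector; the IV is derived from the sector index (ESSIV-like, simplified)."""
    key, index, data = args
    iv = sha256(key + index.to_bytes(8, "big")).digest()[:16]
    return AES.new(key, AES.MODE_CBC, iv).encrypt(data)

def encrypt_disk_image(image: bytes, key: bytes, workers: int = 8) -> bytes:
    jobs = [(key, i, image[i * SECTOR:(i + 1) * SECTOR])
            for i in range(len(image) // SECTOR)]
    with Pool(processes=workers) as pool:            # workers mirror the available cores
        return b"".join(pool.map(encrypt_sector, jobs))

if __name__ == "__main__":
    key = os.urandom(32)                             # AES-256 key
    plain = os.urandom(SECTOR * 256)                 # small synthetic "disk image"
    print(len(encrypt_disk_image(plain, key)), "bytes encrypted")
```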
Modern security protocols may involve humans in order to compare or copy short strings between different devices. Multi-factor authentication protocols, such as Google 2-factor or 3D-Secure, are typical examples of such protocols. However, such short strings may be subject to brute force attacks. In this paper we propose a symbolic model which includes attacker capabilities for both guessing short strings and producing collisions when short strings result from an application of weak hash functions. We propose a new decision procedure for analysing (a bounded number of sessions of) protocols that rely on short strings. The procedure has been integrated in the AKISS tool and tested on protocols from the ISO/IEC 9798-6:2010 standard.
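As a concrete, hypothetical illustration of why short strings are brute-forceable, the sketch below enumerates all four-digit codes against a deliberately truncated hash; it is not the symbolic model or the AKISS procedure described in the paper.

```python
import hashlib
from itertools import product

def weak_hash(s: str) -> str:
    """A deliberately weak 'short hash': only the first 2 bytes of SHA-256."""
    return hashlib.sha256(s.encode()).hexdigest()[:4]

# The attacker observes the weak hash of a 4-digit code and enumerates all candidates.
secret = "4711"
observed = weak_hash(secret)
candidates = ["".join(d) for d in product("0123456789", repeat=4)
              if weak_hash("".join(d)) == observed]
print(f"{len(candidates)} candidate code(s) match the observed hash: {candidates[:5]}")
```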
In the Energy Internet mode, a large volume of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously hampers information collection and fault analysis and delays accident handling by the monitoring staff. To address this, this paper proposes a method for power grid monitoring that monitors and diagnoses faults in real time: it constructs an equipment fault logical model based on five-section alarm information, builds a standard fault information set, and uses a matching algorithm to realize fault information optimization, fault equipment location, fault type diagnosis, and analysis of false-report and missing-report messages. The validity and practicality of the proposed method were verified with an actual case, showing that it can shorten the time needed to obtain and analyze fault information, accelerate accident handling, and help ensure the safe and stable operation of the power grid.
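The exact five-section model and matching algorithm are not given in the abstract; the hypothetical Python sketch below only illustrates the general matching idea: received alarms are compared against a standard fault information set to locate the fault type and to flag missing-report and false-report messages. Fault templates and alarm names are invented for illustration.

```python
# Hypothetical standard fault information set: expected alarms per fault type.
STANDARD_FAULTS = {
    "transformer_fault": {"protection_trip", "breaker_open", "oil_temp_high"},
    "line_fault":        {"protection_trip", "breaker_open", "reclose_fail"},
}

def diagnose(received_alarms):
    """Score each fault template by overlap; report missing and unexpected alarms."""
    best, best_score = None, 0.0
    for fault, expected in STANDARD_FAULTS.items():
        score = len(received_alarms & expected) / len(expected)
        if score > best_score:
            best, best_score = fault, score
    if best is None:
        return {"fault": None}
    expected = STANDARD_FAULTS[best]
    return {"fault": best,
            "confidence": round(best_score, 2),
            "missing_report": sorted(expected - received_alarms),
            "false_report": sorted(received_alarms - expected)}

print(diagnose({"protection_trip", "breaker_open", "voltage_dip"}))
```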
In this paper we investigate whether and how hardware-based roots of trust, namely Trusted Platform Modules (TPMs), can improve the security of the communication protocol OPC UA (Open Platform Communications Unified Architecture) under reasonable assumptions, i.e. the Dolev-Yao attacker model. Our analysis shows that TPMs may serve for generating (via the built-in RNG) and securely storing cryptographic keys, as crypto-coprocessors for weak systems, as well as for remote attestation. We propose to include these TPM functions into OPC UA via so-called ConformanceUnits, which can serve as building blocks of profiles that are used by clients and servers for negotiating the parameters of a session. Finally, we present first results regarding the performance of a client-server communication that includes an additional OPC UA server providing remote attestation of other OPC UA servers.
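Neither the TPM command set nor the OPC UA API is reproduced here; the sketch below is only a conceptual, hypothetical illustration of the remote-attestation principle (replaying a measurement log into a PCR-style digest and comparing it with a golden value), using plain hashing rather than real TPM or OPC UA calls.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR-style extend operation: new_pcr = H(old_pcr || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def replay(event_log):
    """Replay a measurement log into a single PCR-like value, starting from all zeros."""
    pcr = bytes(32)
    for event in event_log:
        pcr = extend(pcr, event)
    return pcr

# Hypothetical 'golden' log for a server image vs. the log the server actually reports.
golden_log   = [b"bootloader-v2", b"kernel-5.10", b"opcua-server-1.4"]
reported_log = [b"bootloader-v2", b"kernel-5.10", b"opcua-server-1.4"]

quoted_pcr = replay(reported_log)     # in a real deployment this value would be signed by the TPM
print("attestation ok:", quoted_pcr == replay(golden_log))
```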
The Trusted Platform Module (TPM) is an international standard for a security chip that can be used for the management of cryptographic keys and for remote attestation. The specification of the most recent TPM 2.0 interfaces for direct anonymous attestation unfortunately has a number of severe shortcomings. First of all, they do not allow for security proofs (indeed, the published proofs are incorrect). Second, they provide a Diffie-Hellman oracle w.r.t. the secret key of the TPM, weakening the security and preventing forward anonymity of attestations. Fixes to these problems have been proposed, but they create new issues: they enable a fraudulent TPM to encode information into an attestation signature, which could be used to break anonymity or to leak the secret key. Furthermore, all proposed ways to remove the Diffie-Hellman oracle either strongly limit the functionality of the TPM or would require significant changes to the TPM 2.0 interfaces. In this paper we provide a better specification of the TPM 2.0 interfaces that addresses these problems and requires only minimal changes to the current TPM 2.0 commands. We then show how to use the revised interfaces to build q-SDH- and LRSW-based anonymous attestation schemes, and prove their security. We finally discuss how to obtain other schemes addressing different use cases such as key-binding for U-Prove and e-cash.
Establishing and operating an Information Security Management System (ISMS) to protect information assets and information systems is in itself a challenge for larger enterprises and small and medium-sized businesses alike. A high level of automation is required to reduce operational efforts to an acceptable level when implementing an ISMS. In this paper we present the ADAMANT framework, which increases automation in information security management as a whole by establishing a continuous, risk-driven and context-aware ISMS that not only automates security controls but considers all highly interconnected information security management tasks. We further illustrate how ADAMANT is suited to establish an ISO 27001 compliant ISMS for small and medium-sized enterprises and how not only the monitoring of security controls but a majority of ISMS-related activities can be supported through automated process execution and workflow enactment.
While the growth of cloud-based technologies has benefited society tremendously, it has also expanded the attack surface for cyber attacks. Given that cloud services are prevalent today, it is critical to devise systems that detect intrusions. One form of security breach in the cloud is when cyber-criminals compromise Virtual Machines (VMs) of unwitting users and then utilize the users' resources to run time-consuming, malicious, or illegal applications for their own benefit. This work proposes a method to detect unusual resource usage trends and alert the user and the administrator in real time. We experiment with three categories of methods: simple statistical techniques, unsupervised classification, and regression. So far, our approach successfully detects anomalous resource usage when experimenting with typical trends synthesized from published real-world web server logs and cluster traces. We observe the best results with unsupervised classification, which gives an average F1-score of 0.83 for the web server logs and 0.95 for the cluster traces.
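As an illustration of the "simple statistical techniques" category, the following Python sketch flags resource-usage samples whose z-score against a trailing window exceeds a threshold; the synthetic trace, window size, and threshold are invented for illustration and this is not the paper's exact method.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag points whose z-score against the trailing window exceeds the threshold."""
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            alerts.append((i, samples[i]))
    return alerts

# Synthetic CPU-usage trace: steady load followed by a sudden sustained spike.
trace = [20.0 + (i % 5) for i in range(60)] + [95.0, 97.0, 96.0]
for index, value in detect_anomalies(trace):
    print(f"anomalous CPU usage {value:.1f}% at sample {index}")
```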
IT system risk assessments are indispensable due to increasing cyber threats within our ever-growing IT systems. Moreover, laws and regulations urge organizations to conduct risk assessments regularly. Even though several risk management frameworks and methodologies exist, they are in general high level and do not define the risk metrics, risk metric values, and detailed risk assessment formulas for different risk views. To address this need, we define a novel risk assessment methodology specific to IT systems. Our model is quantitative, both asset and vulnerability centric, and defines low- and high-level risk metrics. High-level risk metrics are defined in two general categories: base and attack graph-based. In this paper, we provide a detailed explanation of the formulations in each category and make our implemented software publicly available for those who are interested in applying the proposed methodology to their IT systems.
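The paper's own formulas are not reproduced in the abstract; the hypothetical Python sketch below merely illustrates what an asset- and vulnerability-centric base risk score could look like (asset criticality times the worst likelihood-impact product among its vulnerabilities), with invented assets and scores.

```python
# Hypothetical base risk score (not the paper's formulas):
# risk(asset) = 10 * criticality * max over its vulnerabilities of (likelihood * impact)

assets = {
    "db-server":    {"criticality": 0.9,
                     "vulns": [{"id": "CVE-A", "likelihood": 0.7, "impact": 0.9},
                               {"id": "CVE-B", "likelihood": 0.3, "impact": 0.5}]},
    "web-frontend": {"criticality": 0.6,
                     "vulns": [{"id": "CVE-C", "likelihood": 0.8, "impact": 0.4}]},
}

def base_risk(asset):
    worst = max((v["likelihood"] * v["impact"] for v in asset["vulns"]), default=0.0)
    return 10 * asset["criticality"] * worst

# Rank assets by their base risk, highest first.
for name, asset in sorted(assets.items(), key=lambda kv: -base_risk(kv[1])):
    print(f"{name:12s} base risk = {base_risk(asset):.2f} / 10")
```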
Increasing interest in cyber-physical systems with integrated computational and physical capabilities that can interact with humans can be identified in research and practice. Since these systems can be classified as safety- and security-critical systems, the need for safety and security assurance and certification will grow. Moreover, these systems are typically characterized by fragmentation, interconnectedness, heterogeneity, short release cycles, cross-organizational nature and high interference between safety and security requirements. These properties, combined with the assurance of compliance to multiple standards, carrying out certification and re-certification, and the lack of an approach to model, document and integrate safety and security requirements, represent a major challenge. In order to address this gap, we developed a domain-agnostic approach to model security and safety requirements in an integrated view to support certification processes during the design and run-time phases of cyber-physical systems.
Cloud computing has become a widely used computing paradigm providing on-demand computing and storage capabilities based on a pay-as-you-go model. Recently, many organizations, especially in the field of big data, have been adopting the cloud model to perform data analytics through leasing powerful Virtual Machines (VMs). VMs can be attractive targets for attackers as well as untrusted cloud providers who aim to get unauthorized access to business-critical data. The obvious security solution is to perform data analytics on encrypted data using cryptographic algorithms such as the Advanced Encryption Standard (AES). However, it is very easy to obtain AES cryptographic keys from the VM's Random Access Memory (RAM). In this paper, we present a novel key-scattering (KS) approach to protect the cryptographic keys while encrypting/decrypting data. Our solution is highly portable and interoperable; thus, it could be integrated within today's existing cloud architecture without the need for further modifications. The feasibility of the approach has been proven by implementing a functioning prototype. The evaluation results show that our approach is substantially more resilient to brute force attacks and key extraction tools than the standard AES algorithm, with acceptable execution time.
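The details of the KS scheme are not given in the abstract; the sketch below (assuming the pycryptodome package) only illustrates the general idea of never keeping the full AES key in one place: the key is split into XOR shares that are recombined only for the duration of a cipher call. Share count, mode, and nonce size are illustrative choices.

```python
import os
from Crypto.Cipher import AES          # pip install pycryptodome

def xor_all(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def scatter_key(key: bytes, shares: int = 4):
    """Split a key into XOR shares; every share is needed to rebuild it."""
    parts = [os.urandom(len(key)) for _ in range(shares - 1)]
    last = bytes(k ^ p for k, p in zip(key, xor_all(parts)))
    return parts + [last]

def encrypt_with_scattered_key(shares, nonce, data):
    """Recombine the shares only for the duration of the cipher call."""
    key = xor_all(shares)
    return AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(data)

key = os.urandom(16)                                   # AES-128 key
shares = scatter_key(key)                              # in practice held in separate buffers
ciphertext = encrypt_with_scattered_key(shares, os.urandom(8), b"business-critical data")
print(ciphertext.hex())
```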
Intellectual property is inextricably linked to the innovative development of mass innovation spaces. The coordinated development of intellectual property and mass innovation spaces will fundamentally support the new economic model of “mass entrepreneurship and innovation”. As such, it is critical to explore intellectual property service standards for mass innovation spaces and to steer mass innovation spaces toward the creation of an intellectual property service system catering to “makers”. In addition, it is crucial to explore intellectual property cluster management innovations for mass innovation spaces.
Nowadays, the need for fast access to data is increasing along with the exponential growth of the security field. QR codes have served as a useful tool for fast and convenient sharing of data. However, with increased usage, QR codes have become vulnerable to attacks such as phishing, pharming, manipulation and exploitation. These security flaws can pose a danger to the average user. In this paper we propose an approach, called Secured QR (SQR), to address these issues. In this approach we secure a QR code with the help of a key on the generator side, and the same key is used to recover the original information on the scanner side. We use the AES algorithm for this purpose. The SQR approach is applicable when sensitive information is shared or used within an organization, such as sharing of profile details, exchange of payment information, business cards, and generation of electronic tickets.
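A minimal sketch of the generator/scanner flow, assuming the third-party pycryptodome and qrcode packages; the payload format (nonce|tag|ciphertext, Base64-encoded) and the use of AES-GCM are illustrative choices, not necessarily the authors' exact SQR format.

```python
import base64, os
import qrcode                              # pip install qrcode[pil]
from Crypto.Cipher import AES              # pip install pycryptodome

def make_secured_qr(plaintext: str, key: bytes, path: str = "secured_qr.png"):
    """Generator side: encrypt with AES-GCM, then encode nonce|tag|ciphertext as a QR code."""
    cipher = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext.encode())
    payload = base64.b64encode(cipher.nonce + tag + ciphertext).decode()
    qrcode.make(payload).save(path)
    return payload

def read_secured_qr(payload: str, key: bytes) -> str:
    """Scanner side: the same shared key recovers and authenticates the payload."""
    raw = base64.b64decode(payload)
    nonce, tag, ciphertext = raw[:16], raw[16:32], raw[32:]
    return AES.new(key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(ciphertext, tag).decode()

key = os.urandom(16)
payload = make_secured_qr("name=Alice;ticket=E-1234", key)
print(read_secured_qr(payload, key))
```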
The area of secure compilation aims to design compilers which produce hardened code that can withstand attacks from co-linked low-level components. So far, there is no formal correctness criterion for secure compilers that comes with a clear understanding of what security properties the criterion actually provides. Ideally, we would like a criterion that, if fulfilled by a compiler, guarantees that large classes of security properties of source-language programs continue to hold in the compiled program, even as the compiled program is run against adversaries with low-level attack capabilities. This paper provides such a novel correctness criterion for secure compilers, called trace-preserving compilation (TPC). We show that TPC preserves a large class of security properties, namely all safety hyperproperties. Further, we show that TPC preserves more properties than full abstraction, the de facto criterion used for secure compilation. Then, we show that several fully abstract compilers described in the literature satisfy an additional, common property, which implies that they also satisfy TPC. As an illustration, we prove that a fully abstract compiler from a typed source language to an untyped target language satisfies TPC.
Nowadays the adoption of IoT solutions is gaining high momentum in several fields, including energy, home and environment monitoring, transportation, and manufacturing. However, cybersecurity attacks on low-cost end-user devices can severely undermine the expected deployment of IoT solutions in a broad range of scenarios. To face these challenges, emerging software-based networking features can introduce new security enablers, providing the further scalability and flexibility required to cope with massive IoT. In this paper, we present a novel framework aiming to exploit SDN/NFV-based security features and devise new efficient integration with existing IoT security approaches. The potential benefits of the proposed framework are validated in two case studies. Finally, a feasibility study is presented, accounting for potential interactions with open-source SDN/NFV projects and relevant standardization activities.
Several proposed defect prediction models are effective when historical datasets are available, but defect prediction becomes difficult when no historical data exist. Cross-project defect prediction (CPDP), which uses projects from other sources/companies to predict the defects in the target projects and has been proposed in recent studies, has shown promising results. However, the performance of most CPDP approaches is still far from satisfactory, mainly due to distribution mismatch between the source and target projects. In this study, a credibility theory based Naïve Bayes (CNB) classifier is proposed to establish a novel reweighting mechanism between the source projects and target projects so that the source data can simultaneously adapt to the target data distribution and retain its own pattern. Our experimental results show the feasibility of the novel algorithm design and demonstrate a significant improvement achieved by CNB over other CPDP approaches in terms of the performance metrics considered.
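The credibility-theory weights themselves are not specified in the abstract; the hypothetical Python sketch below illustrates the general reweighting idea only: source instances are weighted by their closeness to the (unlabeled) target distribution and a Gaussian Naive Bayes model is trained with those instance weights. The data, features, and weighting kernel are invented for illustration.

```python
import math

def log_gaussian_pdf(x, mu, var):
    return -((x - mu) ** 2) / (2 * var) - 0.5 * math.log(2 * math.pi * var)

def instance_weight(x, target_center, scale=1.0):
    """Hypothetical reweighting: source instances closer to the target centroid count more."""
    dist2 = sum((a - b) ** 2 for a, b in zip(x, target_center))
    return math.exp(-dist2 / (2 * scale ** 2))

def train_weighted_nb(X, y, weights):
    """Weighted Gaussian Naive Bayes: per-class weighted priors, means and variances."""
    model, total_w = {}, sum(weights)
    for label in set(y):
        idx = [i for i, lab in enumerate(y) if lab == label]
        w, sw = [weights[i] for i in idx], sum(weights[i] for i in idx)
        means, variances = [], []
        for f in range(len(X[0])):
            col = [X[i][f] for i in idx]
            mu = sum(wi * xi for wi, xi in zip(w, col)) / sw
            var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, col)) / sw + 1e-3
            means.append(mu)
            variances.append(var)
        model[label] = (sw / total_w, means, variances)
    return model

def predict(model, x):
    def score(label):
        prior, means, variances = model[label]
        return math.log(prior) + sum(log_gaussian_pdf(xi, mu, var)
                                     for xi, mu, var in zip(x, means, variances))
    return max(model, key=score)

# Tiny synthetic example: source-project metrics (e.g. size, complexity) and defect labels.
source_X = [[0.5, 2.0], [0.6, 2.2], [3.0, 8.0], [3.2, 7.5], [0.4, 1.8], [2.8, 9.0]]
source_y = [0, 0, 1, 1, 0, 1]
target_center = [1.0, 3.0]            # would be estimated from unlabeled target-project data
weights = [instance_weight(x, target_center) for x in source_X]
model = train_weighted_nb(source_X, source_y, weights)
print("predicted label:", predict(model, [0.7, 2.5]))   # expected: 0 (non-defective)
```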
In recent years, chaos-based cryptographic algorithms have enabled new and efficient ways to develop secure image encryption techniques. In this paper, we propose a new approach for image encryption based on chaotic maps in order to meet the requirements of secure image encryption. The chaos-based image encryption technique uses simple chaotic maps which are very sensitive to initial conditions. Using mixed chaotic maps, which work through simple substitution and transposition techniques, to encrypt the original image yields better performance with lower computational complexity, which in turn gives high crypto-secrecy. The initial conditions of the chaotic maps serve as the seed, and only a receiver holding that seed can decrypt the message. The results of the experiments, statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way to encrypt images.
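A minimal Python sketch of the substitution stage only (the paper's scheme also includes a transposition stage, which is omitted here): a keystream is generated from the logistic map and XORed with the pixel bytes, so the same seed recovers the image. The map parameters and the tiny "image" are illustrative assumptions.

```python
def logistic_keystream(length, x0, r=3.99):
    """Generate a byte keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x, stream = x0, bytearray()
    for _ in range(length):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def chaotic_encrypt(pixels: bytes, x0: float, r: float = 3.99) -> bytes:
    """Substitution stage: XOR each pixel with the chaotic keystream (decryption is identical)."""
    stream = logistic_keystream(len(pixels), x0, r)
    return bytes(p ^ s for p, s in zip(pixels, stream))

# Tiny 4x4 grayscale 'image'; the seed x0 plays the role of the shared secret.
image = bytes(range(16))
cipher_img = chaotic_encrypt(image, x0=0.345678)
assert chaotic_encrypt(cipher_img, x0=0.345678) == image   # the same seed recovers the image
print(cipher_img.hex())
```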
Software discovery is a key management function to ensure that systems are free of vulnerabilities, comply with licensing requirements, and support advanced search for systems containing given software. Today, software is predominantly discovered by querying package management tools or by using rules that check file metadata or contents. These approaches are inadequate because not every piece of software is installed through package managers, and agile development practices lead to frequent deployment of software. Other approaches to software discovery use machine learning methods that require a training phase, or require maintaining knowledge bases. Columbus uses knowledge of the software packaging practices that have evolved over time and exploits the information embedded in the file-system impression created by a software package to discover it. Columbus is able to discover software in 92% of all official Docker images. Further, Columbus can be used in problem diagnosis and drift detection situations to compare two different systems, or to determine the evolution of a system over time.
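Columbus's actual algorithm is not detailed in the abstract; the hypothetical Python sketch below only illustrates discovery from a file-system impression: file paths found on a system are matched against known per-package path fingerprints. The fingerprints and scoring are invented for illustration.

```python
import os

# Hypothetical fingerprints: path fragments a package's file-system impression typically contains.
FINGERPRINTS = {
    "nginx":      {"sbin/nginx", "etc/nginx/nginx.conf"},
    "postgresql": {"bin/postgres", "bin/pg_ctl"},
    "openjdk":    {"bin/java", "lib/modules"},
}

def discover(root="/"):
    """Walk the file system once and score each package by the fraction of its fingerprint found."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            found.add(rel.replace(os.sep, "/"))
    report = {}
    for package, fragments in FINGERPRINTS.items():
        hits = sum(any(path.endswith(frag) for path in found) for frag in fragments)
        report[package] = hits / len(fragments)
    return report

for package, score in discover("/usr").items():
    print(f"{package:12s} match = {score:.0%}")
```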
Data sharing is a significant functionality in cloud storage. Cloud storage providers are responsible for keeping the data obtainable and available, and for keeping the physical environment protected and running. Here we can securely, efficiently, and flexibly share data with others in cloud storage. A new public-key cryptosystem is proposed which creates constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is achievable. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, while encompassing the power of all the keys being aggregated. This compact aggregate key can easily be sent to others or stored in a smart card with very limited secure storage. In KAC, users encrypt each file with its own key; in addition, aggregate keys covering two or more files can be formed by using a tree structure. Through this, the user can share multiple files with a single key at a time.