Biblio
The increasing demand for secure interactions between network domains brings new challenges to access control technologies. In this paper we design an access control framework that provides a multilevel mapping method between hierarchical access control structures, achieving multilevel security protection in cross-domain networks. Hierarchical access control structures ensure rigorous multilevel security within individual domains, and a mapping method based on subject attributes is proposed to determine a subject's security level in its target domain. Experimental results obtained from simulations are also reported to verify the effectiveness of the proposed access control model.
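The abstract does not specify the mapping method itself; a minimal sketch of attribute-based level mapping between two domain hierarchies might look as follows. The level names, the `clearance_verified` attribute, and the capping rule are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch of attribute-based level mapping between two
# domain hierarchies; names and the mapping rule are assumptions.
DOMAIN_LEVELS = {
    "domain_a": ["public", "internal", "secret", "top_secret"],
    "domain_b": ["low", "medium", "high"],
}

def map_level(src_domain: str, src_level: str,
              dst_domain: str, attributes: dict) -> str:
    """Map a subject's level into the target domain's hierarchy."""
    src = DOMAIN_LEVELS[src_domain]
    dst = DOMAIN_LEVELS[dst_domain]
    rank = src.index(src_level) / (len(src) - 1)  # normalize to [0, 1]
    # Conservative rule: unverified subjects are capped at mid-level.
    if not attributes.get("clearance_verified", False):
        rank = min(rank, 0.5)
    return dst[round(rank * (len(dst) - 1))]

print(map_level("domain_a", "top_secret", "domain_b",
                {"clearance_verified": True}))   # -> "high"
print(map_level("domain_a", "top_secret", "domain_b",
                {"clearance_verified": False}))  # -> "medium"
```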
Establishing and operating an Information Security Management System (ISMS) to protect information assets and information systems is a challenge for large enterprises and small and medium-sized businesses alike. A high level of automation is required to reduce operational effort to an acceptable level when implementing an ISMS. In this paper we present the ADAMANT framework, which increases automation in information security management as a whole by establishing a continuous, risk-driven and context-aware ISMS that not only automates security controls but considers all highly interconnected information security management tasks. We further illustrate how ADAMANT is suited to establishing an ISO 27001-compliant ISMS for small and medium-sized enterprises, and how not only the monitoring of security controls but a majority of ISMS-related activities can be supported through automated process execution and workflow enactment.
Data outsourcing in the cloud is emerging as a successful paradigm that benefits organizations and enterprises with high-performance, low-cost, scalable data storage and sharing services. However, this paradigm also brings new challenges for data confidentiality, because the outsourced data are no longer under the physical control of the data owners. Existing schemes that pursue both security and usability usually encrypt the data before outsourcing them to the storage service provider (SSP) and disclose the decryption keys only to authorized users. They cannot ensure the security of data while operating on them in a cloud where the third-party services are usually only semi-trusted, and they require considerable time to process the data. We construct a privacy-preserving data management system with hierarchical access control, called HAC-DMS, which can not only assure security but also save considerable time when updating data in the cloud.
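The abstract does not detail HAC-DMS internals; a common way to realize hierarchical access control over encrypted outsourced data is one-way key derivation, where a holder of a higher-level key can derive descendants' keys but not vice versa. The sketch below is a generic illustration of that idea under assumed labels, not the paper's construction.

```python
import hashlib

def derive_child_key(parent_key: bytes, child_label: str) -> bytes:
    """One-way derivation: a parent can compute child keys, but a
    child cannot recover its parent (SHA-256 preimage resistance)."""
    return hashlib.sha256(parent_key + child_label.encode()).digest()

root = hashlib.sha256(b"data-owner master secret").digest()
dept = derive_child_key(root, "department/finance")   # labels are hypothetical
team = derive_child_key(dept, "team/audit")
# The owner re-derives 'team' on demand; a team member holding only
# 'team' cannot compute 'dept' or 'root', enforcing the hierarchy.
```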
Internet-connected embedded systems have limited capabilities to defend themselves against remote hacking attacks. The potential effects of such attacks, however, can have a significant impact in the context of the Internet of Things, industrial control systems, smart health systems, etc. Embedded systems cannot effectively utilize existing software-based protection mechanisms due to limited processing capabilities and energy resources. We propose a novel hardware-based monitoring technique that can detect if the embedded operating system or any running application deviates from the originally programmed behavior due to an attack. We present an FPGA-based prototype implementation that shows the effectiveness of such a security approach.
Verifying that hardware design implementations adhere to specifications is a time intensive and sometimes intractable problem due to the massive size of the system's state space. Formal methods techniques can be used to prove certain tractable specification properties; however, they are expensive, and often require subject matter experts to develop and solve. Nonetheless, hardware verification is a critical process to ensure security and safety properties are met, and encapsulates problems associated with trust and reliability. For complex designs where coverage of the entire state space is unattainable, prioritizing regions most vulnerable to security or reliability threats would allow efficient allocation of valuable verification resources. Stackelberg security games model interactions between a defender, whose goal is to assign resources to protect a set of targets, and an attacker, who aims to inflict maximum damage on the targets after first observing the defender's strategy. In equilibrium, the defender has an optimal security deployment strategy, given the attacker's best response. We apply this Stackelberg security framework to synthesized hardware implementations using the design's network structure and logic to inform defender valuations and verification costs. The defender's strategy in equilibrium is thus interpreted as a prioritization of the allocation of verification resources in the presence of an adversary. We demonstrate this technique on several open-source synthesized hardware designs.
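In standard Stackelberg security game notation (a generic formulation, not the paper's specific valuations), the defender commits to a coverage vector c over targets T using m verification resources, and the attacker best-responds; R and P denote payoffs on covered and uncovered targets respectively:

```latex
\begin{align*}
U_d(t, c) &= c_t\,R_d(t) + (1 - c_t)\,P_d(t), \\
U_a(t, c) &= c_t\,P_a(t) + (1 - c_t)\,R_a(t), \\
t^{*}(c) &= \operatorname*{arg\,max}_{t \in T}\; U_a(t, c)
  \qquad \text{(attacker's best response)}, \\
\text{defender solves:}\quad
  & \max_{c}\; U_d\big(t^{*}(c),\, c\big)
  \quad \text{s.t. } \sum_{t \in T} c_t \le m,\; 0 \le c_t \le 1.
\end{align*}
```

In the hardware-verification reading, c_t is the fraction of verification effort spent on design region t, and the equilibrium coverage is the prioritization the paper derives.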
Unattended Wireless Sensor Networks (UWSNs) are usually deployed in human-hostile environments. Such architectures raise a data protection challenge for two main reasons. First, sensors have limited performance and memory, so not all cryptographic mechanisms can be applied. Second, measurements cannot be gathered immediately, so they have to be kept inside the devices until a mobile sink comes to collect them. This paper introduces a new method for secure and resilient data protection inside a UWSN. It is based on a lightweight fragmentation scheme that transforms data collected by a sensor into multiple secure fragments distributed over the sensor's neighboring nodes, in such a way that only a certain number of these fragments is required for data recovery. Data security is further reinforced by a dynamic key refreshed after each visit of the mobile sink. Authentication and integrity information is dispersed within the fragments to protect the data from active attacks. Homomorphic properties of the algorithm make it possible to significantly reduce the storage space required inside the nodes. Performance and empirical security evaluation results show that the proposed scheme achieves a good trade-off between performance, data protection and memory occupation.
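The paper's lightweight scheme is not reproduced in the abstract; the sketch below illustrates only the general threshold property it relies on (any k of n fragments recover the data), using textbook Shamir secret sharing over a prime field, which is heavier than what the paper proposes. The field size and parameters are assumptions.

```python
import random  # use the 'secrets' module for real deployments

P = 2**61 - 1  # illustrative prime field, not the paper's parameters

def split(secret: int, n: int, k: int):
    """Split 'secret' into n fragments; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(frags):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(frags):
        num, den = 1, 1
        for j, (xj, _) in enumerate(frags):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

frags = split(123456789, n=5, k=3)   # fragments go to 5 neighbors
assert recover(frags[:3]) == 123456789  # any 3 fragments suffice
```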
Utility networks are part of every nation's critical infrastructure, and their protection is now seen as a high-priority objective. In this paper, we propose a threat awareness architecture for critical infrastructures, which we believe will raise security awareness and increase resilience in utility networks. We first describe an investigation of trends and threats that may pose security risks to utility networks. This was performed on the basis of a viewpoint approach capable of identifying technical and non-technical issues (e.g., human behaviour). Our analysis indicated that utility networks are strongly affected by technological trends, but that humans constitute an important threat to them. This provided evidence and confirmed that the protection of utility networks is a multi-variable problem, and thus requires the examination of information stemming from various viewpoints of a network. To accomplish our objective, we propose a systematic threat awareness architecture in the context of a resilience strategy, which ultimately aims at providing and maintaining an acceptable level of security and safety in critical infrastructures. As a proof of concept, we partially demonstrate the application of the proposed threat awareness architecture via a case study that examines the potential impact of social engineering attacks on a European utility company.
Learning analytics opens up a complex landscape of privacy and policy issues, which, in turn, influence how learning analytics systems and practices are designed. Research and development is governed by regulations for data storage and management, and by research ethics. Consequently, when moving solutions out of the research labs, implementers meet constraints defined in national laws and justified in privacy frameworks. This paper explores how the OECD, APEC and EU privacy frameworks seek to regulate data privacy, with significant implications for the discourse of learning and, ultimately, an impact on the design of tools, architectures and practices that are now on the drawing board. A detailed list of requirements for learning analytics systems is developed, based on the new legal requirements defined in the European General Data Protection Regulation, which will be enforced as European law from 2018. The paper also gives an initial account of how the privacy discourse in Europe, Japan, South Korea and China is developing, and reflects upon the possible impact of the different privacy frameworks on the design of LA privacy solutions in these countries. This research contributes to knowledge of how concerns about privacy and data protection related to educational data can drive a discourse on new approaches to privacy engineering based on the principles of Privacy by Design. For the LAK community, this study represents the first attempt to conceptualise the issues of privacy and learning analytics in a cross-cultural context. The paper concludes with a plan to follow up this research on privacy policies and learning analytics systems development with a new international study.
The revolution in smart devices has had a significant and positive impact on the lives of many people, especially in regard to healthcare. In part, this revolution is attributed to technological advances that enable individuals to wear and use medical devices that monitor their health activities remotely. These smart, wearable medical devices also assist health care providers in monitoring their patients remotely, enabling physicians to respond quickly in the event of emergencies. An ancillary advantage is that health care costs will be reduced, another benefit that, when paired with prompt medical treatment, indicates significant advances in the contemporary management of health care. However, competition among manufacturers drives the complexity of small, smart wearable devices such as ECG and EMG monitors, and this complexity gives rise to further issues such as patient security, privacy, confidentiality, and identity theft. In this paper, we discuss the design and implementation of a hybrid real-time cryptography algorithm to secure lightweight wearable medical devices. The proposed system combines genomic encryption with the deterministic chaos method to provide a fast and secure cryptography algorithm for real-time health monitoring that allows threats to patient confidentiality to be addressed. The proposed algorithm also considers the memory and size limitations of wearable health devices. The experimental results and the encryption analysis indicate that the proposed algorithm provides a high level of security for the remote health monitoring system.
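The hybrid genomic/chaos algorithm itself is not given in the abstract; the toy sketch below shows only the deterministic-chaos half of the idea, deriving a keystream from a logistic map. All parameters are assumptions, and floating-point chaos maps are not a vetted cipher; this is purely illustrative.

```python
def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    """Toy keystream from the logistic map x -> r*x*(1-x).
    Illustrative only: real designs quantize and post-process the
    orbit carefully before using it for encryption."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_crypt(data: bytes, key_seed: float = 0.6180339887) -> bytes:
    ks = logistic_keystream(key_seed, r=3.99, n=len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

reading = b'{"hr": 72, "ecg": [0.1, 0.4, 1.2]}'  # hypothetical sensor record
assert xor_crypt(xor_crypt(reading)) == reading   # XOR keystream is symmetric
```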
As cloud computing becomes increasingly pervasive, it is critical for cloud providers to support basic security controls. Although major cloud providers tout such features, relatively little is known in many cases about their design and implementation. In this paper, we describe several security features in OpenStack, a widely used, open-source cloud computing platform. Our contributions to OpenStack range from key management and storage encryption to guaranteeing the integrity of virtual machine (VM) images prior to boot. We describe the design and implementation of these features in detail and provide a security analysis that enumerates the threats each one mitigates. Our performance evaluation shows that these security features have an acceptable cost, in some cases within the measurement error observed in an operational cloud deployment. Finally, we highlight lessons learned from our real-world development experience contributing these features to OpenStack, as a way to encourage others to transition their research into practice.
Distributed Denial of Service (DDoS) is a sophisticated cyber-attack owing to its variety of types and techniques. The traditional way to mitigate this attack is to deploy dedicated security appliances such as firewalls and load balancers. However, given the limited capacity of the hardware and the potentially high volume of DDoS traffic, such appliances may not be able to defend against all attacks. Cloud-based DDoS protection services were therefore introduced, allowing organizations to redirect their traffic to scrubbing centers in the cloud for filtering. This solution has drawbacks such as privacy violations and latency. More recently, Network Functions Virtualization (NFV) and edge computing have been proposed as new networking service models. In this paper, we design a framework that leverages NFV and edge computing for DDoS mitigation through a two-stage process.
A technique and algorithms for early detection of an attack in progress and subsequent blocking of malicious traffic are proposed. The initial separation of mixed traffic into trustworthy and malicious traffic is carried out using cluster analysis. Newly arriving requests are then classified by different classifiers, trained on the resulting samples and evaluated against developed success criteria.
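The abstract names the two-step pipeline but not the features or models; a minimal scikit-learn sketch of the same idea (cluster unlabeled traffic, then train a classifier on the cluster-derived labels) might look like this, with synthetic features standing in for real traffic measurements:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for traffic features (e.g., request rate and
# payload entropy); real feature engineering is the hard part.
benign = rng.normal([5, 3], 1.0, size=(500, 2))
attack = rng.normal([50, 7], 2.0, size=(500, 2))
X = np.vstack([benign, attack])

# Step 1: unsupervised separation of mixed traffic into two groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: train a classifier on the cluster-derived training sample,
# then score newly arriving requests online.
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict([[48, 6.5], [4, 2.9]]))  # one per cluster, as expected
```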
Cloud computing, often referred to as simply "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet. The cloud is used not only for storing data; the stored data can also be shared by multiple users. Because of this, the integrity of cloud data is subject to doubt. Since it is not always feasible for a user to download all the data just to verify its integrity, the proposed system employs a Third Party Auditor (TPA) to verify the integrity of shared data. During auditing, the shared data is kept private from public verifiers, who are able to verify shared data integrity without downloading or retrieving the entire data file. A group signature is used to preserve the identity privacy of group members from the third party auditor. Privacy preservation ensures that the TPA cannot derive users' data content from the information collected during the auditing process.
Privacy and security have been discussed on many occasions, and in most cases the importance of these two aspects to the information systems domain is emphasized. Research is often carried out on individual information security or privacy measures, either focusing on a particular measure or treating privacy and security as a single subject. However, there has been no attempt to establish a proper method for categorizing objects of protection. Through the review in this paper, we investigate the relationship between privacy and security and break down their respective aspects in order to provide a better understanding of whether a measure or methodology is security-oriented, privacy-oriented, or both. We recommend that further research refine this formulation to support the determination process. As a result, we propose a Privacy-Security Tree (PST) that distinguishes privacy measures from security measures.
In order to support the large volume of transactions and number of users estimated by load demand modeling, a system needs to scale in order to continue to satisfy its required quality attributes. In particular, for systems exposed to the Internet, scaling up may increase the attack surface susceptible to malicious intrusions. A proactive approach based on the concept of Moving Target Defense (MTD) should be considered as a complement to current cybersecurity protection. In this paper, we analyze the scalability of the Self-Cleansing Intrusion Tolerance (SCIT) MTD approach using cloud infrastructure services. By applying the MTD model with continuous rotation and diversity to a multi-node or multi-instance system, we argue that the effectiveness of the approach depends on the share-nothing architecture pattern of the large system. Furthermore, adding more resources to the MTD mechanism can compensate, achieving the desired level of secure availability.
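Under a simple model of SCIT-style rotation (assumed here, not taken from the paper), capacity planning reduces to one formula: if each instance may stay online for an exposure window W before being recycled, and cleansing takes C, then keeping k instances serving at all times needs roughly k(W + C)/W instances.

```python
import math

def scit_pool_size(k_online: int, exposure_s: float, cleanse_s: float) -> int:
    """Instances needed so k are always online while each rotates
    offline for cleansing. Simplified model; assumes share-nothing
    nodes that can be swapped in and out without coordination."""
    return math.ceil(k_online * (exposure_s + cleanse_s) / exposure_s)

# e.g. 10 serving nodes, 60 s exposure window, 30 s cleanse time:
print(scit_pool_size(10, 60, 30))  # -> 15
```

Shrinking the exposure window improves security but inflates the pool size, which is exactly the resource/security trade-off the abstract's last sentence refers to.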
Security and privacy issues in the Internet of Things (IoT) have attracted considerable research attention in recent years. As the relationship between user and server becomes more complicated, existing security solutions might not provide exhaustive security in IoT environments, and novel solutions pose new research challenges; for example, solutions based on symmetric cryptosystems are unsuited to occasions where decryption is only allowed within a specific time range. In this paper, a new scalable one-time file encryption scheme combining reliable cryptographic techniques, named OTFEP, is proposed to satisfy specialized security requirements. One of OTFEP's key features is a mechanism that protects files in the database from arbitrary access by the system manager or third-party auditors. OTFEP uses two different approaches to deal with relatively small files and stream files. Moreover, OTFEP supports good node scalability and a secure key-distribution mechanism. Based on its practical security and performance, OTFEP can be considered for specific IoT devices where one-time file encryption is necessary.
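OTFEP's construction is not given in the abstract; the sketch below shows only the generic one-time-key pattern it builds on, using AES-GCM from the `cryptography` package with a fresh random key per file. Key distribution, the hard part OTFEP addresses, is deliberately out of scope here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_once(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt a file under a fresh key used exactly once."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, standard for GCM
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ct  # 'key' goes to the key-distribution layer

key, nonce, ct = encrypt_once(b"sensor log 2024-01-01")
assert AESGCM(key).decrypt(nonce, ct, None) == b"sensor log 2024-01-01"
```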
This publication presents techniques for addressing insider threats and cryptographic protocols for secure processes dedicated to the information management of strategic data splitting. Strategic data splitting serves enterprise management processes as well as methods of securely storing and managing this type of data. Because strategic data are usually not sufficiently secure and resistant to unauthorized leakage, we propose a new protocol that protects data in different management structures. The presented data splitting techniques concern cryptographic information splitting algorithms, as well as data sharing algorithms that make use of cognitive data analysis techniques. The insider threat techniques concern data reconstruction methods and cognitive data analysis techniques. Systems for semantic analysis and secure information management are used to conceal strategic information about the condition of the enterprise. The new approach, based on cognitive systems, guarantees these security features and makes the management processes more efficient.
Content Security Policy (CSP) is a mechanism designed to prevent the exploitation of XSS, the most common high-risk web application flaw. CSP restricts which scripts can be executed by allowing developers to define valid script sources; an attacker with a content-injection flaw should not be able to force the browser to execute arbitrary malicious scripts. Currently, CSP is commonly used in conjunction with a domain-based script whitelist, where the existence of a single unsafe endpoint in the whitelist effectively removes the value of the policy as a protection against XSS.
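The remedy usually suggested for whitelist weaknesses is a nonce-based policy with 'strict-dynamic'; a minimal sketch, using Flask purely as an example host application, could look like this:

```python
# Illustrative nonce-based CSP instead of a domain whitelist.
import secrets
from flask import Flask, g

app = Flask(__name__)

@app.before_request
def make_nonce():
    g.csp_nonce = secrets.token_urlsafe(16)  # fresh nonce per response

@app.after_request
def set_csp(resp):
    # Only scripts carrying this nonce may run; 'strict-dynamic' lets
    # them load further scripts, so no domain whitelist is needed.
    resp.headers["Content-Security-Policy"] = (
        f"script-src 'nonce-{g.csp_nonce}' 'strict-dynamic'; object-src 'none'"
    )
    return resp
```

Templates then emit `<script nonce="{{ g.csp_nonce }}">` on every legitimate script tag; injected markup cannot guess the nonce, so it never executes.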
Cloud computing is revolutionizing many IT ecosystems by offering scalable computing resources that are easy to configure, use and interconnect. However, this model has always been viewed with some suspicion, as it raises a wide range of security and privacy issues that need to be negotiated. This research focuses on the construction of a trust layer in cloud computing to build a trust relationship between cloud service providers and cloud users. In particular, we address the fact that container-based virtualisation provides weaker isolation than traditional VMs because of the shared use of the OS kernel and system components. We therefore build a trust layer to address this weaker isolation while maintaining the performance and scalability of the approach. This paper has two objectives. First, we propose a security system that protects containers from other guests through the addition of a Role-based Access Control (RBAC) model and the provision of strict data protection and security. Second, we provide a stress test using isolation benchmarking tools to evaluate the isolation of containers in terms of performance.
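The abstract names an RBAC model as the first component; a minimal sketch of an RBAC check follows, where the role names, permission strings, and container operations are all hypothetical, not taken from the paper.

```python
# Hypothetical RBAC check for container operations.
ROLE_PERMS = {
    "tenant_admin": {"container:create", "container:delete", "volume:mount"},
    "tenant_user":  {"container:create"},
    "auditor":      {"container:inspect"},
}

def authorize(roles: set[str], permission: str) -> bool:
    """Allow an operation if any held role grants the permission."""
    return any(permission in ROLE_PERMS.get(r, set()) for r in roles)

assert authorize({"tenant_user"}, "container:create")
assert not authorize({"tenant_user"}, "volume:mount")  # isolation-sensitive op denied
```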
Ransomware is a growing threat that encrypts a user's files and holds the decryption key until a ransom is paid by the victim. This type of malware is responsible for tens of millions of dollars in extortion annually. Worse still, developing new variants is trivial, facilitating the evasion of many antivirus and intrusion detection systems. In this work, we present CryptoDrop, an early-warning detection system that alerts a user during suspicious file activity. Using a set of behavior indicators, CryptoDrop can halt a process that appears to be tampering with a large amount of the user's data. Furthermore, by combining a set of indicators common to ransomware, the system can be parameterized for rapid detection with low false positives. In our experimental analysis, CryptoDrop stops ransomware from executing with a median loss of only 10 files (out of nearly 5,100 available files). Our results show that careful analysis of ransomware behavior can produce an effective detection system that significantly mitigates the amount of victim data loss.
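CryptoDrop's exact indicator set is described in the paper, not the abstract; the sketch below illustrates one widely used signal of ransomware activity, a jump in the Shannon entropy of written file contents, as a stand-in for the combined indicators. The threshold is an assumption.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted output is near 8.0, plain text far lower."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(write_buffer: bytes, threshold: float = 7.5) -> bool:
    # One indicator only; a real detector (like CryptoDrop) fuses
    # several, e.g. file-type changes and bulk rename/delete rates.
    return shannon_entropy(write_buffer) > threshold

print(looks_encrypted(b"Dear diary, nothing happened today." * 20))  # False
print(looks_encrypted(os.urandom(4096)))  # True (random ~ ciphertext)
```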
We present RamCrypt, a solution that allows unmodified Linux processes to work transparently on encrypted data. RamCrypt can be deployed and enabled on a per-process basis without recompiling user-mode applications. In every enabled process, data is stored in cleartext only for the moment it is processed, and otherwise stays encrypted in RAM. In particular, the required encryption keys do not reside in RAM, but are stored in CPU registers only. Hence, RamCrypt effectively thwarts memory disclosure attacks, which grant unauthorized access to process memory, as well as physical attacks such as cold boot and DMA attacks. In its default configuration, RamCrypt exposes at most four memory pages in cleartext at the same time. For the nginx web server serving encrypted HTTPS pages under heavy load, the necessary TLS secret key is hidden 97% of the time.
As non-volatile memory (NVM) technologies are expected to replace DRAM in the near future, new challenges have emerged. For example, NVMs have slow and power-consuming writes, and limited write endurance. In addition, NVMs have a data remanence vulnerability, i.e., they retain data for a long time after being powered off. NVM encryption alleviates the vulnerability, but exacerbates the limited endurance by increasing the number of writes to memory. We observe that, in current systems, a large percentage of main memory writes result from data shredding in operating systems, a process of zeroing out physical pages before mapping them to new processes, in order to protect previous processes' data. In this paper, we propose Silent Shredder, which repurposes initialization vectors used in standard counter mode encryption to completely eliminate the data shredding writes. Silent Shredder also speeds up reading shredded cache lines, and hence reduces power consumption and improves overall performance. To evaluate our design, we run three PowerGraph applications and 26 multi-programmed workloads from the SPEC 2006 suite, on a gem5-based full system simulator. Silent Shredder eliminates an average of 48.6% of the writes in the initialization and graph construction phases. It speeds up main memory reads by 3.3 times, and improves the number of instructions per cycle (IPC) by 6.4% on average. Finally, we discuss several use cases, including virtual machines' data isolation and user-level large data initialization, where Silent Shredder can be used effectively at no extra cost.
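To make the IV-repurposing idea concrete: under counter-mode encryption, a page's plaintext is only recoverable with the exact IV it was encrypted under, so bumping a per-page version counter "shreds" the old contents without writing zeroes. The toy sketch below illustrates this with AES-CTR; the page/IV layout is an assumption, not Silent Shredder's actual design.

```python
# Toy illustration of shredding-by-IV-bump under AES-CTR.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(32)

def page_iv(page_id: int, version: int) -> bytes:
    """16-byte counter block from a (page, version) pair; hypothetical layout."""
    return page_id.to_bytes(8, "big") + version.to_bytes(8, "big")

def crypt(page_id: int, version: int, data: bytes) -> bytes:
    c = Cipher(algorithms.AES(KEY), modes.CTR(page_iv(page_id, version)))
    enc = c.encryptor()
    return enc.update(data) + enc.finalize()

ct = crypt(page_id=7, version=1, data=b"previous process's secrets")
# Bumping the version changes the keystream: reading the old ciphertext
# under version 2 yields garbage, with zero writes to the page itself.
assert crypt(7, 2, ct) != b"previous process's secrets"
```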
The usual approach to security for cloud-hosted applications is strong separation. However, the same data is often used by different applications, particularly given the increase in data-driven (`big data' and IoT) applications. We argue that access control for the cloud should no longer be application-specific but should be data-centric, associated with the data that can flow between applications. Indeed, the data may originate outside cloud services from diverse sources such as medical monitoring, environmental sensing, etc. Information Flow Control (IFC) potentially offers data-centric, system-wide data access control. It has been shown that IFC can be provided at the operating system level as part of a PaaS offering, with an acceptable overhead. In this paper we consider how IFC can be integrated with application-specific access control, transparently to application developers, while building, from simple IFC primitives, access control policies that align with the data management obligations of cloud providers and tenants.
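As a concrete illustration of the "simple IFC primitives" mentioned above (the tag names and API are hypothetical, not the paper's), a secrecy-label flow check can be as small as a subset test:

```python
# Minimal secrecy-only IFC check; labels and entities are hypothetical.
# A flow from source to destination is safe if every secrecy tag on
# the data is also carried by the destination (no declassification).
Label = frozenset[str]

def flow_allowed(source: Label, destination: Label) -> bool:
    return source <= destination  # subset: sink is at least as secret

medical   = frozenset({"medical", "patient-123"})
analytics = frozenset({"medical", "patient-123", "research"})
public    = frozenset()

assert flow_allowed(medical, analytics)   # more restrictive sink: OK
assert not flow_allowed(medical, public)  # would declassify: blocked
```

Richer policies (integrity labels, explicit declassification privileges) layer on top of this check, which is what allows application-specific access control to be expressed in the same framework.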