Biblio
"Moving fast, and breaking things", instead of "being safe and secure", is the credo of the IT industry. However, if we look at the wide societal impact of IT security incidents in the past years, it seems like it is no longer sustainable. Just like in the case of Equifax, people simply forget updates, just like in the case of Maersk, companies do not use sufficient network segmentation. Security certification does not seem to help with this issue. After all, Equifax was IS027001 compliant.In this paper, we take a look at how we handle and (do not) learn from security incidents in IT security. We do this by comparing IT security incidents to early and later aviation safety. We find interesting parallels to early aviation safety, and outline the governance levers that could make the world of IT more secure, which were already successful in making flying the most secure way of transportation.
e-Government is needed to actualize clean, effective, transparent and accountable governance, as well as quality and reliable public services. Its implementation is currently constrained because derivative regulations are missing, one of which is the regulation for e-Government security. To answer this need, this study aims to provide input on a performance management and governance system for e-Government security, with the hope that the control design for e-Government security can be met. The result of this study is an e-Government security governance system built from 28 core models of COBIT 2019, which were selected using CSF and risk. Furthermore, performance management for this governance system consists of capability and maturity levels, which extend the evaluation process in the e-Government Evaluation Guidelines issued by the Ministry of PAN & RB. The design was evaluated by determining the current capability and maturity levels at Badan XYZ. The evaluation shows that the design can be implemented and is needed.
Personal cryptographic keys are the foundation of many secure services, but storing these keys securely is a challenge, especially if they are used from multiple devices. Storing keys in a centralized location, like an Internet-accessible server, raises serious security concerns (e.g. server compromise). Hardware-based Trusted Execution Environments (TEEs) are a well-known solution for protecting sensitive data in untrusted environments, and are now becoming available on commodity server platforms. Although the idea of protecting keys using a server-side TEE is straightforward, in this paper we validate this approach and show that it enables new desirable functionality. We describe the design, implementation, and evaluation of a TEE-based Cloud Key Store (CKS), an online service for securely generating, storing, and using personal cryptographic keys. Using remote attestation, users receive strong assurance about the behaviour of the CKS, and can authenticate themselves using passwords while avoiding typical risks of password-based authentication like password theft or phishing. In addition, this design allows users to i) define policy-based access controls for keys; ii) delegate keys to other CKS users for a specified time and/or a limited number of uses; and iii) audit all key usages via a secure audit log. We have implemented a proof of concept CKS using Intel SGX and integrated this into GnuPG on Linux and OpenKeychain on Android. Our CKS implementation performs approximately 6,000 signature operations per second on a single desktop PC. The latency is in the same order of magnitude as using locally-stored keys, and 20x faster than smart cards.
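As a purely illustrative aside, the client-side flow described in this abstract (verify the enclave via remote attestation, authenticate with a password, then request signatures from keys that never leave the TEE) might look roughly like the local Python sketch below. Every name here (verify_attestation, derive_auth_token, CloudKeyStoreClient) and the HMAC stand-in for enclave signing are assumptions for illustration, not the paper's protocol or the SGX API.

```python
# Minimal client-side sketch of a TEE-backed cloud key store; all names and
# the local HMAC "signature" are illustrative assumptions, not the real CKS.
import hashlib
import hmac


def verify_attestation(quote: bytes, expected_measurement: bytes) -> bool:
    # Stand-in for remote attestation: accept the enclave only if the
    # reported code measurement matches the expected one.
    return hmac.compare_digest(quote, expected_measurement)


def derive_auth_token(password: str, salt: bytes) -> bytes:
    # The password never leaves the client in the clear; only a derived
    # verifier would be sent to the attested enclave (illustrative KDF).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


class CloudKeyStoreClient:
    def __init__(self, expected_measurement: bytes):
        self.expected_measurement = expected_measurement
        self.token = None

    def open_session(self, quote: bytes, password: str, salt: bytes) -> None:
        if not verify_attestation(quote, self.expected_measurement):
            raise RuntimeError("enclave attestation failed")
        self.token = derive_auth_token(password, salt)

    def sign(self, key_id: str, message: bytes) -> bytes:
        # In the real system the private key stays inside the enclave and the
        # signature is produced there; this HMAC merely stands in for that call.
        if self.token is None:
            raise RuntimeError("no authenticated session")
        return hmac.new(self.token, key_id.encode() + message, hashlib.sha256).digest()


client = CloudKeyStoreClient(expected_measurement=b"enclave-measurement")
client.open_session(b"enclave-measurement", "correct horse battery staple", b"salt")
print(client.sign("pgp-main", b"hello").hex())
```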
The collection of students' sensitive data raises adverse reactions against Learning Analytics, which decreases confidence in its adoption. The laws and policies that surround the use of educational data are not enough to ensure privacy, security, validity, integrity and reliability of students' data. This problem has been identified through a literature review and can be addressed if a technological layer of automated checking rules is added on top of these policies. The aim of this thesis is to research an emerging technology, blockchain, to preserve the identity of students and secure their data. In a first stage, a systematic literature review will be conducted in order to set the context of the research. Afterwards, following the scientific method, we will develop a blockchain-based solution to automate rules and constraints, with the aim of giving students governance over their data and of ensuring data privacy and security.
As a frequent participant in the eSociety, Willeke is often preoccupied with paperwork because there is no easy-to-use, affordable way to act as a qualified person in the digital world. Confidential interactions take place over insecure channels like e-mail and post. This situation poses risks and costs for service providers, citizens and governments, while goals regarding confidentiality and privacy are not always met. The objective of this paper is to demonstrate an alternative architecture in which identifying persons, exchanging information, authorizing external parties and signing documents become more user-friendly and secure. As a starting point, each person has a personal data space, provided by a qualified trust service provider that also issues a high-level-of-assurance electronic ID. Three main building blocks are required: (1) secure exchange between the personal data spaces of the persons involved, (2) coordination functionalities provided by a token-based infrastructure, and (3) governance over this infrastructure. Following the design science research approach, we developed prototypes of the building blocks that we will pilot in practice. Policy makers and practitioners who want to enable Willeke to get rid of her paperwork can find guidance throughout this paper and are welcome to join the pilots in the Netherlands.
Attribute-based access control (ABAC) is a general access control model that subsumes numerous earlier access control models. Its increasing popularity stems from the intuitive generic structure of granting permissions based on application and domain attributes of users, subjects, objects, and other entities in the system. Multiple formal and informal languages have been developed to express policies in terms of such attributes. The utility of ABAC policy languages is potentially undermined without a properly formalized underlying model. The high-level structure in a majority of ABAC models consists of sets of tokens and sets of sets, expressions that demand that the reader unpack multiple levels of sets and tokens to determine what things mean. The resulting reduced readability potentially endangers correct expression, reduces maintainability, and impedes validation. These problems could be magnified in models that employ nonuniform representations of actions and their governing policies. We propose to avoid these magnified problems by recasting the high-level structure of ABAC models in a logical formalism that treats all actions (by users and others) uniformly and that keeps existing policy languages in place by interpreting their attributes in terms of the restructured model. In comparison to existing ABAC models, use of a logical language for model formalization, including hierarchies of types of entities and attributes, promises improved expressiveness in specifying the relationships between and requirements on application and domain attributes. A logical modeling language also potentially improves flexibility in representing relationships as attributes to support some widely used policy languages. Consistency and intelligibility are improved by using uniform means for representing different types of controlled actions—such as regular access control actions, administrative actions, and user logins—and their governing policies. Logical languages also provide a well-defined denotational semantics supported by numerous formal inference and verification tools.
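To make the idea of treating all actions uniformly concrete, the following toy Python sketch expresses several kinds of controlled actions (a regular read, an administrative role assignment, a login) as one predicate over entity attributes. The entity structure, attribute names and rules are invented for illustration and are not the paper's logical formalism.

```python
# Toy illustration of uniform, attribute-based authorisation decisions;
# all attribute names and rules are invented, not the paper's model.
from dataclasses import dataclass, field


@dataclass
class Entity:
    attrs: dict = field(default_factory=dict)


def permitted(user: Entity, target: Entity, action: str) -> bool:
    # One uniform rule shape for regular accesses, administrative actions
    # and logins alike: a boolean condition over the entities' attributes.
    if action == "read":
        return user.attrs.get("clearance", 0) >= target.attrs.get("sensitivity", 0)
    if action == "assign_role":
        return "security_admin" in user.attrs.get("roles", set())
    if action == "login":
        return user.attrs.get("account_enabled", False)
    return False


alice = Entity({"clearance": 3, "roles": {"analyst"}, "account_enabled": True})
report = Entity({"sensitivity": 2})
print(permitted(alice, report, "read"))        # True
print(permitted(alice, alice, "assign_role"))  # False: not a security_admin
```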
Interoperability has become a crucial value in European e-government developments, as promoted by the Digital Single Market strategy and the Tallinn Declaration. The European Union and its Member States have made considerable investments in improving the understanding of interoperability and in developing interoperable building blocks to support cross-border data exchange and public service provisioning. This includes recent updates of the European Interoperability Framework (EIF) and European Interoperability Reference Architecture (EIRA), as well as the publication of a number of generic and domain-specific architecture and solution building blocks such as digital identification or electronic delivery services. While interoperability governance was not clearly developed in the previous version of the EIF, the new version of 2017 frames interoperability governance as a concept that spans the different interoperability layers (legal, organizational, semantic and technical) and that builds the frame for interoperability overall. In this paper, we develop a definition of interoperability governance from a literature review and put forward a model to investigate interoperability governance at European and Member State levels. Based on several case studies of EU institutions and Member States, we derive recommendations on the key aspects of interoperability governance for successfully diffusing interoperability into public service provisioning.
If, as most experts agree, the mathematical basis of major blockchain systems is (probably if not provably) sound, why do they have a bad reputation? Human misbehavior (such as failed Bitcoin exchanges) accounts for some of the issues, but there are also deeper and more interesting vulnerabilities here. These include design faults and code-level implementation defects, ecosystem issues (such as wallets), as well as approaches such as the "51% attack", all of which can compromise the integrity of blockchain systems. With particular attention to the emerging non-financial applications of blockchain technology, this paper demonstrates the kinds of attacks that are possible and provides suggestions for minimizing the risks involved.
The cloud computing paradigm enables enterprises to realise significant cost savings whilst boosting their agility and productivity. However, security and privacy concerns generally deter enterprises from migrating their critical data to the cloud. One way to alleviate these concerns, hence bolster the adoption of cloud computing, is to devise adequate security policies that control the manner in which these data are stored and accessed in the cloud. Nevertheless, for enterprises to entrust these policies, a framework capable of providing assurances about their correctness is required. This work proposes such a framework. In particular, it proposes an approach that enables enterprises to define their own view of what constitutes a correct policy through the formulation of an appropriate set of well-formedness constraints. These constraints are expressed ontologically, thus enabling, by virtue of semantic inferencing, automated reasoning about their satisfaction by the policies.
By embracing the cloud computing paradigm, enterprises are able to boost their agility and productivity whilst realising significant cost savings. However, many enterprises are reluctant to adopt cloud services for supporting their critical operations due to security and privacy concerns. One way to alleviate these concerns is to devise policies that infuse suitable security controls in cloud services. This work proposes a class of ontologically-expressed rules, the so-called axiomatic rules, that aim at ensuring the correctness of these policies by harnessing the various knowledge artefacts that they embody. It also articulates an adequate framework for the expression of policies, one that provides ontological templates for modelling the knowledge artefacts encoded in the policies; these templates form the basis for the proposed axiomatic rules.
Mobile application offloading, with the purpose of extending battery lifetime and increasing performance, has been intensively discussed recently, resulting in a variety of solutions: mobile device clones operated as virtual machines in the cloud, applications running simultaneously on the mobile device and on a distant server, as well as flexible solutions that dynamically acquire the resources of other mobile devices in the user's surroundings. Existing solutions have gaps in the areas of data security and application security. These gaps can be closed by integrating data usage policies as well as application-flow policies. In this paper, we propose and evaluate a novel approach for integrating XACML into existing mobile application offloading frameworks. Data owners remain in full control of their data, while technologies like device-to-device offloading can still be used.
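A rough sketch of such a pre-offloading policy check, in the spirit of XACML's permit/deny decisions, is shown below; the data labels, target attributes and rules are illustrative assumptions rather than the paper's actual data usage or application-flow policies.

```python
# Toy pre-offloading check in the spirit of XACML permit/deny decisions;
# labels, target attributes and rules are invented for illustration.
def offloading_decision(data_labels: set, target: dict) -> str:
    """Return 'Permit' or 'Deny' for offloading a task and its data to a target."""
    if "confidential" in data_labels and not target.get("trusted", False):
        return "Deny"
    if target.get("type") == "peer-device" and "personal" in data_labels:
        return "Deny"  # keep personal data off nearby peer devices
    return "Permit"


print(offloading_decision({"confidential"}, {"type": "cloud-clone", "trusted": False}))  # Deny
print(offloading_decision({"public"}, {"type": "peer-device"}))                          # Permit
```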
In the presence of known and unknown vulnerabilities in program code and control flow, virtual-machine-like isolation and sandboxing, which confine a malicious process by monitoring and controlling the behaviour of the untrusted application, are an effective strategy. A confined malicious application cannot affect system resources or other applications running on the same operating system. However, present sandboxing techniques have drawbacks ranging from scope to methodology. Some of the proposed techniques restrict only specific aspects of execution, e.g. system calls and file-system access. Likewise, techniques that truly isolate the application by providing a separate execution environment require either kernel modifications or a full-blown operating system. Moreover, these do not provide isolation from top to bottom but only virtualize operating system services. In this paper, we propose a design that confines a native Linux process with virtual-machine-equivalent isolation by using hardware virtualization extensions, with nominal initialization and acceptable execution overheads. We implemented a prototype, called Process Virtual Machine, that transitions a native process into a virtual machine, provides the minimal possible execution environment, and intercepts and virtualizes system calls so that they are executed on the host kernel. Experimental results show the effectiveness of our proposed technique.
The Internet of Things is gaining research attention as one of the important fields that will vastly affect our daily life. This revolutionary technology is growing and evolving around us day by day. It offers benefits like automatic processing, improved logistics and device communication that can help us improve our social life, health, living standards and infrastructure. However, due to their simple, low-end architecture and their presence in a wide variety of fields, IoT network devices raise serious security concerns. In this paper, we address this security issue by proposing a JavaScript sandbox as a method to execute IoT programs. Using this sandbox, we also implement a strategy to control execution while a program runs inside it.
In a world where the number of highly skilled actors involved in cyber-attacks is constantly increasing and where the associated underground market continues to expand, organizations should adapt their defence strategy and consequently improve their security incident management. In this paper, we give an overview of the Advanced Persistent Threat (APT) attack life cycle as defined by security experts. We introduce our own compiled life-cycle model, guided by attackers' objectives instead of their actions. Challenges and opportunities related to the specific camouflage actions performed at the end of each APT phase of the model are highlighted. We also give an overview of new APT protection technologies and discuss their effectiveness at each of the life-cycle phases.
Phishing is a form of online identity theft that deceives unaware users into disclosing their confidential information. While significant effort has been devoted to the mitigation of phishing attacks, much less is known about the entire life cycle of these attacks in the wild, yet such knowledge is a key step toward devising comprehensive anti-phishing techniques. In this paper, we present a novel approach to sandbox live phishing kits that completely protects the privacy of victims. Using this technique, we perform a comprehensive real-world assessment of phishing attacks, their mechanisms, and the behavior of the criminals, their victims, and the security community involved in the process, based on data collected over a period of five months. Our infrastructure allowed us to draw the first comprehensive picture of a phishing attack, from the time the attacker installs and tests the phishing pages on a compromised host until the last interaction with real victims and with security researchers. Our study presents accurate measurements of the duration and effectiveness of this popular threat, and discusses many new and interesting aspects we observed by monitoring hundreds of phishing campaigns.
Separation of network control from devices in a Software Defined Network (SDN) allows for centralized implementation and management of security policies in a cloud computing environment. The ease of programmability also makes SDN a great platform for implementing various initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. Dynamic changes of network topology, or host reconfiguration in such networks, might require corresponding changes to the flow rules in the SDN-based cloud environment. Verifying that these new flow policies adhere to the organizational security policies, and ensuring a conflict-free environment, is especially challenging. In this paper, we extend the work on rule conflicts from a traditional environment to an SDN environment, introducing a new classification to describe cross-layer conflicts. Our framework ensures that in any SDN-based cloud, flow rules do not have conflicts at any layer, thereby ensuring that changes to the environment do not lead to unintended consequences. We demonstrate the correctness, feasibility and scalability of our framework through a proof-of-concept prototype.
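As a simplified illustration of the kind of check such a framework must perform, the toy Python sketch below flags two flow rules as potentially conflicting when their match fields overlap but their actions differ; the rule format and wildcard semantics are assumptions, not the paper's model.

```python
# Toy conflict check between SDN flow rules: overlapping match space plus
# differing actions is flagged; rule structure is a simplified assumption.
def overlaps(field_a: str, field_b: str) -> bool:
    # A '*' wildcard overlaps anything; otherwise values must be equal.
    return field_a == "*" or field_b == "*" or field_a == field_b


def conflicting(rule_a: dict, rule_b: dict) -> bool:
    same_space = all(
        overlaps(rule_a["match"].get(k, "*"), rule_b["match"].get(k, "*"))
        for k in set(rule_a["match"]) | set(rule_b["match"])
    )
    return same_space and rule_a["action"] != rule_b["action"]


tenant_rule = {"match": {"dst": "10.0.0.5", "port": "80"}, "action": "forward"}
security_rule = {"match": {"dst": "10.0.0.5", "port": "*"}, "action": "drop"}
print(conflicting(tenant_rule, security_rule))  # True: cross-layer conflict candidate
```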
Legacy work on correcting firewall anomalies operates on the premise of creating totally disjunctive rules. Unfortunately, such solutions are impractical from an implementation point of view, as they lead to an explosion of the number of firewall rules. In a related previous work, we proposed a new approach for performing assisted corrective actions which, in contrast to the state-of-the-art family of radically disjunctive approaches, does not lead to a prohibitive increase of the configuration size. In this sense, we allow relaxation in the correction process by clearly distinguishing between constructive anomalies that can be tolerated and destructive anomalies that should be systematically fixed. However, a main disadvantage of that approach was its dependency on guided input from the administrator, which in turn introduces a new risk of human error. To circumvent this disadvantage, we present in this paper a Firewall Policy Query Engine (FPQE) that renders the whole process of anomaly resolution fully automated, without requiring any human intervention. Instead of prompting the administrator to insert corrective actions in the proper order, FPQE executes queries against a high-level firewall policy. We have implemented FPQE, and the first results of integrating it with our legacy anomaly resolver are promising.
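For illustration only, the following Python sketch shows one classic destructive anomaly, shadowing, in which an earlier rule matching a superset of a later rule's traffic with a different action makes the later rule unreachable; the rule representation is a deliberate simplification, not FPQE's policy model.

```python
# Toy shadowing check: a rule is shadowed if an earlier rule covers all of its
# traffic with a different action. Rule format is an illustrative simplification.
def covers(field_a: str, field_b: str) -> bool:
    # '*' covers anything; otherwise the values must be identical.
    return field_a == "*" or field_a == field_b


def shadowed(rules: list, index: int) -> bool:
    later = rules[index]
    for earlier in rules[:index]:
        covered = all(
            covers(earlier["match"].get(k, "*"), later["match"].get(k, "*"))
            for k in later["match"]
        )
        if covered and earlier["action"] != later["action"]:
            return True
    return False


rules = [
    {"match": {"src": "*", "dst": "10.0.0.0/24"}, "action": "deny"},
    {"match": {"src": "192.168.1.10", "dst": "10.0.0.0/24"}, "action": "allow"},
]
print(shadowed(rules, 1))  # True: rule 1 can never take effect
```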
Federated cloud networks are formed by federating virtual network segments from different clouds, e.g. in a hybrid cloud, into a single federated network. Such networks should be protected with a global federated cloud network security policy. The availability of network function virtualisation and service function chaining in cloud platforms offers an opportunity for implementing and enforcing such global policies. In this paper, we describe an approach for enforcing global security policies in federated cloud networks. The approach relies on a service manifest that specifies the global network security policy; from this manifest, configurations of the security functions for the different clouds of the federation are generated. This enables automated deployment and configuration of network security functions across the different clouds. The approach is illustrated with a case study in which communications between trusted and untrusted clouds, e.g. public clouds, are encrypted. The paper discusses future work on implementing this architecture for the OpenStack cloud platform with the service function chaining API.
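A minimal sketch of the manifest-to-configuration step, under the assumption of an invented manifest format, might look as follows; the field names and the generated settings are illustrative only, not the paper's service manifest.

```python
# Illustrative generation of per-cloud security-function configuration from a
# single federated-network manifest; all field names are invented assumptions.
manifest = {
    "policy": "encrypt-untrusted",
    "segments": [
        {"cloud": "private-openstack", "trusted": True},
        {"cloud": "public-cloud-a", "trusted": False},
    ],
}


def generate_configs(manifest: dict) -> dict:
    """Derive one security-function configuration per cloud in the federation."""
    configs = {}
    for seg in manifest["segments"]:
        configs[seg["cloud"]] = {
            # Encrypt traffic to and from untrusted (e.g. public) clouds only.
            "ipsec": not seg["trusted"],
            "firewall": "default-deny",
        }
    return configs


print(generate_configs(manifest))
```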
In modern enterprises, incorrect or inconsistent security policies can lead to massive damage, e.g., through unintended data leakage. As policy authors have different skills and background knowledge, usable policy editors have to be tailored to the author's individual needs and to the corresponding application domain. However, developing individual policy editors and customizing existing ones is a labour-intensive task. In this paper, we present a framework for generating tailored policy editors. In order to enable user-friendly and less error-prone specification of security policies, the framework supports multiple platforms, policy languages, and specification paradigms.
Content Security Policy (CSP) is a mechanism designed to prevent the exploitation of XSS, the most common high-risk web application flaw. CSP restricts which scripts can be executed by allowing developers to define valid script sources; an attacker with a content-injection flaw should not be able to force the browser to execute arbitrary malicious scripts. Currently, CSP is commonly used in conjunction with a domain-based script whitelist, where the existence of a single unsafe endpoint in the whitelist effectively removes the value of the policy as a protection against XSS.
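For illustration, the snippet below builds a whitelist-based CSP header of the kind criticised here; the host names are hypothetical, and the comment marks where a single unsafe endpoint undermines the whole policy.

```python
# Illustrative whitelist-based CSP response header; host names are hypothetical.
csp_whitelist = (
    "script-src 'self' https://cdn.example.com https://legacy.example.org"
    # If https://legacy.example.org also serves e.g. a JSONP endpoint or
    # user-uploaded files, an attacker with a content-injection flaw can
    # still execute script despite this policy.
)
headers = {"Content-Security-Policy": csp_whitelist}
print(headers)
```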
At its core, security is a highly contextual and dynamic challenge. However, current security policy approaches are usually static and slow to adapt to ever-changing requirements, let alone catch up with reality. A 2012 Sophos survey stated that a unique piece of malware is created every half second. This gives a glimpse of the unsustainable nature of a global problem; any improvement in closing the "time window to adapt" would be a significant step forward. To exacerbate the situation, a simple change in threat or attack vector, or even the adoption of the so-called "bring-your-own-device" paradigm, greatly increases how often security requirements change and how often new solutions are required for each new context. Current security policies also typically overlook the direct and indirect costs of implementing policies. As a result, technical teams often cannot justify the budget to management from a business risk viewpoint. This paper considers both the adaptive and the cost-benefit aspects of security, and introduces a novel context-aware technique for designing and implementing adaptive, optimized security policies. Our approach leverages the capabilities of stochastic programming models to optimize security policy planning, and our preliminary results demonstrate a promising step towards proactive, context-aware security policies.
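As a hedged illustration of what stochastic programming for policy planning can mean, a generic two-stage formulation (not the paper's exact model) is sketched below: first-stage policy choices trade off against the expected cost of adapting once the threat context is revealed.

```latex
% Generic two-stage stochastic program (illustrative, not the paper's model):
% x = first-stage security-policy choices, \xi = random threat/context scenario,
% Q(x,\xi) = cost of adapting the policy once the scenario is revealed.
\min_{x \in \{0,1\}^n} \; c^{\top} x \;+\; \mathbb{E}_{\xi}\big[ Q(x,\xi) \big]
\qquad \text{where} \qquad
Q(x,\xi) \;=\; \min_{y \ge 0} \Big\{ q(\xi)^{\top} y \;:\; W y \ge h(\xi) - T(\xi)\, x \Big\}
```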
Institutions use the information security (InfoSec) policy document as a set of rules and guidelines to govern the use of institutional information resources. However, a common problem is that these policies are often not followed or complied with. This study explores the extent to which the problem lies with the policy documents themselves. InfoSec policies are documented in natural language, which is prone to ambiguity and misinterpretation. Consequently, such policies may themselves be ambiguous, making it hard, if not impossible, for users to comply with them. A case study approach with content analysis was conducted. The research explores the extent of the problem using a case study of an educational institution in South Africa.