Biblio
This work considers the trade-off between security and performance that arises when partial information about encrypted data is revealed during computation. The focus of our work is on information revealed through control-flow side channels when executing programs on encrypted data. We use quantitative information flow to measure security, running time to measure performance, and program transformation techniques to alter the trade-off between the two. Combined with information flow policies, we perform a policy-aware security and performance trade-off (PASAPTO) analysis. We formalize the problem of PASAPTO analysis as an optimization problem, prove the NP-hardness of the corresponding decision problem, and present two algorithms that solve it heuristically. We implemented our algorithms and combined them with the Dataflow Authentication (DFAuth) approach for outsourcing sensitive computations. Our DFAuth Trade-off Analyzer (DFATA) takes Java bytecode operating on plaintext data and an associated information flow policy as input. It outputs semantically equivalent program variants operating on encrypted data which are policy-compliant and approximately Pareto-optimal with respect to leakage and performance. We evaluated DFATA in a commercial cloud environment using Java programs, e.g., a decision tree program performing machine learning on medical data. The decision tree variant with the worst performance is 357% slower than the fastest variant. Leakage varies between 0% and 17% of the input.
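To make the leakage/performance trade-off concrete, the following is a minimal sketch, not the DFATA implementation, of how approximately Pareto-optimal variants could be selected once each variant carries a leakage and a running-time estimate; the Variant record and the sample numbers are illustrative assumptions.

```java
// Hypothetical sketch: keep only program variants that are not dominated
// in the (leakage, running time) space. Not the DFATA algorithm.
import java.util.ArrayList;
import java.util.List;

public class ParetoSketch {
    record Variant(String name, double leakageBits, double runtimeMs) {}

    // a dominates b if a is no worse in both dimensions and strictly better in one.
    static boolean dominates(Variant a, Variant b) {
        return a.leakageBits() <= b.leakageBits() && a.runtimeMs() <= b.runtimeMs()
                && (a.leakageBits() < b.leakageBits() || a.runtimeMs() < b.runtimeMs());
    }

    static List<Variant> paretoFront(List<Variant> variants) {
        List<Variant> front = new ArrayList<>();
        for (Variant v : variants) {
            boolean dominated = variants.stream().anyMatch(w -> w != v && dominates(w, v));
            if (!dominated) front.add(v);
        }
        return front;
    }

    public static void main(String[] args) {
        List<Variant> variants = List.of(
                new Variant("v1", 0.0, 457.0),  // no leakage, slowest
                new Variant("v2", 2.5, 180.0),
                new Variant("v3", 2.5, 210.0),  // dominated by v2
                new Variant("v4", 8.0, 128.0)); // most leakage, fastest
        paretoFront(variants).forEach(System.out::println); // v1, v2, v4 remain
    }
}
```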
Internet Service Providers (ISPs) have an economic and operational interest in detecting malicious network activity relating to their subscribers. However, it is unclear what kind of traffic data an ISP has available for cyber-security research, and under which legal conditions it can be used. This paper gives an overview of the challenges posed by legislation and of the data sources available to a European ISP. DNS and NetFlow logs are identified as relevant data sources and the state of the art in anonymization and fingerprinting techniques is discussed. Based on legislation, data availability and privacy considerations, a practically applicable anonymization policy is presented.
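As one concrete illustration of such an anonymization step, the sketch below truncates the host part of IPv4 addresses before DNS or NetFlow records are used for research; the specific rule (keeping the /24 prefix) is an assumption chosen for illustration, not the policy proposed in the paper.

```java
// Illustrative only: drop the last IPv4 octet so individual subscribers
// are not directly identifiable, while the /24 prefix stays usable for research.
public class AnonymizeSketch {
    static String truncateIpv4(String ip) {
        int lastDot = ip.lastIndexOf('.');
        return ip.substring(0, lastDot) + ".0";
    }

    public static void main(String[] args) {
        System.out.println(truncateIpv4("203.0.113.42")); // prints 203.0.113.0
    }
}
```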
A common way to characterize security enforcement mechanisms is based on the time at which they operate. Mechanisms operating before a program's execution are static mechanisms, and mechanisms operating during a program's execution are dynamic mechanisms. This paper introduces a different perspective and classifies mechanisms based on the granularity of program code that they monitor. Classifying mechanisms in this way provides a unified view of security mechanisms and shows that all security mechanisms can be encoded as dynamic mechanisms that operate at different levels of program code granularity. The practicality of the approach is demonstrated through a prototype implementation of a framework for enforcing security policies at various levels of code granularity on Java bytecode applications.
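The following is a minimal sketch, under assumed names such as CodeUnit and Granularity, of what a dynamic check parameterized by code granularity might look like; the paper's framework instruments Java bytecode rather than calling a monitor explicitly, so this only illustrates the underlying idea.

```java
// Hypothetical monitor invoked by instrumented code before executing a unit
// of a chosen granularity (instruction, basic block, method, ...).
import java.util.function.Predicate;

public class GranularityMonitorSketch {
    enum Granularity { INSTRUCTION, BASIC_BLOCK, METHOD }

    record CodeUnit(Granularity granularity, String description) {}

    // Example policy: forbid executing units that delete files.
    static final Predicate<CodeUnit> POLICY =
            unit -> !unit.description().contains("java.io.File.delete");

    static void check(CodeUnit unit) {
        if (!POLICY.test(unit)) {
            throw new SecurityException("Policy violation at " + unit.granularity()
                    + ": " + unit.description());
        }
    }

    public static void main(String[] args) {
        check(new CodeUnit(Granularity.METHOD, "java.lang.String.length")); // allowed
        try {
            check(new CodeUnit(Granularity.METHOD, "java.io.File.delete")); // rejected
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
    }
}
```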
Language-based information flow control (IFC) aims to provide guarantees about information propagation in computer systems with multiple security levels. Existing IFC systems extend Denning's lattice model, enforcing transitive security policies by tracking information flows over a partially ordered set of security levels. They yield a transitive noninterference property of either confidentiality or integrity. In this paper, we explore IFC for security policies that are not necessarily transitive. Such nontransitive security policies avoid unwanted or unexpected information flows implied by transitive policies and naturally accommodate high-level, coarse-grained security requirements in modern component-based software. We present a novel security type system for enforcing nontransitive security policies. Unlike traditional security type systems that verify information propagation by subtyping security levels of a transitive policy, our type system relaxes strong transitivity by inferring the information flow history through security levels and ensuring that it respects the nontransitive policy in effect. Such a type system yields a new nontransitive noninterference property that offers more flexible information flow relations induced by security policies that do not have to be transitive, thereby generalizing conventional transitive noninterference. This enables us to directly reason about the extent of information flows in the program and to restrict interactions between security-sensitive and untrusted components.
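A minimal sketch of the underlying idea, with made-up levels and an assumed string encoding of the policy: instead of relying on a lattice ordering, the history of levels a value has flowed through is checked pairwise against an explicitly allowed, not necessarily transitive, flow relation.

```java
// Illustrative nontransitive check: A->B and B->C are allowed,
// but A->C is not implied and is rejected.
import java.util.Set;

public class NontransitiveFlowSketch {
    static final Set<String> ALLOWED = Set.of("A->B", "B->C");

    // Every level in the value's flow history must be allowed to flow to the sink.
    static boolean flowAllowed(Set<String> history, String sink) {
        return history.stream()
                .allMatch(src -> src.equals(sink) || ALLOWED.contains(src + "->" + sink));
    }

    public static void main(String[] args) {
        Set<String> history = Set.of("A", "B"); // the value passed through A and B
        System.out.println(flowAllowed(history, "B")); // true
        System.out.println(flowAllowed(history, "C")); // false: A->C is not allowed
    }
}
```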
Cloud, Software-Defined Networking (SDN), and Network Function Virtualization (NFV) technologies have introduced a new era of cybersecurity threats and challenges. To protect cloud infrastructure, in our earlier work we proposed the Software Defined Security Service (SDS2) to tackle security challenges centered around a new policy-based interaction model. The security architecture consists of three main components: a Security Controller, Virtual Security Functions (VSF), and a Sec-Manage Protocol. The architecture requires an agile, dedicated protocol to transfer interaction parameters and security messages between its components, whereas OpenFlow is mainly regarded as a network routing protocol. The Sec-Manage protocol has therefore been designed specifically for exchanging policy-based interaction parameters among cloud entities, between the Security Controller and its VSFs. This paper focuses on the design and the implementation of the Sec-Manage protocol and demonstrates its use in setting, monitoring, and conveying relevant policy-based interaction security parameters.
The problem of optimizing the security policy of a composite information system is formulated. A subject-object model of the information system is used. The combination of different types of security policies is formalized, and the objective function for the optimization task is defined. The optimization problem of combining two discretionary security policies is solved, and the case of combining two mandatory security policies is studied. The main problems in optimizing the composite security policy are formulated.
To address the requirements of network access control, control of unauthorized outbound connections, identity authentication, security monitoring, and application system access control in information networks, an integrated network access and behavior control model based on security policy is established. In this model, the network access and behavior management control process is implemented through abstract policy configuration, network devices, and application servers, so that management gains a device-independent abstraction and becomes simpler, more flexible, and more automated. On this basis, a general framework for policy-based access and behavior management control is established. Finally, an example illustrates device connection, data driving, and fusion based on policy-based network access and behavior management control.
Avoiding security vulnerabilities is very important for embedded systems. Dynamic Information Flow Tracking (DIFT) is a powerful technique for analyzing software with respect to security policies in order to protect the system against a broad range of security-related exploits. However, existing DIFT approaches are either unavailable for Virtual Prototypes (VPs) or fail to model complex hardware/software interactions. In this paper, we present a novel approach that enables early and accurate DIFT of binaries targeting embedded systems with custom peripherals. Leveraging the SystemC framework, our DIFT engine tracks accurate data flow information alongside the program execution to detect violations of security policies at run time. We demonstrate the effectiveness and applicability of our approach through extensive experiments.
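The essence of DIFT can be illustrated with a small sketch: values carry taint tags, tags propagate through operations, and a policy check fires when tainted data reaches a sensitive sink. This is an illustrative model only; the SystemC/VP integration and the binary-level tracking described above are not captured here.

```java
// Toy DIFT: taint propagates through arithmetic, and a sink rejects tainted values.
public class DiftSketch {
    record Tainted(int value, boolean tainted) {
        Tainted plus(Tainted other) {
            // The result is tainted if either operand is tainted.
            return new Tainted(value + other.value, tainted || other.tainted);
        }
    }

    // Policy: attacker-controlled (tainted) data must not reach this sink,
    // e.g., a control register of a custom peripheral.
    static void sink(Tainted v) {
        if (v.tainted()) throw new SecurityException("DIFT policy violation at sink");
    }

    public static void main(String[] args) {
        Tainted untrustedInput = new Tainted(42, true); // from an untrusted source
        Tainted constant = new Tainted(1, false);
        sink(constant); // fine
        try {
            sink(untrustedInput.plus(constant)); // violation detected at run time
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
    }
}
```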
Enterprise networks are increasingly moving towards Software Defined Networking, which is becoming a major trend in the networking arena. With the increased popularity of SDN, there is a greater need for security measures for protecting the enterprise networks. This paper focuses on the design and implementation of an integrated security architecture for SDN based enterprise networks. The integrated security architecture uses a policy-based approach to coordinate different security mechanisms to detect and counteract a range of security attacks in the SDN. A distinguishing characteristic of the proposed architecture is its ability to deal with dynamic changes in the security attacks as well as changes in trust associated with the network devices in the infrastructure. The adaptability of the proposed architecture to dynamic changes is achieved by having feedback between the various security components/mechanisms in the architecture and managing them using a dynamic policy framework. The paper describes the prototype implementation of the proposed architecture and presents security and performance analysis for different attack scenarios. We believe that the proposed integrated security architecture provides a significant step towards achieving a secure SDN for enterprises.
Service providers typically utilize Web APIs to enable the sharing of tenant data and resources with numerous third-party web, cloud, and mobile applications. Security mechanisms such as OAuth 2.0 and API keys are commonly applied to manage authorization aspects of such integrations. However, these mechanisms impose functional and security drawbacks both for service providers and their users due to their static design, coarse and context-insensitive capabilities, and weak interoperability. Implementing secure, feature-rich, and flexible data sharing services still poses a challenge that many providers face in the process of opening their interfaces to the public. To address these issues, we design a framework that allows pluggable and transparent externalization of authorization functionality for service providers and gives their users flexibility in defining and managing security aspects of resource sharing with third parties. Our solution applies a holistic perspective that considers service descriptions, data fragments, security policies, as well as system interactions and states as an integrated space dynamically exposed and collaboratively accessed by agents residing across organizational boundaries. In this work we present the design aspects of our contribution and illustrate its practical implementation by analyzing a case scenario involving resource sharing on a popular service.
To ensure the accountability of a cloud environment, security policies may be provided as a set of properties to be enforced by cloud providers. However, due to the sheer size of clouds, it can be challenging to provide timely responses to all the requests coming from cloud users at runtime. In this paper, we design and implement a middleware, PERMON, as a pluggable interface to OpenStack for intercepting and verifying the legitimacy of user requests at runtime, leveraging our previous work on proactive security verification to improve efficiency. We describe the detailed implementation of the middleware and demonstrate its usefulness through a use case.
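A rough sketch of the interception pattern, with made-up class names and a hard-coded map standing in for the results of proactive verification; the real middleware plugs into OpenStack's request pipeline rather than being called directly like this.

```java
// Illustrative request interceptor: forward a request only if a precomputed
// verification result marks it as legitimate.
import java.util.Map;
import java.util.function.Consumer;

public class InterceptSketch {
    record Request(String user, String action) {}

    // Stand-in for proactively verified (user, action) pairs.
    static final Map<String, Boolean> VERIFIED = Map.of(
            "alice:create_vm", true,
            "bob:delete_network", false);

    static void intercept(Request r, Consumer<Request> forward) {
        if (Boolean.TRUE.equals(VERIFIED.get(r.user() + ":" + r.action()))) {
            forward.accept(r); // legitimate: pass through to the cloud service
        } else {
            System.out.println("Blocked or deferred for verification: " + r);
        }
    }

    public static void main(String[] args) {
        Consumer<Request> service = req -> System.out.println("Forwarded: " + req);
        intercept(new Request("alice", "create_vm"), service);
        intercept(new Request("bob", "delete_network"), service);
    }
}
```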
Many organizations need effective solutions to store and analyze the enormous volumes of information they now hold. Cloud computing, as a facilitator, offers scalable resources and notable economic benefits in the form of reduced operational costs. This model, however, raises a wide range of security and privacy issues that have to be taken into consideration. Multi-tenancy, loss of control, and trust are the key issues in cloud computing environments. This paper surveys existing technologies and a comprehensive selection of both earlier and state-of-the-art work on cloud security and privacy. The paradigm shift that accompanies the adoption of cloud computing increasingly heightens the security and privacy considerations associated with its different facets, such as multi-tenancy, trust, loss of control, and accountability. Cloud platforms that process big data containing sensitive information therefore need to employ technical methods and organizational safeguards to avoid data protection failures that might lead to extensive and costly harm.
Despite the wide range of research and technologies that deal with the problem of routing in computer networks, there remains a gap between the level of network hardware administration and the level of business requirements and constraints. Little has been done in the literature to enforce such requirements directly on the network. This paper presents a new solution for specifying and directly enforcing security policies to control the routing configuration in a software-defined network, using Row-Level Security checks that enable fine-grained security policies on individual rows in database tables. We show, as a first step, how a specific class of such policies, namely multilevel security policies, can be enforced on a database-defined network, which presents an abstraction of a network's configuration as a set of database tables. We show that such policies can be used to control the flow of data in the network in either an upward or a downward manner.
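As an illustration of the kind of row-level policy involved, the sketch below prints a PostgreSQL-style RLS definition enforcing a "no read up" rule on a hypothetical forwarding table; the table and column names are assumptions, and in practice such a statement would be executed against the network database (e.g., via JDBC).

```java
// Illustrative only: a multilevel "no read up" policy expressed as row-level security.
public class RlsPolicySketch {
    public static void main(String[] args) {
        String policy = """
                ALTER TABLE flow_rules ENABLE ROW LEVEL SECURITY;
                -- A session at clearance :app.clearance may only read rules
                -- whose security label does not exceed its clearance.
                CREATE POLICY no_read_up ON flow_rules
                    FOR SELECT
                    USING (security_level <= current_setting('app.clearance')::int);
                """;
        System.out.println(policy);
    }
}
```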
To protect the sensitive information of an organization, proper access controls are needed, since several data breach incidents have happened because of broken access controls. Normally, an IT auditing process would be used to identify security weaknesses and should be able to detect potential access control violations in advance. However, most auditing processes are done manually and are not performed consistently because they require substantial resources; thus, auditing is often performed for quality assurance purposes only. This paper proposes an automated process to audit the access controls of the Windows server operating system. We define the audit checklist and use the controls defined in ISO/IEC 27002:2013 as a guideline for identifying audit objectives. In addition, an automated audit tool is developed for checking security controls against defined security policies. The result of an audit is an automatically generated list of passed and failed policies. If auditing is done consistently and automatically, intrusion incidents can be detected earlier and substantial damage prevented, ultimately increasing the reliability of the system.
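A minimal sketch of such an automated check, with illustrative control names and values: observed settings are compared against policy thresholds derived from the checklist, and each control is reported as passed or failed.

```java
// Illustrative audit check: compare observed settings with policy requirements.
import java.util.Map;

public class AuditSketch {
    public static void main(String[] args) {
        Map<String, Integer> policy = Map.of(
                "MinimumPasswordLength", 12,     // at least this long
                "AccountLockoutThreshold", 5);   // at most this many attempts
        // In a real tool these values would be read from the server configuration.
        Map<String, Integer> observed = Map.of(
                "MinimumPasswordLength", 8,
                "AccountLockoutThreshold", 5);

        policy.forEach((control, required) -> {
            int actual = observed.getOrDefault(control, -1);
            boolean passed = control.equals("AccountLockoutThreshold")
                    ? actual >= 0 && actual <= required
                    : actual >= required;
            System.out.printf("%s: required %d, actual %d -> %s%n",
                    control, required, actual, passed ? "PASS" : "FAIL");
        });
    }
}
```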
Software-Defined Networking (SDN) today provides complete control over data flow in the network. The SDN controller acts as a central point at which data is administered and traffic is managed. As an open-source product, SDN is more prone to security threats. Security policies must also be enforced, since the controller would otherwise be the most attractive target of attack. Attacks such as DDoS and DoS are the most common against the SDN controller. DDoS is a destructive attack that diverts the normal flow of traffic and floods the system with packets, halting it. Machine learning techniques help identify hidden and unexpected patterns in the network and hence help in analyzing network flows. Both classification-based and other techniques can help detect malicious flows based on parameters such as packet flow, time duration, accuracy, and precision rate. Researchers have used Bayesian networks, wavelets, support vector machines, and KNN to detect DDoS attacks. According to the literature review, KNN produces better results owing to its higher precision and lower false rate for detection. This paper presents a hybrid machine learning approach that improves on the existing KNN on the same data set, detecting DDoS attacks with greater accuracy and a higher precision rate. Results for traffic with both normal and abnormal behavior are shown; based on these results, the proposed algorithm is designed to provide a better approach than KNN and will be implemented in future work.
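For reference, a minimal sketch of the KNN baseline on two assumed flow features (packet rate and duration), with a made-up training set; the hybrid approach proposed in the paper is not reproduced here.

```java
// Toy KNN: label a flow as "ddos" or "normal" by majority vote of the k nearest flows.
import java.util.Comparator;
import java.util.List;

public class KnnSketch {
    record Flow(double packetsPerSec, double durationSec, String label) {}

    static String classify(List<Flow> training, double pps, double dur, int k) {
        List<Flow> nearest = training.stream()
                .sorted(Comparator.comparingDouble(
                        (Flow f) -> Math.hypot(f.packetsPerSec() - pps, f.durationSec() - dur)))
                .limit(k)
                .toList();
        long ddos = nearest.stream().filter(f -> f.label().equals("ddos")).count();
        return ddos > k / 2 ? "ddos" : "normal";
    }

    public static void main(String[] args) {
        List<Flow> training = List.of(
                new Flow(50, 10, "normal"), new Flow(80, 12, "normal"),
                new Flow(9000, 1, "ddos"), new Flow(12000, 2, "ddos"));
        System.out.println(classify(training, 10500, 1.5, 3)); // expected: ddos
    }
}
```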
Many government organizations in Libya have started transferring traditional government services to e-government. These e-services will benefit a wide range of the public. However, the deployment of e-government brings many new security issues. Attackers would take advantage of vulnerabilities in these e-services and conduct cyber attacks that would result in data loss, service interruptions, privacy loss, financial loss, and other significant losses. The number of vulnerabilities in e-services has increased due to the complexity of e-service systems, a lack of secure programming practices, misconfiguration of systems, web application vulnerabilities, or not staying up to date with security patches. Unfortunately, there is a lack of studies assessing the current security level of Libyan government websites. Therefore, this study aims to assess the current security of 16 Libyan government websites using a penetration testing framework. In this assessment, no exploits were committed or attempted on the websites. The penetration testing (pen test) framework comprises the following main phases: reconnaissance, scanning, enumeration, vulnerability assessment, and SSL encryption evaluation. The aim of a security assessment is to discover vulnerabilities that could be exploited by attackers. We also conducted a content analysis phase for all websites, in which we searched for information about security and privacy policy implementation on the government websites. The aim is to determine whether the websites are aware of currently accepted standards for security and privacy. From our security assessment of the 16 Libyan government websites, we compared the websites based on the number of vulnerabilities found and the level of security policies. We found only 9 websites with high- and medium-severity vulnerabilities. Many of these vulnerabilities are due to outdated software and systems, misconfiguration of systems, and not applying the latest security patches; they could be used by cyber attackers to attack the systems and cause damage. We also found 5 websites that did not implement any SSL encryption for data transactions. Lastly, only 2 websites have published security and privacy policies on their websites, which seems to indicate that the remaining websites are not concerned with current standards for security and privacy. Finally, we classify the 16 websites into 4 safety categories: highly unsafe, unsafe, somewhat unsafe, and safe. We found only 1 website with a highly unsafe ranking. Based on our findings, we conclude that the security level of the Libyan government websites is adequate but can be further improved. However, immediate actions need to be taken to mitigate possible cyber attacks by fixing the vulnerabilities and implementing SSL encryption. The websites also need to publish their security and privacy policies so that users can trust them.
Approaches for the automatic analysis of security policies at the source code level cannot trivially be applied to binaries. This is due to the lacking high-level semantics of low-level object code and the fundamental problem that control-flow recovery from binaries is difficult. We present a novel approach to recover the control flow of binaries that is both safe and efficient. The key idea of our approach is to use the information contained in security mechanisms to approximate the targets of computed branches. To achieve this, we first define a restricted control transition intermediate language (RCTIL), which restricts the number of possible targets for each branch to a finite number of given targets. Based on this intermediate language, we demonstrate how a safe model of the control flow can be recovered without data-flow analyses. Our evaluation shows that this makes our solution more efficient than existing solutions.
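The data structure behind the key idea can be sketched as follows: each computed branch site is associated with the finite set of targets derived from the security mechanism, and the recovered control-flow graph only ever gains edges to those targets. Names and addresses are illustrative.

```java
// Illustrative restricted control transition: a computed branch may only go
// to a finite, explicitly given set of targets.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RctilSketch {
    record RestrictedBranch(long site, Set<Long> allowedTargets) {}

    // Build a safe CFG over-approximation: edges only to the allowed targets.
    static void addCfgEdges(RestrictedBranch b, Map<Long, Set<Long>> cfg) {
        cfg.computeIfAbsent(b.site(), s -> new HashSet<>()).addAll(b.allowedTargets());
    }

    public static void main(String[] args) {
        Map<Long, Set<Long>> cfg = new HashMap<>();
        addCfgEdges(new RestrictedBranch(0x4006F0L, Set.of(0x400700L, 0x400720L)), cfg);
        System.out.println(cfg); // one branch site, two possible successor addresses
    }
}
```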