Bibliography
Organizations face the issue of how best to allocate their security resources. They therefore need an accurate method for assessing how many new vulnerabilities will be reported for the operating systems (OSs) they use in a given time period. Our approach consists of clustering vulnerabilities by leveraging the text information within vulnerability records, and then simulating the mean value function of vulnerabilities by relaxing the monotonic intensity function assumption that is prevalent among studies using software reliability models (SRMs) and nonhomogeneous Poisson processes (NHPPs). We applied our approach to the vulnerabilities of four OSs: Windows, Mac, iOS, and Linux. In terms of both curve fitting and prediction capability, our results are more accurate than those of a power-law model without clustering, drawn from a family of SRMs, in all cases we analyzed.
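To make the power-law baseline concrete, below is a minimal sketch of fitting a power-law mean value function m(t) = a·t^b (a common SRM/NHPP form) to cumulative vulnerability counts. The data, the scipy-based fit, and the forecast horizon are illustrative assumptions, not the paper's method or dataset.

```python
# Minimal sketch: fit a power-law mean value function m(t) = a * t^b
# (the form common in SRM/NHPP studies) to cumulative vulnerability counts.
# The counts below are synthetic placeholders, not real CVE data.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * np.power(t, b)

months = np.arange(1, 25)                              # observation times
cum_vulns = 5 * months ** 0.8 + np.random.randn(24)    # hypothetical counts

(a, b), _ = curve_fit(mean_value, months, cum_vulns, p0=(1.0, 1.0))
# Predicted new vulnerabilities over a future window (months 25-30):
forecast = mean_value(30, a, b) - mean_value(24, a, b)
print(f"alpha={a:.2f}, beta={b:.2f}, predicted new vulns: {forecast:.1f}")
```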
The advancement of technology has changed how people work and what software and hardware they use. From conventional personal computers to GPUs, hardware technology and capability have improved dramatically, and so have the operating systems that accompany them. Unfortunately, current industry practice compares OSs from a single perspective: either benchmarking hardware-level performance or performing penetration testing to check the security features of an OS. This rigid method of benchmarking does not really reflect the true performance of an OS, as the analysis is neither comprehensive nor conclusive. To illustrate this deficiency, the study performed hardware-level and operational-level benchmarking on Windows XP, Windows 7, and Windows 8; the results indicate that there are instances where Windows XP excels over its newer counterparts. Overall, the research shows that Windows 8 is a superior OS compared to its predecessors running on the same hardware. Furthermore, the findings show that automated benchmarking tools are less effective at benchmarking systems that run Windows XP and older OSs, as those systems do not support DirectX 11 and other advanced features that the hardware supports. This underlines the need for a unified benchmarking approach that compares other aspects of an OS, such as user-oriented tasks and security parameters, to provide a complete comparison. This paper therefore proposes a unified approach for operating system (OS) comparison, illustrated with a Windows OS case study. The unified approach compares OSs from three aspects: hardware-level performance, operational-level performance, and security tests.
Digital forensic investigators today are faced with numerous problems when recovering footprints of criminal activity that involve the use of computer systems. Investigators need the ability to recover evidence in a forensically sound manner, even when criminals actively work to alter the integrity, veracity, and provenance of the data, applications, and software used to support illicit activities. In many ways, operating systems (OS) can be strengthened from a technological viewpoint to support verifiable, accurate, and consistent recovery of system data when needed for forensic collection efforts. In this paper, we extend ideas for forensic-friendly OS design by proposing the use of a practical form of computing on encrypted data (CED) and computing with encrypted functions (CEF), which builds upon prior work on component encryption (in circuits) and white-box cryptography (in software). We conduct experiments on sample programs to analyze the approach in terms of security and efficiency, illustrating how component encryption can strengthen key OS functions and improve tamper resistance to anti-forensic activities. We analyze the tradeoff space for use of the algorithm in a holistic approach that provides additional security and properties comparable to fully homomorphic encryption (FHE).
In this paper, we present a security and privacy enhancement (SPE) framework for unmodified mobile operating systems. SPE introduces a new layer between the application and the operating system and does not require that a device be jailbroken or run a custom operating system. We utilize an existing ontology designed for enforcing security and privacy policies on mobile devices to build a customizable policy. Based on this policy, SPE enhances the native controls that currently exist on the platform for privacy- and security-sensitive components. SPE allows access to these components in a way that lets the framework ensure the application is truthful in its declared intent and that the user's policy is enforced. In our evaluation, we verify the correctness of the framework and measure its computational impact on the device. Additionally, we discovered security and privacy issues in several open-source applications by utilizing the SPE framework. Based on our findings, if SPE is adopted by mobile operating system producers, it would provide consumers and businesses the additional privacy and security controls they demand and allow users to be more aware of security and privacy issues with the applications on their devices.
Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or in missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that, using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.
Critical information systems strongly rely on event logging techniques to collect data, such as housekeeping/error events, execution traces, and dumps of variables, into unstructured text logs. Event logs are the primary source for gaining actionable intelligence from production systems. Despite their recognized importance, system and application logs remain quite underutilized in security analytics compared to conventional, structured data sources such as audit traces, network flows, and intrusion detection logs. This paper proposes a method to measure the occurrence of interesting activity (i.e., entries that should be followed up by analysts) within textual and heterogeneous runtime log streams. We use an entropy-based approach, which makes no assumptions about the structure of the underlying log entries. Measurements were made in a real-world Air Traffic Control information system through a data analytics framework. Experiments suggest that our entropy-based method is a valuable complement to security analytics solutions.
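As a rough illustration of the entropy idea (not the paper's exact method), the sketch below computes the Shannon entropy of token frequencies over a sliding window of log lines; a sharp change in entropy flags windows worth an analyst's follow-up. The log lines and window size are hypothetical.

```python
# Sketch: Shannon entropy of token frequencies per sliding window of log
# lines; no assumption is made about the structure of the entries.
import math
from collections import Counter

def window_entropy(lines):
    tokens = [tok for line in lines for tok in line.split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

stream = ["radar track update ok", "radar track update ok",
          "auth failure for user x", "dump of variable state"]
window = 2
for i in range(len(stream) - window + 1):
    print(i, round(window_entropy(stream[i:i + window]), 3))
```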
Mobile applications have grown from knowing basic personal information to knowing intimate details of consumers' lives. The explosion of knowledge that applications contain and share can be attributed to many factors. Mobile devices are equipped with advanced sensors, including GPS and cameras, while storing large amounts of personal information, including photos and contacts. With millions of applications available to install, personal data is at constant risk of being misused. While mobile operating systems provide basic security and privacy controls, they are insufficient, leaving the consumer unaware of how applications use the permissions they were granted. In this paper, we propose a solution that aims to make consumers aware of applications misusing data and of policies that can protect their data. From this investigation we present SPEProxy. SPEProxy utilizes a knowledge-based approach to give consumers the ability to understand how applications use permissions beyond their stated intent. Additionally, SPEProxy provides awareness of fine-grained policies that allow the user to protect their data. SPEProxy is device and mobile operating system agnostic, meaning it requires neither a specific device or operating system nor modifications to the operating system or applications. This approach allows consumers to utilize the solution without a high degree of technical expertise. We evaluated SPEProxy across 817 of the most popular applications in the iOS App Store and Google Play. In our evaluation, SPEProxy was highly effective across 86.55% of applications, where several well-known applications exhibited misuse of granted permissions.
OpenFlow has recently emerged as a powerful paradigm for building dynamic, adaptive, and agile networks. By decoupling the control plane from the data plane, OpenFlow allows network operators to program a centralized intelligence, the OpenFlow controller, to manage network-wide traffic flows to meet changing needs. However, from a security point of view, a buggy or even malicious controller could compromise the control logic, and then the entire network. Even worse, the recent Stuxnet attack on industrial control systems indicates a similar, severe threat to OpenFlow controllers from the commodity operating systems they run on. In this paper, we comprehensively study the attack vectors against the critical OpenFlow component, the controller, and propose a cross-layer diversity approach that enables OpenFlow controllers to detect attacks, corruptions, and failures, and then automatically continue correct execution. Case studies demonstrate that our approach can protect OpenFlow controllers from threats originating both in compromised operating systems and in the controllers themselves.
Supercomputers are widely applied in various domains thanks to their high processing capability and mass storage. As the number of supercomputing users grows, system security receives broad attention and becomes more and more important. In this paper, based on the characteristics of the supercomputing environment, we perform an in-depth analysis of existing security problems in the process of using resources. To solve these problems, we propose a security analysis method and a prototype system for supercomputing users' behavior. The basic idea is to reconstruct complete user behavior paths and operation records based on the supercomputing business process and to track the use of resources. Finally, the method is evaluated, and the results show that this security analysis of user behavior can help administrators detect security incidents in time and respond quickly. The ultimate purpose is to optimize and improve the security level of the whole system.
GENI (Global Environment for Network Innovations) is a National Science Foundation (NSF) funded program which provides a virtual laboratory for networking and distributed systems research and education. It is well suited for exploring networks at scale, thereby promoting innovations in network science, security, services, and applications. GENI allows researchers to obtain compute resources from locations around the United States, connect compute resources using 100G Internet2 L2 service, install custom software or even custom operating systems on these compute resources, control how network switches in their experiment handle traffic flows, and run their own L3 and above protocols. The GENI architecture incorporates cloud federation. With federation, cloud resources can be federated and/or a community of clouds can be formed. The heart of federation is user identity and the ability to "advertise" cloud resources into the community, including compute, storage, and networking. GENI administrators can carve out what resources are available to the community, so a portion of GENI resources is reserved for internal consumption. The GENI architecture also provides "stitching" of the compute and storage resources researchers request. This provides an L2 network domain over Internet2's 100G network, and researchers can run their Software Defined Networking (SDN) controllers on the provisioned L2 network domain for complete control of network traffic. This capability is useful for large science data transfers (bypassing security devices for high throughput). The Renaissance Computing Institute (RENCI), a research institute in the state of North Carolina, has developed ORCA (Open Resource Control Architecture), a GENI control framework. ORCA is a distributed resource orchestration system designed to serve science experiments. ORCA provides compute resources as virtual machines as well as bare metal. The ORCA-based GENI rack was designed to serve both High Throughput Computing (HTC) and High Performance Computing (HPC) types of computation. Although GENI is primarily used in universities and research entities today, its architecture can be leveraged in commercial, aerospace, and government settings. This paper goes over the GENI architecture and discusses its application to scientific computing experiments.
One essential function of a modern operating system is to accurately account for the resource usage of the underlying hardware. This is especially important for computing systems that operate on battery power, since energy management requires accurately attributing resource use to processes. However, components such as sensors, actuators, and specialized network interfaces are often used in an asynchronous fashion, which makes accurate resource accounting difficult. For example, a process that makes a request to a sensor may not be running on the processor for the full duration of the resource usage, and current resource accounting mechanisms fail to account accurately for such asynchronous use. This paper proposes a new mechanism to accurately account for the asynchronous usage of resources in mobile systems. Our insight is that by accurately relating user requests to the corresponding kernel requests to a device and to that device's responses, we can attribute resource use to the requesting process. Our prototype, implemented in Linux, demonstrates that we can accurately account for the usage of asynchronous resources such as GPS and WiFi.
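A toy model of this insight follows, under the assumption that each kernel request can be tagged with the issuing process's PID and later matched to the device response; the class and field names are hypothetical, and the real mechanism lives in the kernel rather than in Python.

```python
# Illustrative model (not the paper's Linux implementation): tie each kernel
# request to the requesting PID so that energy measured when the device
# response arrives is charged to the requester, even if that process is no
# longer running on the processor.
class AsyncAccounting:
    def __init__(self):
        self.pending = {}   # request id -> requesting pid
        self.usage = {}     # pid -> accumulated energy (mJ)

    def on_request(self, req_id, pid):
        self.pending[req_id] = pid

    def on_response(self, req_id, energy_mj):
        pid = self.pending.pop(req_id)
        self.usage[pid] = self.usage.get(pid, 0.0) + energy_mj

acct = AsyncAccounting()
acct.on_request(req_id=1, pid=42)            # process 42 asks for a GPS fix
acct.on_response(req_id=1, energy_mj=120.0)  # fix arrives after 42 slept
print(acct.usage)                            # {42: 120.0}
```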
In the last decade, the number of available cores has increased and heterogeneity has grown. In this work, we ask whether the design of current operating systems (OSes) is still appropriate if these trends continue and lead to abundantly available but heterogeneous cores, or whether it forces a fundamental rethinking of how systems are designed. We argue that: 1. hiding heterogeneity behind a common hardware interface unifies, to a large extent, the control and coordination of cores and accelerators in the OS; 2. isolating at the network-on-chip, rather than with processor features (like privileged mode, memory management unit, ...), allows running untrusted code on arbitrary cores; and 3. providing OS services via protocols over the network-on-chip, instead of via system calls, makes them accessible to arbitrary types of cores as well. In summary, this turns accelerators into first-class citizens and enables a single and convenient programming environment for all cores without the need to trust any application. In this paper, we introduce network-on-chip-level isolation, present the design of our microkernel-based OS, M3, and the common hardware interface, and evaluate the performance of our prototype in comparison to Linux. Somewhat surprisingly, even without using accelerators, M3 outperforms Linux in some application-level benchmarks by more than a factor of five.
Information security management is time-consuming and error-prone. Apart from day-to-day operations, organizations need to comply with industrial regulations or government directives. Thus, organizations are looking for security tools that automate security management tasks and daily operations. The Security Content Automation Protocol (SCAP) is a suite of specifications that help automate security management tasks such as vulnerability measurement and policy compliance evaluation. A SCAP benchmark provides detailed guidance on setting the security configuration of network devices, operating systems, and applications, and organizations can use it to perform automated configuration compliance assessment on those systems. This paper discusses SCAP benchmark components and the development of a SCAP benchmark for automating Cisco router security configuration compliance.
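To convey the flavor of benchmark-driven compliance checking (this sketch is not the SCAP/XCCDF/OVAL format itself), consider rules expressed as patterns that must, or must not, match a line of a router configuration; the rules and configuration below are hypothetical.

```python
# Hedged sketch of automated configuration compliance assessment: each rule
# is a regex that either must appear (True) or must not appear (False) in
# the device configuration. Rules and config are illustrative only.
import re

rules = [
    ("service password-encryption enabled", r"^service password-encryption", True),
    ("HTTP server disabled", r"^ip http server", False),
]

config = """\
service password-encryption
no ip http server
"""

for title, pattern, must_exist in rules:
    found = any(re.match(pattern, line) for line in config.splitlines())
    status = "PASS" if found == must_exist else "FAIL"
    print(f"{status}: {title}")
```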
Server honeypots are computer systems that hide in a network, capturing attack packets. As the name suggests, server honeypots are installed on server machines running a set of services. Enterprises and government organisations deploy these honeypots to learn the extent of attacks on their networks. Since most recent attacks are advanced persistent attacks, much research is going into building better perimeter security measures. In this paper, the authors deploy several honeypots in a virtualized environment to gather traces of malicious activity. The network infrastructure is resilient and provides much information about attackers' activities. It is cost-effective and can easily be deployed in any organisation without specialized hardware.
Almost all commodity IT devices include firmware and software components from non-US suppliers, potentially introducing grave vulnerabilities to homeland security by enabling cyber-attacks via flaws injected into these devices through the supply chain. However, determining that a given device is free of any and all implementation flaws is computationally infeasible in the general case; hence a critical part of any vetting process is prioritizing what kinds of flaws are likely to enable potential adversary goals. We present Theseus, a four-year research project sponsored by the DARPA VET program. Theseus will provide technology to automatically map and explore the firmware/software (FW/SW) architecture of a commodity IT device and then generate attack scenarios for the device. From these device attack scenarios, Theseus then creates a prioritized checklist of FW/SW components to check for potential vulnerabilities. Theseus combines static program analysis, attack graph generation algorithms, and a Boolean satisfiability solver to automate the checklist generation workflow. We describe how Theseus exploits analogies between the commodity IT device problem and attack graph generation for networks. We also present a novel approach called Component Interaction Mapping to recover a formal model of a device's FW/SW architecture from which attack scenarios can be generated.
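To illustrate how attack-path enumeration can drive a prioritized checklist (this is an illustrative sketch, not Theseus or its algorithms), consider a hypothetical component-interaction graph in which components are ranked by how many adversary-goal paths traverse them.

```python
# Sketch: enumerate attack paths over a hypothetical FW/SW component-
# interaction graph, then rank components by how many adversary-goal paths
# traverse them -- a simple proxy for vetting priority.
from collections import Counter
from itertools import chain

edges = {                       # hypothetical component interactions
    "nic_fw": ["driver"],
    "driver": ["kernel"],
    "kernel": ["update_svc", "crypto_lib"],
    "update_svc": ["root_shell"],
    "crypto_lib": ["root_shell"],
}

def paths(node, goal, seen=()):
    if node == goal:
        return [(node,)]
    return [(node,) + p
            for nxt in edges.get(node, []) if nxt not in seen
            for p in paths(nxt, goal, seen + (node,))]

attack_paths = paths("nic_fw", "root_shell")
# Count only interior components; endpoints are fixed by the scenario.
priority = Counter(chain.from_iterable(p[1:-1] for p in attack_paths))
print(priority.most_common())   # components on the most attack paths first
```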
Digital forensics is an area of forensic science that applies the scientific method to crime investigation. The thwarting of forensic evidence is known as anti-forensics, the aim of which is ambiguous in the sense that it could be bad or good. The aim of this project is to simulate digital crime scenarios and carry out forensic and anti-forensic analysis to enhance security. The project uses several forensic and anti-forensic tools and techniques to carry out this work. The data analyzed were obtained from the results of the simulation. The results reveal that although investigating digital crime might be difficult, it can be accomplished with the help of sophisticated forensic and anti-forensic tools.
Based on an analysis of the relationships between challenger and attester in the remote attestation process, we propose a dynamic remote attestation model based on concerns. By combining the trusted root with a dynamic trusted monitoring module, the model converts the measurement of all loaded modules in the integrity measurement architecture into attestation of the basic computing environment, the dynamic trusted monitoring module, and the requested service software module. We discuss the rationality of the model. The model uses a Merkle hash tree to store application software integrity metrics, which both protects the privacy of the other party's application software and improves the efficiency of remote attestation. An experimental prototype system shows that the model can verify the dynamic behavior of software, making up for the shortcomings of static measurement.
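A minimal sketch of the Merkle-hash-tree idea follows: the verifier holds only the root, and any one module's integrity metric can be proven with a logarithmic-size path without revealing the other modules' hashes. Module names and tree size are illustrative, and the sketch assumes a power-of-two leaf count.

```python
# Merkle hash tree over software integrity metrics: prove one leaf against
# the root without disclosing sibling leaves themselves.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def proof(levels, idx):
    path = []
    for level in levels[:-1]:
        sib = idx ^ 1                       # sibling index at this level
        path.append((level[sib], sib < idx))
        idx //= 2
    return path

def verify(leaf, path, root):
    node = leaf
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [h(m.encode()) for m in ["mod_a", "mod_b", "mod_c", "mod_d"]]
levels = build_levels(leaves)
root = levels[-1][0]
print(verify(leaves[2], proof(levels, 2), root))   # True
```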
Denial-of-Service (DoS) attacks pose a threat to any service provider on the internet. While traditional DoS flooding attacks require the attacker to control at least as many resources as the service provider in order to be effective, so-called low-rate DoS attacks can exploit weaknesses in careless design to effectively deny a service using minimal amounts of network traffic. This paper investigates one such weakness found in version 2.2 of the popular Apache HTTP Server software. The weakness concerns how the server handles the persistent connection feature in HTTP 1.1. An attack simulator exploiting this weakness has been developed and shown to be effective. The attack was then studied with spectral analysis to examine how well it could be detected. Similar to other papers on spectral analysis of low-rate DoS attacks, the results show that disproportionate amounts of energy in the lower frequencies can be detected when the attack is present. However, by randomizing the attack pattern, an attacker can efficiently reduce this disproportion to a degree where it might be impossible to correctly identify an attack in a real-world scenario.
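A hedged sketch of the detection idea on synthetic traffic: compute the power spectrum of a packets-per-interval series and compare the energy below a cutoff frequency with the total energy; periodic low-rate bursts concentrate energy in the low band. The sampling rate, burst shape, and cutoff are assumptions, not the paper's parameters.

```python
# Spectral detection sketch: low-rate periodic bursts shift spectral energy
# toward low frequencies relative to bursty-but-white background traffic.
import numpy as np

fs = 100.0                       # samples/sec (10 ms bins), assumed
t = np.arange(0, 60, 1 / fs)
normal = np.random.poisson(5, t.size).astype(float)
attack = normal + 40 * (np.sin(2 * np.pi * 0.5 * t) > 0.99)  # 0.5 Hz bursts

def low_freq_ratio(series, cutoff_hz=2.0):
    spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(series.size, d=1 / fs)
    return spectrum[freqs <= cutoff_hz].sum() / spectrum.sum()

print("normal:", round(low_freq_ratio(normal), 3))   # near uniform share
print("attack:", round(low_freq_ratio(attack), 3))   # disproportionate
```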
In order to strengthen network security and improve the network's active-defense intrusion detection capabilities, this paper presents an active-defense intrusion detection system based on a mixed-interaction honeypot. The system helps to reduce false information and to enhance the stability and security of the network. Testing and simulation experiments show that the system improves the network's active defense, increases the honeypot's decoy capability, and strengthens attack prediction. It therefore has good application and promotion value.
Summary form only given. In this presentation, several issues regarding operating system security are investigated. The general problems of OS security are addressed. We also discuss why we should consider the security aspects of the OS, and when a secure OS is needed. We delve into secure OS design as well, focusing on covert channel analysis. The specific operating systems under consideration include Windows and Android.
Cyber-attacks continue to pose a major threat to existing critical infrastructure. Although suggestions for defensive strategies abound, Moving Target Defense (MTD) has only recently gained attention as a possible solution for mitigating cyber-attacks. The current work proposes an MTD technique that provides enhanced security through rotation of multiple operating systems. The MTD solution developed in this research utilizes existing technology to provide a feasible dynamic defense that can be deployed easily in a real networking environment. In addition, the system we developed was tested extensively for effectiveness using CORE Impact Pro (CORE), Nmap, and manual penetration tests. The test results showed that platform diversity and rotation offer improved security. In addition, the likelihood of a successful attack decreased proportionally with the time between rotations.
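A back-of-the-envelope simulation of the rotation intuition, under assumed parameters (the attacker needs t_exploit uninterrupted seconds on one vulnerable OS out of n_os in the rotation, and a rotation resets attack progress); this is not the paper's testbed or its measurements.

```python
# Toy simulation: shorter rotation intervals leave less uninterrupted time
# on the vulnerable platform, lowering the chance of a completed exploit.
import random

def p_success(rotation_s, t_exploit=300, n_os=4, trials=100_000):
    wins = 0
    for _ in range(trials):
        offset = random.uniform(0, rotation_s)      # attack start in slot
        on_vulnerable = random.randrange(n_os) == 0 # 1-in-n_os platforms
        if on_vulnerable and (rotation_s - offset) >= t_exploit:
            wins += 1
    return wins / trials

for r in (300, 600, 1200, 2400):
    print(f"rotation every {r:>4}s -> P(success) ~ {p_success(r):.3f}")
```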
Protecting modern computer systems and complex software stacks against the growing range of possible attacks is becoming increasingly difficult. The architecture of modern commodity systems allows attackers to subvert privileged system software often using a single exploit. Once the system is compromised, inclusive permissions used by current architectures and operating systems easily allow a compromised high-privileged software layer to perform arbitrary malicious activities, even on behalf of other software layers. This paper presents a hardware-supported page permission scheme for the physical pages that is based on the concept of non-inclusive sets of memory permissions for different layers of system software such as hypervisors, operating systems, and user-level applications. Instead of viewing privilege levels as an ordered hierarchy with each successive level being more privileged, we view them as distinct levels each with its own set of permissions. Such a permission mechanism, implemented as part of a processor architecture, provides a common framework for defending against a range of recent attacks. We demonstrate that such a protection can be achieved with negligible performance overhead, low hardware complexity and minimal changes to the commodity OS and hypervisor code.
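A toy model of the non-inclusive idea (illustrative only, not the proposed hardware): each physical page carries an independent permission set per software layer, so a higher-privileged layer does not automatically inherit access to a lower layer's pages. The layer and permission names are hypothetical.

```python
# Non-inclusive per-layer page permissions: privilege levels are distinct
# permission sets, not an ordered hierarchy.
from enum import Flag, auto

class Perm(Flag):
    NONE = 0
    R = auto()
    W = auto()
    X = auto()

class Layer:
    HYPERVISOR, OS, USER = range(3)

# page -> per-layer permission sets for one hypothetical user secret page
page_perms = {
    0x1000: {Layer.HYPERVISOR: Perm.NONE,   # higher layers get nothing
             Layer.OS: Perm.NONE,
             Layer.USER: Perm.R | Perm.W},
}

def check(page, layer, want):
    return want in page_perms[page][layer]

print(check(0x1000, Layer.USER, Perm.W))        # True
print(check(0x1000, Layer.HYPERVISOR, Perm.R))  # False: non-inclusive
```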
We are currently moving from the Internet society to a mobile society, where more and more access to information is done through previously dumb phones. For example, the number of mobile phones using a full-blown OS rose by nearly 200% from Q3/2009 to Q3/2010. As a result, mobile security is no longer inherent, but imperative. This survey paper provides a concise overview of mobile network security, covering attack vectors that use the back-end system and the web browser, but also the hardware layer and the user as attack enablers. We show differences and similarities between "normal" security and mobile security, and draw conclusions for further research opportunities in this area.