Biblio
IT system risk assessments are indispensable due to increasing cyber threats within our ever-growing IT systems. Moreover, laws and regulations urge organizations to conduct risk assessments regularly. Even though several risk management frameworks and methodologies exist, they are in general high level, defining neither the risk metrics, the risk metric values, nor the detailed risk assessment formulas for different risk views. To address this need, we define a novel risk assessment methodology specific to IT systems. Our model is quantitative, both asset- and vulnerability-centric, and defines low-level and high-level risk metrics. High-level risk metrics are defined in two general categories: base and attack-graph-based. In our paper, we provide a detailed explanation of the formulations in each category and make our implemented software publicly available for those interested in applying the proposed methodology to their IT systems.
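As a rough illustration of what a quantitative, asset- and vulnerability-centric base metric can look like, the sketch below combines three normalized low-level metrics multiplicatively; the metric names and the formula are hypothetical placeholders, not the formulations defined in the paper.

```python
# Hypothetical sketch of a quantitative, asset- and vulnerability-centric base
# risk score. The metric names and the multiplicative formula are illustrative
# assumptions, not the formulations defined in the paper.

def base_risk(asset_criticality: float, likelihood: float, impact: float) -> float:
    """Combine low-level metrics (each normalized to [0, 1]) into a base risk score."""
    return asset_criticality * likelihood * impact

def system_risk(assets: dict) -> float:
    """Aggregate per-asset risk as a simple maximum over the IT system."""
    return max(
        base_risk(a["criticality"], v["likelihood"], v["impact"])
        for a in assets.values()
        for v in a["vulnerabilities"]
    )

assets = {
    "web-server": {
        "criticality": 0.9,
        "vulnerabilities": [{"likelihood": 0.6, "impact": 0.8}],
    },
}
print(f"system base risk: {system_risk(assets):.2f}")  # 0.43
```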
A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say k, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model, whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication-efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n²), i.e., essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution. (Here and throughout the paper, the Õ(·) notation suppresses polylog(n) factors, where n is the number of vertices in the graph.) In other words, a small subgraph of the input graph at each machine can be identified as its representative summary, and the final answer is then obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out the existence of smaller randomized composable coresets for these problems, but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e., not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.
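For intuition about the protocol structure, the sketch below randomly partitions the edges across k machines, has each machine send a small subgraph (here simply a greedy maximal matching of its local edges, used as a stand-in for the paper's randomized composable coreset), and runs a matching algorithm on the union; the coreset construction actually analyzed in the paper differs.

```python
# Illustrative sketch of the simultaneous protocol under a random edge partition.
# Each machine's summary here is a greedy maximal matching of its local subgraph,
# a simplification for intuition rather than the coreset analyzed in the paper.
import random

def greedy_matching(edges):
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def simultaneous_matching(edges, k, seed=0):
    rng = random.Random(seed)
    machines = [[] for _ in range(k)]
    for e in edges:                      # random partition of the input edges
        machines[rng.randrange(k)].append(e)
    summaries = [greedy_matching(local) for local in machines]   # O(n) edges each
    union = [e for summary in summaries for e in summary]
    return greedy_matching(union)        # final answer on the combined subgraphs

edges = [(i, i + 1) for i in range(20)]  # a path graph
print(len(simultaneous_matching(edges, k=4)))
```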
Over the last decade, a globalization of the software industry has taken place, facilitating the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities now being shared. For example, vulnerabilities found in APIs no longer affect only individual projects but instead might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task, since many of the existing resources required for such analysis still rely on proprietary knowledge representations. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.
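A minimal sketch of the cross-project tracing idea, using plain Python triples instead of an actual ontology and reasoner; the vulnerability, API, and project names as well as the property names are hypothetical.

```python
# Minimal sketch of tracing a vulnerability across project boundaries over a
# linked knowledge graph. The triples and property names below are hypothetical
# placeholders; the paper's approach uses Semantic Web ontologies and reasoning
# services rather than this hand-rolled traversal.
triples = [
    ("CVE-2021-0001", "affects", "lib-A:parseInput"),
    ("project-X", "dependsOn", "lib-A:parseInput"),
    ("project-Y", "dependsOn", "project-X"),
    ("project-Z", "dependsOn", "project-Y"),
]

def impacted_projects(vuln, triples):
    """Transitively collect everything reachable via reverse dependsOn edges."""
    affected = {o for s, p, o in triples if s == vuln and p == "affects"}
    impacted = set()
    changed = True
    while changed:
        changed = False
        for s, p, o in triples:
            if p == "dependsOn" and o in affected | impacted and s not in impacted:
                impacted.add(s)
                changed = True
    return impacted

print(sorted(impacted_projects("CVE-2021-0001", triples)))
# ['project-X', 'project-Y', 'project-Z']
```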
Software attacks are commonly performed against embedded systems in order to access private data or to run restricted services. In this work, we demonstrate some vulnerabilities of commonly used processors that can be leveraged by attackers to compromise a system. The targeted devices are based on the open processor architectures OpenRISC and RISC-V. Several software exploits are discussed and demonstrated, while a hardware countermeasure is proposed and validated on OpenRISC against Return-Oriented Programming (ROP) attacks.
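The abstract does not describe the hardware countermeasure in detail; one common hardware defense against ROP is a shadow stack, modeled below in software purely to illustrate the idea of validating return addresses.

```python
# Conceptual model of one common hardware defense against Return-Oriented
# Programming: a shadow stack that keeps its own copy of return addresses and
# raises an alert when a return target has been tampered with. This illustrates
# the general idea only, not the countermeasure described in the paper.

class ShadowStack:
    def __init__(self):
        self._shadow = []

    def on_call(self, return_address: int) -> None:
        self._shadow.append(return_address)

    def on_return(self, return_address: int) -> None:
        expected = self._shadow.pop()
        if return_address != expected:
            raise RuntimeError(
                f"ROP suspected: return to {return_address:#x}, expected {expected:#x}"
            )

stack = ShadowStack()
stack.on_call(0x4005D0)
stack.on_return(0x4005D0)          # legitimate return
stack.on_call(0x4006A0)
try:
    stack.on_return(0x400720)      # overwritten return address -> detected
except RuntimeError as err:
    print(err)
```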
Generative policies enable devices to generate their own policies that are validated, consistent, and conflict-free. This autonomy is required for security policy generation to deal with the large number of smart devices per person that will soon become reality. In this paper, we discuss the research issues that have to be addressed in order for devices involved in security enforcement to automatically generate their security policies, enabling policy-based autonomous security management. We discuss the challenges involved in the task of automatic security policy generation and outline some approaches based on machine learning that may potentially provide a solution.
With the rapid growth in the number of social tagging users, it has become very important for social tagging systems to recommend the required resources to users rapidly and accurately. Firstly, the architecture of an agent-based intelligent social tagging system is constructed using agent technology. Secondly, the design and implementation of user interest mining, personalized recommendation, and common preference group recommendation are presented. Finally, a self-adaptive recommendation strategy for social tagging and its implementation are proposed, based on an analysis of the shortcomings of the personalized recommendation strategy and the common preference group recommendation strategy. The self-adaptive recommendation strategy achieves a balance between efficiency and accuracy, resolving the conflict between efficiency and accuracy in the personalized recommendation model and the common preference recommendation model.
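A hypothetical sketch of such a self-adaptive strategy: under high load the system falls back to the fast common-preference-group recommendation, otherwise it uses the more accurate personalized recommendation; the switching rule and the recommender stubs are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of a self-adaptive recommendation strategy that switches
# between a slower personalized recommender and a faster common-preference-group
# recommender depending on the current system load.

def personalized_recommend(user_tags, resources):
    """Accurate but expensive: rank resources by overlap with the user's own tags."""
    return sorted(resources, key=lambda r: -len(user_tags & resources[r]))[:5]

def group_recommend(group_top_resources):
    """Fast but coarser: return the precomputed top resources of the user's preference group."""
    return group_top_resources[:5]

def self_adaptive_recommend(user_tags, resources, group_top_resources, load, threshold=0.7):
    if load > threshold:                      # under high load, favor efficiency
        return group_recommend(group_top_resources)
    return personalized_recommend(user_tags, resources)  # otherwise favor accuracy

resources = {"r1": {"python", "ml"}, "r2": {"cooking"}, "r3": {"ml", "security"}}
print(self_adaptive_recommend({"ml", "python"}, resources, ["r3", "r1"], load=0.4))
```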
In this paper, we study the sensor placement problem in urban water networks that maximizes the localization of pipe failures given that some sensors give incorrect outputs. A false output from a sensor might result from degradation of the sensor's hardware, from a software fault, or from a cyber attack on the sensor. Incorrect outputs from such sensors can take arbitrary values, which could lead to inaccurate localization of a failure event. We formulate the optimal sensor placement problem with erroneous sensors as a set multicover problem, which is NP-hard, and then discuss a polynomial-time heuristic to obtain efficient solutions. In this direction, we first examine the physical model of the disturbance propagating through the network as a result of a failure event and outline the multi-level sensing model that captures several event features. Second, using a combinatorial approach, we solve the problem of sensor placement that maximizes the localization of pipe failures by selecting m sensors, out of which at most e give incorrect outputs. We propose various localization performance metrics and numerically evaluate our approach on a benchmark and a real water distribution network. Finally, using computational experiments, we study relationships between design parameters such as the total number of sensors, the number of sensors with errors, and the extracted signal features.
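The sketch below shows the standard greedy heuristic for set multicover, where each failure event must be covered by at least c of the selected sensors (c being a coverage requirement chosen to tolerate erroneous sensors); the paper's polynomial-time heuristic and exact coverage requirement may differ.

```python
# Standard greedy heuristic for the set-multicover view of sensor placement:
# every failure event must be detected (covered) by at least `c` selected
# sensor locations. This is an illustration, not the paper's exact heuristic.

def greedy_multicover(events, candidate_sensors, c):
    """candidate_sensors maps sensor id -> set of failure events it can detect."""
    remaining = {e: c for e in events}        # remaining coverage demand per event
    selected = []
    while any(remaining.values()):
        candidates = [s for s in candidate_sensors if s not in selected]
        if not candidates:
            break                             # demand cannot be met with the remaining sensors
        best = max(
            candidates,
            key=lambda s: sum(1 for e in candidate_sensors[s] if remaining.get(e, 0) > 0),
        )
        selected.append(best)
        for e in candidate_sensors[best]:
            if remaining.get(e, 0) > 0:
                remaining[e] -= 1
    return selected

sensors = {"s1": {"e1", "e2"}, "s2": {"e2", "e3"}, "s3": {"e1", "e3"}, "s4": {"e3"}}
print(greedy_multicover({"e1", "e2", "e3"}, sensors, c=2))  # e.g. ['s1', 's2', 's3']
```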
In the past decade, the information security and threat landscape has grown significantly, making it difficult for a single defender to defend against all attacks at the same time. This has called for introducing information sharing, a paradigm in which threat indicators are shared in a community of trust to facilitate defenses. Standards for the representation, exchange, and consumption of indicators have been proposed in the literature, although various issues remain insufficiently addressed. In this paper, we take the position of rethinking information sharing for actionable intelligence by highlighting various issues that deserve further exploration. We argue that information sharing can benefit from well-defined use models, threat models, well-understood risk through measurement and robust scoring, well-understood and preserved privacy and quality of indicators, and robust mechanisms to avoid the free-riding behavior of selfish agents. We call for using the differential nature of data and community structures to optimize sharing designs and structures.
The evolution of information and communication technologies has brought new challenges in managing the Internet. Software-Defined Networking (SDN) aims to provide easily configured and remotely controlled networks based on centralized control. Since SDN will be the next disruption in networking, SDN security has become a hot research topic because of its importance in communication systems. A centralized controller can become a focal point of attack, so preventing attacks on the controller is a priority. The whole network is affected if an attacker gains access to the controller. One of the attacks that affect the SDN controller is the DDoS attack. This paper reviews the different detection techniques that are available to prevent DDoS attacks, the characteristics of these techniques, and the issues that may arise when using them.
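As an example of one detection technique commonly covered in such reviews, the sketch below implements entropy-based detection: a sharp drop in the entropy of destination addresses within a traffic window suggests flows converging on a single victim; the window size and threshold are illustrative assumptions.

```python
# Sketch of entropy-based DDoS detection, one technique commonly reviewed for
# SDN controllers. Low destination-address entropy over a window of packet-in
# events suggests traffic concentrating on a single victim.
import math
from collections import Counter

def entropy(destinations):
    counts = Counter(destinations)
    total = len(destinations)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_ddos_window(destinations, threshold=1.0):
    return entropy(destinations) < threshold

normal = ["10.0.0.%d" % (i % 8) for i in range(64)]                # spread across hosts
attack = ["10.0.0.1"] * 60 + ["10.0.1.%d" % i for i in range(4)]   # one victim dominates
print(is_ddos_window(normal), is_ddos_window(attack))              # False True
```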
Hardware Malware Detectors (HMDs) have recently been proposed as a defense against the proliferation of malware. These detectors use low-level features that can be collected by the hardware performance monitoring units on modern CPUs to detect malware as a computational anomaly. Several aspects of the detector construction have been explored, leading to detectors with high accuracy. In this paper, we explore the question of how well evasive malware can avoid detection by HMDs. We show that existing HMDs can be effectively reverse-engineered and subsequently evaded, allowing malware to hide from detection without substantially slowing it down (which is important for certain types of malware). This result demonstrates that the current generation of HMDs can be easily defeated by evasive malware. Next, we explore how well a detector can evolve if it is exposed to this evasive malware during training. We show that simple detectors, such as logistic regression, cannot detect the evasive malware even with retraining. More sophisticated detectors can be retrained to detect evasive malware, but the retrained detectors can be reverse-engineered and evaded again. To address these limitations, we propose a new type of Resilient HMDs (RHMDs) that stochastically switch between different detectors. These detectors can be shown to be provably more difficult to reverse engineer based on recent results in probably approximately correct (PAC) learnability theory. We show that such detectors are indeed resilient to both reverse engineering and evasion, and that the resilience increases with the number and diversity of the individual detectors. Our results demonstrate that these HMDs offer an effective defense against evasive malware at low additional complexity.
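A minimal sketch of the stochastic-switching idea behind RHMDs, assuming toy features and off-the-shelf scikit-learn base detectors in place of the hardware feature set and models used in the paper.

```python
# Sketch of the Resilient HMD idea: stochastically switch among diverse
# detectors at prediction time so an adversary cannot reliably reverse-engineer
# a single decision boundary. The toy features and the choice of base detectors
# are placeholders, not the hardware feature set or models used in the paper.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # stand-in for HPC-derived features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # stand-in for benign/malware labels

detectors = [
    LogisticRegression(max_iter=1000).fit(X, y),
    DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y),
]

def rhmd_predict(sample):
    """Pick one of the base detectors uniformly at random for each decision."""
    detector = random.choice(detectors)
    return int(detector.predict(sample.reshape(1, -1))[0])

print(rhmd_predict(X[0]))
```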
Information leakage is one of the most important security issues on the current Internet. In Named-Data Networking (NDN), Interest names introduce novel vulnerabilities that can be exploited. By installing malware, an attacker can use Interest names to encode critical information (steganographically embedded) and leak it out of the network by generating anomalous Interest traffic. This security threat based on Interest names does not exist in IP networks, and it is essential to address it in order to secure the NDN architecture. This paper performs a risk analysis of information leakage in NDN. We first describe the vulnerabilities associated with Interest names and, as countermeasures, we propose a name-based filter using search engine information and another filter using a one-class Support Vector Machine (SVM). We collected URLs from the data repository provided by Common Crawl and evaluated the performance of our per-packet filters. We show that our filters can drastically choke the throughput of information leakage, which makes it easier to detect anomalous Interest traffic. It is therefore possible to mitigate information leakage in an NDN network, which is a strong incentive for future deployment of this architecture at Internet scale.
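A minimal sketch of the one-class SVM filter idea, assuming three simple per-name features (length, character entropy, digit ratio); the actual feature set and training corpus used in the paper differ.

```python
# Sketch of the one-class SVM filter idea: train on features of legitimate
# Interest names (e.g., derived from Common Crawl URLs) and flag names whose
# features look anomalous, such as long high-entropy components typical of
# steganographic encoding. The three features are illustrative assumptions.
import math
from collections import Counter
import numpy as np
from sklearn.svm import OneClassSVM

def name_features(name: str):
    counts = Counter(name)
    entropy = -sum((c / len(name)) * math.log2(c / len(name)) for c in counts.values())
    digit_ratio = sum(ch.isdigit() for ch in name) / len(name)
    return [len(name), entropy, digit_ratio]

legitimate = ["/example.com/index.html", "/news.org/2017/article", "/video.net/watch/cats"]
suspicious = ["/exfil.io/aGVsbG8gd29ybGQ0NzM2NTQ3NjU0MzI1NDMyNTQ=", "/c2.net/9f86d081884c7d659a2f"]

clf = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(
    np.array([name_features(n) for n in legitimate])
)
print(clf.predict(np.array([name_features(n) for n in suspicious])))  # -1 marks anomalies
```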
In this paper, we present a decentralized nonlinear robust controller to enhance the transient stability margin of synchronous generators. Although the trend in power system control is shifting towards centralized or distributed controller approaches, the remote-data dependency of these schemes fuels cyber-physical security issues. Since excessive delay or loss of remote data severely affects the operation of such controllers, the designed controller emerges as an alternative for the stabilization of Smart Grids when remote data are unavailable and in the presence of plant parametric uncertainties. The proposed controller actuates distributed storage systems, such as flywheels, in order to reduce stabilization time, and it implements a novel input time-delay compensation technique. Lyapunov stability analysis proves that all the tracking error signals are globally uniformly ultimately bounded. Furthermore, the simulation results demonstrate that the proposed controller outperforms traditional local power system controllers such as Power System Stabilizers.
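For context, the boundedness guarantee stated above corresponds to the standard notion recalled below; the specific Lyapunov function and bound constants are not given in the abstract, so only the generic definition of a globally uniformly ultimately bounded tracking error e(t) is shown.

```latex
% Generic definition of globally uniformly ultimately bounded (GUUB) tracking
% error; the paper's particular Lyapunov function and constants are not
% reproduced here.
\[
  \exists\, b > 0:\ \forall\, a > 0,\ \exists\, T(a,b) \ge 0 \ \text{such that}\
  \|e(t_0)\| \le a \;\Longrightarrow\; \|e(t)\| \le b \quad \forall\, t \ge t_0 + T(a,b).
\]
```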
We present OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. Integrating OpenFace with inter-frame tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace.
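A simplified sketch of per-frame face denaturing, substituting an OpenCV Haar-cascade detector and an always-blur policy for OpenFace recognition, inter-frame tracking, and policy evaluation; it only illustrates the selective-blurring step.

```python
# Sketch of per-frame face denaturing. RTFace pairs OpenFace's recognition with
# inter-frame tracking to decide, per policy, which faces to blur; this
# simplified stand-in blurs every face found by an OpenCV Haar cascade and skips
# recognition, tracking, and policy evaluation entirely.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def denature_frame(frame, policy_allows=lambda face_roi: False):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        if not policy_allows(roi):                       # blur unless policy permits
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 30)
    return frame

cap = cv2.VideoCapture(0)                                # live video source
ok, frame = cap.read()
if ok:
    cv2.imwrite("denatured.jpg", denature_frame(frame))
cap.release()
```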
Cloud computing has become a widely used computing paradigm providing on-demand computing and storage capabilities based on a pay-as-you-go model. Recently, many organizations, especially in the field of big data, have been adopting the cloud model to perform data analytics by leasing powerful Virtual Machines (VMs). VMs can be attractive targets for attackers, as well as for untrusted cloud providers, who aim to gain unauthorized access to business-critical data. The obvious security solution is to perform data analytics on encrypted data using cryptographic keys, such as those of the Advanced Encryption Standard (AES). However, it is very easy to obtain AES cryptographic keys from the VM's Random Access Memory (RAM). In this paper, we present a novel key-scattering (KS) approach to protect the cryptographic keys while encrypting/decrypting data. Our solution is highly portable and interoperable; thus, it could be integrated within today's existing cloud architectures without the need for further modifications. The feasibility of the approach has been proven by implementing a functioning prototype. The evaluation results show that our approach is substantially more resilient to brute-force attacks and key extraction tools than the standard AES algorithm, with acceptable execution time.
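The KS scheme itself is not detailed in the abstract; the sketch below illustrates the general idea of never keeping the whole AES key contiguous in RAM by splitting it into random XOR shares and recombining them only transiently, which is an assumption about the flavor of the approach rather than its actual design.

```python
# Illustrative sketch of a key-scattering flavor of protection: the AES key is
# split into random XOR shares kept in separate buffers and recombined only
# transiently per operation. Share count and reconstruction strategy are
# assumptions, not the KS scheme described in the paper.
import secrets

def bytes_xor_all(chunks, length):
    out = bytearray(length)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def scatter_key(key: bytes, n_shares: int = 4):
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = bytes(b ^ x for b, x in zip(key, bytes_xor_all(shares, len(key))))
    return shares + [last]

def reassemble(shares):
    return bytes_xor_all(shares, len(shares[0]))

key = secrets.token_bytes(16)              # 128-bit AES key
shares = scatter_key(key)                  # only the shares stay resident
assert reassemble(shares) == key           # recombined just-in-time, then discarded
print("key reconstructed only transiently")
```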
Software Defined Networking (SDN) presents a unique opportunity to manage and orchestrate cloud networks. Educational institutions, like many other industries, face a lot of security threats. We have established an SDN-enabled Demilitarized Zone (DMZ), a Science DMZ, to serve as a testbed for securing the ASU Internet2 environment. The Science DMZ allows researchers to conduct in-depth analysis of security attacks and take necessary countermeasures using an SDN-based command and control (C&C) center. Demo URL: https://www.youtube.com/watch?v=8yo2lTNV3r4
The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there have been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to detect immediately, as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts. SCPKI is an alternative PKI system based on a decentralised and transparent design, using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as a company name or domain name), as an alternative to the centralised certificate authority identity verification model.
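A toy model of the web-of-trust attribute scheme: entities publish fine-grained attributes, others vouch for them, and a verifier trusts an attribute vouched for by one of its trust anchors; the real system records these in an Ethereum smart contract, whereas this in-memory log is only for illustration.

```python
# Toy model of the web-of-trust idea behind SCPKI: entities publish fine-grained
# identity attributes, other entities publish signatures vouching for them, and
# a verifier trusts an attribute if it is vouched for by someone it already
# trusts. The real system stores these records in an Ethereum smart contract;
# this in-memory append-only log is a simplification for illustration.
attributes = []   # (attribute_id, owner, name, value)
signatures = []   # (signer, attribute_id)

def publish_attribute(owner, name, value):
    attributes.append((len(attributes), owner, name, value))
    return len(attributes) - 1

def vouch(signer, attribute_id):
    signatures.append((signer, attribute_id))

def is_trusted(attribute_id, trust_anchors):
    return any(signer in trust_anchors for signer, aid in signatures if aid == attribute_id)

aid = publish_attribute("acme-inc", "domain", "acme.example")
vouch("audit-authority", aid)
print(is_trusted(aid, trust_anchors={"audit-authority"}))   # True
```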
Lehigh University has set a goal to implement System Center Configuration Manager by the end of 2017. This project is being spearheaded by one of our Senior Computing Consultants, who has researched and been trained in the Microsoft Virtualization stack. We will discuss our roadmaps, results from our proof-of-concept environments, and the discussions driving this project.
The unauthorized access or theft of sensitive, personal information is becoming a weekly news item. The illegal dissemination of proprietary information to media outlets or competitors costs industry untold millions in remediation costs and losses every year. The 2013 data breach at Target, Inc. that impacted 70 million customers is estimated to have cost upwards of 1 billion dollars. Stolen information is also being used to damage political figures and adversely influence foreign and domestic policy. In this paper, we offer some techniques for better understanding the health and security of our networks. This understanding will help professionals to identify network behavior, anomalies, and other latent, systematic issues in their networks. Software-Defined Networks (SDN) enable the collection of network operation and configuration metrics that are not readily available, if available at all, in traditional networks. SDN also enables the development of software protocols and tools that increase visibility into the network. By accumulating and analyzing a time series data repository (TSDR) of SDN and traditional metrics, along with data gathered from our tools, we can establish behavior and security patterns for SDN and SDN hybrid networks. Our research helps provide a framework for a range of techniques for administrators and automated system protection services that give insight into the health and security of the network. To narrow the scope of our research, this paper focuses on a subset of those techniques as they apply to the confidence analysis of a specific network path at the time of use or inspection. This confidence analysis allows users, administrators, and autonomous systems to decide whether a network path is secure enough for sending their sensitive information. Our testing shows that malicious activity can be identified quickly as a single-metric indicator and consistently within a multi-factor indicator analysis. Our research includes the implementation of these techniques in a network path confidence analysis service, called Confidence Assessment as a Service. Using our behavior and security patterns, this service evaluates a specific network path and provides a confidence score for that path before, during, and after the transmission of sensitive data. Our research and tools give administrators and autonomous systems a much better understanding of the internal operation and configuration of their networks. Our framework will also provide other services that will focus on detecting latent, systemic network problems. By providing a better understanding of network configuration and operation, our research enables a more secure and dependable network and helps prevent the theft of information by malicious actors.
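A hypothetical sketch of path confidence scoring: per-hop indicators are combined into a hop confidence, the path score is taken from the weakest hop, and the score is compared against a policy threshold before sensitive data is sent; the indicator names, weights, and threshold are illustrative assumptions, not the metrics of the Confidence Assessment as a Service prototype.

```python
# Hypothetical sketch of path confidence scoring: combine several per-hop
# indicators gathered from SDN counters and tools into a single confidence
# score for a candidate path, then compare it against a policy threshold
# before sending sensitive data. All names, weights, and the threshold are
# illustrative assumptions.

WEIGHTS = {"config_drift": 0.4, "anomalous_flows": 0.4, "packet_loss": 0.2}

def hop_confidence(indicators: dict) -> float:
    """Each indicator is normalized to [0, 1], where 1 means most suspicious."""
    penalty = sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)
    return 1.0 - penalty

def path_confidence(hops):
    """A path is only as trustworthy as its weakest hop."""
    return min(hop_confidence(h) for h in hops)

path = [
    {"config_drift": 0.0, "anomalous_flows": 0.1, "packet_loss": 0.05},
    {"config_drift": 0.2, "anomalous_flows": 0.6, "packet_loss": 0.10},
]
score = path_confidence(path)
print(f"confidence {score:.2f} -> {'OK to send' if score >= 0.7 else 'withhold sensitive data'}")
```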



