Biblio
Computing systems today have a large number of security configuration settings that enforce security properties. However, vulnerabilities and incorrect configurations increase the potential for attacks. Provable verification and simulation tools have been introduced to eliminate configuration conflicts and weaknesses, which can increase system robustness against attacks. Most of these tools require specialized knowledge of formal methods and precise specification of requirements in special languages, in addition to their excessive need for computing resources. Video games have been utilized by researchers to make educational software more attractive and engaging. Publishing these games for crowdsourcing can also stimulate competition between players and increase the games' educational value. In this paper, we introduce a game interface, called NetMaze, that represents the network configuration verification problem as a video game and allows for attack analysis. We aim to make security analysis and hardening usable and accurately achievable, using the power of video games and the wisdom of crowdsourcing. Players can easily discover weaknesses in network configuration and investigate new attack scenarios. In addition, the gameplay scenarios can be used to analyze and learn attack attribution considering human factors. In this paper, we present a provable mapping from the network configuration to 3D game objects.
This paper presents a unified approach for the detection of network anomalies. Current state-of-the-art methods are often able to detect one class of anomalies at the cost of others. Our approach is based on using a Linear Dynamical System (LDS) to model network traffic. An LDS is equivalent to a Hidden Markov Model (HMM) for continuous-valued data and can be computed using incremental methods to manage the high throughput (volume) and velocity that characterize Big Data. Detailed experiments on synthetic and real network traces show a significant improvement in detection capability over competing approaches. In the process, we also address the issue of robustness of network anomaly detection systems in a principled fashion.
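As an illustration of the LDS idea only (the paper's exact model and estimator are not reproduced here), the sketch below scores each traffic observation by its Kalman-filter innovation under an assumed, already-fitted LDS with matrices A, C, Q, R, and flags observations whose Mahalanobis-scaled innovation exceeds a threshold:

```python
import numpy as np

def lds_anomaly_scores(y, A, C, Q, R, threshold=3.0):
    """Score observations by Kalman-filter innovations under an LDS.

    y: (T, m) observations; A: (n, n) state transition; C: (m, n)
    observation matrix; Q, R: process/observation noise covariances.
    Returns per-step Mahalanobis scores and a boolean anomaly mask.
    """
    n = A.shape[0]
    x = np.zeros(n)            # state estimate
    P = np.eye(n)              # state covariance
    scores = np.empty(len(y))
    for t, yt in enumerate(y):
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation (one-step prediction error) and its covariance
        v = yt - C @ x
        S = C @ P @ C.T + R
        scores[t] = float(np.sqrt(v @ np.linalg.solve(S, v)))
        # Update
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ v
        P = (np.eye(n) - K @ C) @ P
    return scores, scores > threshold
```

In a streaming setting the parameters A, C, Q, R would themselves be estimated incrementally (e.g., via EM or subspace identification), in line with the abstract's emphasis on high-throughput data.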
Cloud computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to faults and attacks. Securing cloud systems is extremely complex due to the many interdependent tasks such as application-layer firewalls, alert monitoring and analysis, source code analysis, and user identity management. It is strongly believed that we cannot build cloud services that are immune to attacks, so resiliency to attacks is becoming an important approach to addressing cyber attacks and mitigating their impacts; the demand for resiliency is even higher for mission-critical systems. In this paper, we present a methodology to develop an Autonomic Resilient Cloud Management (ARCM) framework based on moving target defense, cloud service Behavior Obfuscation (BO), and autonomic computing. By continuously and randomly changing the cloud execution environments and platform types, it becomes difficult, especially for insider attackers, to figure out the current execution environment and its existing vulnerabilities, thus allowing the system to evade attacks. We show how to apply ARCM to one class of applications, Map/Reduce, and evaluate its performance and overhead.
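A toy sketch of the behavior-obfuscation idea follows; the variant pool and rotation policy are hypothetical illustrations, not the paper's implementation. The point is only that each execution phase runs on an unpredictably re-drawn environment:

```python
import random

# Hypothetical pool of functionally equivalent execution variants
# (runtime, platform); placeholder names, not from the paper.
VARIANTS = [("c++", "node-a"), ("java", "node-b"), ("python", "node-c")]

def next_environment(history):
    """Pick the next variant uniformly at random, avoiding an immediate
    repeat so two consecutive phases never share an environment."""
    candidates = [v for v in VARIANTS if not history or v != history[-1]]
    choice = random.choice(candidates)
    history.append(choice)
    return choice

# Each phase of a Map/Reduce job would run on a freshly drawn variant.
history = []
for phase in range(4):
    print(phase, next_environment(history))
```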
This work presents a novel method to estimate naturally expressed emotions in speech through binary acoustic modeling. Standard acoustic features are mapped to a binary-value representation, and a support vector regression model is used to correlate them with three continuous emotional dimensions. Three different sets of speech features, two based on spectral parameters and one on prosody, are compared on the VAM corpus, a set of spontaneous dialogues from a German TV talk show. The regression analysis, in terms of correlation coefficient and mean absolute error, shows that binary key modeling is able to successfully capture speaker emotion characteristics. The proposed algorithm obtains results comparable to those reported in the literature while relying on a much smaller set of acoustic descriptors. Furthermore, we also report preliminary results based on the combination of the binary models, which brings further performance improvements.
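A minimal sketch of the pipeline shape (the paper's actual binary-key construction and feature sets are not specified here; the per-dimension median thresholds and synthetic data below are placeholders): real-valued acoustic descriptors are binarized, then fed to a support vector regressor for one continuous emotion dimension.

```python
import numpy as np
from sklearn.svm import SVR

def binarize(features, thresholds):
    """Map real-valued features (N, d) to a {0, 1} representation."""
    return (features > thresholds).astype(float)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # placeholder acoustic descriptors
y = rng.uniform(-1.0, 1.0, size=200)  # placeholder labels, one dimension

thresholds = np.median(X, axis=0)     # stand-in "binary key" per dimension
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(binarize(X, thresholds), y)
predictions = model.predict(binarize(X, thresholds))
```

In the actual setting one such regressor would be trained per emotional dimension, and performance reported as correlation coefficient and mean absolute error against annotated labels.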
Analysing cyber attack environments yields tremendous insight into adversary behavior, strategies, and capabilities. Designing cyber-intensive games that promote offensive and defensive activities to capture or protect assets assists in the understanding of cyber situational awareness. There exist tangible metrics for characterizing games such as CTFs that resolve the intensity and aggression of a cyber attack. This paper synthesizes the characteristics of InCTF (India CTF) and provides an understanding of the types of vulnerabilities that have the potential to cause significant damage by trained hackers. The two metrics, i.e., toxicity and effectiveness, and their relation to the final performance of each team are detailed in this context.
In order to deal with the shortcomings of security management systems, this work proposes a methodology based on the agent paradigm for cybersecurity risk management. In this approach, a system is decomposed into agents that may be used to attain goals established by attackers. Threats to the business are realized through the attacker's goals in service and deployment agents. To support proactive behavior, sensors linked to security mechanisms are analyzed in accordance with a model for Situational Awareness (SA) [4].
Cyber scanning refers to the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other, the task of containing cyber scanning poses serious issues and challenges. Furthermore, there has recently been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns: scanning techniques that are highly distributed, possess composite stealth capabilities, and exhibit high coordination, rendering almost all current detection techniques infeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies, and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.
Specifics of an alias-free digitizer application for compressed digitizing and recording of wideband signals are considered. Signal sampling in this case is performed on the basis of picosecond-resolution event timing: the digitizer is actually a subsystem of Event Timer A033-ET, and the specific events that are detected and then timed are the signal and reference sine-wave crossings. The approach used to develop this subsystem is described, and some results of experimental studies are given.
This paper describes the development of subsymbolic ACT-R models for the Concentration game. Performance data is taken from an experiment in which participants played the game under two conditions: minimizing the number of mismatches/turns during a game, and minimizing the time to complete a game. Conflict resolution and parameter tuning are used to implement an accuracy model and a speed model that capture the differences for the two conditions. Visual attention drives exploration of the game board in the models. Modeling results are generally consistent with human performance, though some systematic differences can be seen. Modeling decisions, model limitations, and open issues are discussed.
We introduce a framework for controlling the energy provided or absorbed by distributed energy resources (DERs) in power distribution networks. In this framework, there is a set of agents referred to as aggregators that interact with the wholesale electricity market and, through some market-clearing mechanism, are requested (and will be compensated) to provide (or absorb) a certain amount of active (or reactive) power over some period of time. In order to fulfill the request, each aggregator interacts with a set of DERs and offers them some price per unit of active (or reactive) power they provide (or absorb); the objective is for the aggregator to design a pricing strategy that incentivizes the DERs to change their active (or reactive) power consumption (or production) so that they collectively provide the amount the aggregator has been asked for. In order to make a decision, each DER uses the price information provided by the aggregator and some estimate of the average active (or reactive) power that neighboring DERs can provide, computed through an exchange of information among them; this exchange is described by a connected undirected graph. The focus is on the DER strategic decision-making process, which we cast as a game. In this context, we provide sufficient conditions on the aggregator's pricing strategy under which this game has a unique Nash equilibrium. Then, we propose a distributed iterative algorithm, adhering to the graph that describes the exchange of information between DERs, that allows them to seek this Nash equilibrium. We illustrate our results through several numerical simulations.
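The following is an illustrative sketch only (the cost function, step size, and update rule are assumptions, not the paper's algorithm): each DER takes a projected pseudo-gradient step on a local quadratic cost coupled to the network-average provision, while a dynamic average-consensus iteration over a doubly stochastic weight matrix W of the communication graph tracks that average.

```python
import numpy as np

def seek_equilibrium(W, c, price, p_max, steps=500, lr=0.05):
    """W: (N, N) doubly stochastic consensus weights of a connected,
    undirected graph; c: (N,) assumed quadratic cost coefficients;
    price: scalar offered by the aggregator; p_max: (N,) capacities."""
    N = len(c)
    p = np.zeros(N)   # active power each DER decides to provide
    z = p.copy()      # dynamic average-consensus estimate of mean(p)
    for _ in range(steps):
        grad = 2.0 * c * p - price + z        # assumed coupled marginal cost
        p_new = np.clip(p - lr * grad, 0.0, p_max)
        z = W @ z + (p_new - p)               # track the moving average
        p = p_new
    return p

# Example: 4 DERs on a ring with uniform (Metropolis-style) weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
p = seek_equilibrium(W, c=np.array([1.0, 1.2, 0.8, 1.5]),
                     price=2.0, p_max=np.full(4, 1.0))
```

Initializing z at p preserves the invariant mean(z) = mean(p) across iterations, which is what lets each DER act on a purely local estimate of the collective provision.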
Presented as part of the DIMACS Workshop on Energy Infrastructure: Designing for Stability and Resilience, Rutgers University, Piscataway, NJ, February 20-22, 2013
The goal of this roadmap paper is to summarize the state of the art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
The simplest and purest practical object-oriented language designs today are seen in dynamically-typed languages, such as Smalltalk and Self. Static types, however, have potential benefits for productivity, security, and reasoning about programs. In this paper, we describe the design of Wyvern, a statically typed, pure object-oriented language that attempts to retain much of the simplicity and expressiveness of these iconic designs.

Our goals lead us to combine pure object-oriented and functional abstractions in a simple, typed setting. We present a foundational object-based language that we believe to be as close as one can get to a simple typed lambda calculus while keeping object-orientation. We show how this foundational language can be translated to the typed lambda calculus via standard encodings. We then define a simple extension to this language that introduces classes and show that classes are no more than sugar for the foundational object-based language. Our future intention is to demonstrate that modules and other object-oriented features can be added to our language as no more than such syntactic extensions while keeping the object-oriented core as pure as possible.

The design of Wyvern closely follows both historical and modern ideas about the essence of object-orientation, suggesting a new way to think about a minimal, practical, typed core language for objects.
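As a rough, untyped analogue of the classes-as-sugar idea (the paper works in a statically typed object calculus; Python is used here only to make the encoding concrete): an object is a record of closures over hidden state, and a "class" is merely a function that builds such records.

```python
def make_counter(start=0):
    """Plays the role of a class: nothing but a function that returns
    an object encoded as a record (dict) of closures over local state."""
    n = start
    def incr():
        nonlocal n
        n += 1
    def value():
        return n
    return {"incr": incr, "value": value}

c = make_counter()
c["incr"]()
c["incr"]()
assert c["value"]() == 2   # the "instance" behaves like an object
```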