Biblio

Found 625 results

Filters: Keyword is Measurement
2017-02-27
Mohsen, R., Pinto, A. M..  2015.  Algorithmic information theory for obfuscation security. 2015 12th International Joint Conference on e-Business and Telecommunications (ICETE). 04:76–87.

The main problem in designing effective code obfuscation is to guarantee security. State-of-the-art obfuscation techniques rely on an unproven concept of security and are therefore not regarded as provably secure. In this paper, we undertake a theoretical investigation of code obfuscation security based on Kolmogorov complexity and algorithmic mutual information. We introduce a new definition of code obfuscation that requires the algorithmic mutual information between a code and its obfuscated version to be minimal, allowing a controlled amount of information to be leaked to an adversary. We argue that our definition avoids the impossibility results of Barak et al. and is more advantageous than the indistinguishability-based definition of obfuscation, in the sense that it is more intuitive and is algorithmic rather than probabilistic.
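
As a rough illustration of the flavor of such a definition (the notation below is a plausible reconstruction, not the paper's exact formulation): with K(x) the prefix Kolmogorov complexity and K(x | y) its conditional version, an obfuscator O that leaks at most \lambda bits of algorithmic information satisfies, up to logarithmic terms,

\[
I_K\bigl(p : \mathcal{O}(p)\bigr) \;=\; K\bigl(\mathcal{O}(p)\bigr) \;-\; K\bigl(\mathcal{O}(p) \mid p\bigr) \;\le\; \lambda .
\]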

2017-02-21
J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892-903.

The Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we have a new architecture that can motivate Internet service providers (ISPs) to deploy and evolve it, we need to address two issues: 1) understand the current status better, by appropriately evaluating the existing Internet; and 2) find out how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of a future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many “autonomous-system-level” observations and key lessons for new Internet architectures. In particular, the evaluation results reveal the imbalance underlying the interdomain routing system and show how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies for future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and, to some extent, for clean-slate architectures.
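
The paper's metric definitions are not reproduced in this abstract; as one generic, hedged illustration of how routing imbalance can be quantified, the following sketch computes a Gini coefficient over hypothetical per-AS traffic shares (all values are invented):

import numpy as np

def gini(shares):
    """Gini coefficient of non-negative values: 0 means perfectly
    balanced, values near 1 mean highly concentrated (imbalanced)."""
    x = np.sort(np.asarray(shares, dtype=float))
    n = x.size
    # Standard closed form over sorted values
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical shares of interdomain transit traffic carried by five ASes
print(gini([0.70, 0.15, 0.10, 0.03, 0.02]))  # ~0.59: strongly imbalanced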

2017-02-14
G. G. Granadillo, J. Garcia-Alfaro, H. Debar, C. Ponchel, L. R. Martin.  2015.  "Considering technical and financial impact in the selection of security countermeasures against Advanced Persistent Threats (APTs)". 2015 7th International Conference on New Technologies, Mobility and Security (NTMS). :1-6.

This paper presents a model to evaluate and select security countermeasures from a pool of candidates. The model performs industrial evaluation and simulation of the financial and technical impact associated with security countermeasures. The financial impact approach uses the Return On Response Investment (RORI) index to compare the expected impact of the attack when no response is enacted against the impact after applying security countermeasures. The technical impact approach evaluates the protection level against a threat in terms of confidentiality, integrity, and availability. We provide a use case on malware attacks that shows the applicability of our model in selecting the best countermeasure against an Advanced Persistent Threat.
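
For illustration, a minimal sketch of a RORI-style selection step, following the general form of the index in the related literature (the parameter names and all numbers here are assumptions, not the paper's case study):

def rori(ale, rm, arc, aiv):
    """Return On Response Investment, in percent.
    ale: annual loss expectancy with no response enacted
    rm : risk mitigation level of the countermeasure (0..1)
    arc: annual response cost of the countermeasure
    aiv: annual infrastructure value"""
    return 100.0 * (ale * rm - arc) / (arc + aiv)

# Hypothetical candidates: (name, mitigation level, response cost)
candidates = [("patch", 0.8, 5000), ("ids-rule", 0.5, 1000), ("isolate-host", 0.9, 20000)]
best = max(candidates, key=lambda c: rori(ale=50000, rm=c[1], arc=c[2], aiv=100000))
print(best[0])  # "patch" maximizes RORI under these made-up figures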

2016-07-13
Giulia Fanti, University of Illinois at Urbana-Champaign, Peter Kairouz, University of Illinois at Urbana-Champaign, Sewoong Oh, University of Illinois at Urbana-Champaign, Kannan Ramchandran, University of California, Berkeley, Pramod Viswanath, University of Illinois at Urbana-Champaign.  2016.  Rumor Source Obfuscation on Irregular Trees. ACM SIGMETRICS.

Anonymous messaging applications have recently gained popularity as a means for sharing opinions without fear of judgment or repercussion. These messages propagate anonymously over a network, typically defined by social connections or physical proximity. However, recent advances in rumor source detection show that the source of such an anonymous message can be inferred by certain statistical inference attacks. Adaptive diffusion was recently proposed as a solution that achieves optimal source obfuscation over regular trees. However, in real social networks, the degrees differ from node to node, and adaptive diffusion can be significantly sub-optimal. This gap increases as the degrees become more irregular.

In order to quantify this gap, we model the underlying network as coming from standard branching processes with i.i.d. degree distributions. Building upon the analysis techniques from branching processes, we give an analytical characterization of the dependence of the probability of detection achieved by adaptive diffusion on the degree distribution. Further, this analysis provides a key insight: passing a rumor to a friend who has many friends makes the source more ambiguous. This leads to a new family of protocols that we call Preferential Attachment Adaptive Diffusion (PAAD). When messages are propagated according to PAAD, we give both the MAP estimator for finding the source and also an analysis of the probability of detection achieved by this adversary. The analytical results are not directly comparable, since the adversary's observed information has a different distribution under adaptive diffusion than under PAAD. Instead, we present results from numerical experiments that suggest that PAAD achieves a lower probability of detection, at the cost of increased communication for coordination.
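
The full PAAD protocol is given in the paper; the following is only a minimal sketch of the core step named above, passing the rumor to a neighbor with probability proportional to that neighbor's degree (the toy graph is invented):

import random

def next_relay(graph, current, came_from=None):
    """Choose the next message holder among the current node's
    neighbors, weighted by neighbor degree (preferential attachment)."""
    neighbors = [v for v in graph[current] if v != came_from]
    weights = [len(graph[v]) for v in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

# Node 2 is a hub, so it is the most likely next relay from node 0
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4, 5], 3: [2], 4: [2], 5: [2]}
print(next_relay(graph, current=0))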

2015-05-06
Kebin Liu, Qiang Ma, Wei Gong, Xin Miao, Yunhao Liu.  2014.  Self-Diagnosis for Detecting System Failures in Large-Scale Wireless Sensor Networks. Wireless Communications, IEEE Transactions on. 13:5535-5545.

Existing approaches to diagnosing sensor networks are generally sink-based: they rely on actively pulling state information from sensor nodes so as to conduct centralized analysis. First, sink-based tools incur huge communication overhead on traffic-sensitive sensor networks. Second, due to unreliable wireless communications, the sink often obtains incomplete and suspicious information, leading to inaccurate judgments. Even worse, it is always more difficult to obtain state information from problematic or critical regions. To address these issues, we present a novel self-diagnosis approach that enables each individual sensor to join the fault decision process. We design a series of fault detectors through which multiple nodes can cooperate with each other in a diagnosis task. Fault detectors encode the diagnosis process as state transitions. Each sensor can participate in a diagnosis by moving the detector's current state to a new state based on local evidence and then passing the detector on to other nodes. Once sufficient evidence is gathered, the fault detector reaches the Accept state and outputs a final diagnosis report. We examine the performance of our self-diagnosis tool, called TinyD2, on a 100-node indoor testbed and conduct field studies in the GreenOrbs system, an operational sensor network with 330 outdoor nodes.
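
For illustration, a minimal sketch of such a state-transition detector (the states, evidence encoding and threshold are invented, not TinyD2's actual design):

class FaultDetector:
    """A tiny detector passed from node to node: each node advances
    the state using its local evidence; once enough corroborating
    evidence accumulates, the detector reaches Accept and a final
    diagnosis report can be emitted."""
    ACCEPT_THRESHOLD = 3

    def __init__(self, suspect):
        self.suspect = suspect
        self.evidence = 0
        self.state = "Pending"

    def step(self, local_symptom_observed):
        if self.state != "Accept" and local_symptom_observed:
            self.evidence += 1
            if self.evidence >= self.ACCEPT_THRESHOLD:
                self.state = "Accept"
        return self.state

detector = FaultDetector(suspect="node-17")
for observed in [True, False, True, True]:  # evidence along the relay path
    detector.step(observed)
print(detector.state)  # Accept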

Hoos, E..  2014.  Design method for developing a Mobile Engineering-Application Middleware (MEAM). Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on. :176-177.

Mobile apps running on smartphones and tablet PCs offer a new possibility to enhance the work of engineers because they provide easy-to-use, touchscreen-based handling and can be used anytime and anywhere. Introducing mobile apps in the engineering domain is difficult because the IT environment is heterogeneous and engineering-specific challenges arise in app development, e.g., large amounts of data and high security requirements. There is a need for an engineering-specific middleware to facilitate and standardize app development. However, neither such a middleware nor a holistic set of requirements for its development yet exists. Therefore, we propose a design method which offers a systematic procedure for developing a Mobile Engineering-Application Middleware.

Hammi, B., Khatoun, R., Doyen, G..  2014.  A Factorial Space for a System-Based Detection of Botcloud Activity. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Today, beyond legitimate usage, the numerous advantages of cloud computing are exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue since it strongly increases the power of distributed massive attacks while involving the responsibility of cloud service providers that do not own appropriate solutions. In this paper, we present an original approach that enables source-based detection of UDP-flood DDoS attacks based on a distributed system behavior analysis. Based on a principal component analysis, our contribution consists in: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud activity, and (3) using this factorial space to enable botcloud detection among several legitimate activities.

Badis, H., Doyen, G., Khatoun, R..  2014.  Understanding botclouds from a system perspective: A principal component analysis. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-9.

Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages are mainly used to support legitimate activity, it is now exploited for a use it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack support. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experimental campaign we performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered for state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP-flood and UDP-storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud as compared to other legitimate workloads.
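
For illustration, a hedged sketch of this kind of PCA-based, source-side detection (the metric set, data and threshold are placeholders; the papers' actual feature space is not reproduced):

import numpy as np

# Rows = observation windows, columns = system metrics
# (e.g., CPU, memory, disk I/O, outbound packet rate); values are synthetic.
rng = np.random.default_rng(0)
legit = rng.normal(loc=[30, 40, 10, 100], scale=5, size=(200, 4))

mean = legit.mean(axis=0)
_, _, vt = np.linalg.svd(legit - mean, full_matrices=False)
basis = vt[:2]  # principal subspace of legitimate workloads

def residual(sample):
    """Distance from the legitimate principal subspace; a botcloud-like
    workload should project poorly onto it."""
    c = np.asarray(sample, dtype=float) - mean
    return np.linalg.norm(c - (c @ basis.T) @ basis)

suspicious = [35, 42, 11, 900]  # abnormal outbound packet rate
print(residual(suspicious) > 10 * residual(legit[0]))  # True: flag it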

2015-05-05
Pirinen, R..  2014.  Studies of Integration Readiness Levels: Case Shared Maritime Situational Awareness System. Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint. :212-215.

The research question of this study is: how can Integration Readiness Level (IRL) metrics be understood and realized in the domain of border control information systems? The study addresses the IRL metrics and their definition, criteria, references, and questionnaires for the validation of border control information systems, in the case of the shared maritime situational awareness system. The study targets improvements in acceptance, operational validation, risk assessment, and the development of sharing mechanisms, the integration of information systems, and border control information interaction and collaboration concepts in the Finnish national and European border control domains.

Sunny, S., Pavithran, V., Achuthan, K..  2014.  Synthesizing perception based on analysis of cyber attack environments. Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on. :2027-2030.

Analysing cyber attack environments yields tremendous insight into adversary behavior, strategy, and capabilities. Designing cyber-intensive games that promote offensive and defensive activities to capture or protect assets assists in the understanding of cyber situational awareness. There exist tangible metrics for characterizing games such as CTFs to resolve the intensity and aggression of a cyber attack. This paper synthesizes the characteristics of InCTF (India CTF) and provides an understanding of the types of vulnerabilities that have the potential to cause significant damage by trained hackers. Two metrics, i.e., toxicity and effectiveness, and their relation to the final performance of each team are detailed in this context.

Falcon, R., Abielmona, R., Billings, S., Plachkov, A., Abbass, H..  2014.  Risk management with hard-soft data fusion in maritime domain awareness. Computational Intelligence for Security and Defense Applications (CISDA), 2014 Seventh IEEE Symposium on. :1-8.

Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) and soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.

Kotenko, I., Novikova, E..  2014.  Visualization of Security Metrics for Cyber Situation Awareness. Availability, Reliability and Security (ARES), 2014 Ninth International Conference on. :506-513.

One of the important directions of research in situational awareness is the implementation of visual analytics techniques, which can be efficiently applied when working with big security data in critical operational domains. The paper considers a visual analytics technique for displaying a set of security metrics used to assess overall network security status and evaluate the efficiency of protection mechanisms. The technique can assist in solving security tasks that are important for security information and event management (SIEM) systems. The suggested approach is suitable for displaying security metrics of large networks and supports historical analysis of the data. To demonstrate and evaluate the usefulness of the proposed technique, we implemented a use case corresponding to the Olympic Games scenario.

Manning, F.J., Mitropoulos, F.J..  2014.  Utilizing Attack Graphs to Measure the Efficacy of Security Frameworks across Multiple Applications. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4915-4920.

One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare the frameworks? To address these questions, we propose utilizing a technique of attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare that to the attack surface after implementation of a security framework, while simultaneously allowing for comparison between frameworks in the same environment or a single framework across multiple applications.
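
For illustration, a minimal sketch of this comparison on a toy attack graph, assuming the networkx library (the paper's graph model and scoring are not reproduced):

import networkx as nx

def attack_surface_score(edges):
    """Count distinct attack paths from the entry point to the asset;
    fewer paths after a mitigation means a smaller attack surface."""
    g = nx.DiGraph(edges)
    return len(list(nx.all_simple_paths(g, "entry", "asset")))

baseline = [("entry", "web"), ("web", "db"), ("db", "asset"),
            ("entry", "ftp"), ("ftp", "asset")]
hardened = [e for e in baseline if e != ("entry", "ftp")]  # framework closes the FTP vector

print(attack_surface_score(baseline), "->", attack_surface_score(hardened))  # 2 -> 1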

Hong, J.B., Dong Seong Kim.  2014.  Scalable Security Models for Assessing Effectiveness of Moving Target Defenses. Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on. :515-526.

Moving Target Defense (MTD) changes the attack surface of a system in order to confuse intruders and thwart attacks. Various MTD techniques have been developed to enhance the security of a networked system, but the effectiveness of these techniques is not well assessed. Security models (e.g., Attack Graphs (AGs)) provide formal methods of assessing security, but modeling MTD techniques in security models has not been studied. In this paper, we incorporate MTD techniques into security modeling and analysis using a scalable security model, namely Hierarchical Attack Representation Models (HARMs), to assess the effectiveness of the MTD techniques. In addition, we use importance measures (IMs) for scalable security analysis and for deploying the MTD techniques in an effective manner. A performance comparison between the HARM and the AG is given. We also compare the performance of using the IMs against an exhaustive search method in simulations.
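
HARMs and the paper's importance measures are not reproduced here; as a loose sketch of the underlying idea, one can rank hosts with a graph centrality standing in for an IM and deploy MTD only on the top-ranked hosts instead of searching exhaustively:

import networkx as nx

g = nx.Graph([("a", "b"), ("b", "c"), ("b", "d"), ("d", "target")])
im = nx.betweenness_centrality(g)  # stand-in importance measure (an assumption)
deploy_on = sorted(im, key=im.get, reverse=True)[:2]
print(deploy_on)  # ['b', 'd']: the two most "important" hosts get MTD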

Vaarandi, R., Pihelgas, M..  2014.  Using Security Logs for Collecting and Reporting Technical Security Metrics. Military Communications Conference (MILCOM), 2014 IEEE. :294-299.

During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
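
For illustration, a minimal sketch of deriving one technical security metric from an IDS alarm log (the line format and content are invented; the framework itself is not shown):

import re
from collections import Counter

LINE = re.compile(r"^(?P<date>\d{4}-\d{2}-\d{2}) \S+ ALERT (?P<sig>.+)$")

def alerts_per_day(lines):
    """Count IDS alerts per day, a simple technical security metric."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("date")] += 1
    return counts

log = ["2014-10-06 ids1 ALERT ET SCAN Nmap probe",
       "2014-10-06 ids1 ALERT ET POLICY outbound TFTP"]
print(alerts_per_day(log))  # Counter({'2014-10-06': 2})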

2015-05-04
Hessami, A..  2014.  A framework for characterisation of complex systems and system of systems. World Automation Congress (WAC), 2014. :346-354.

The objective of this paper is to explore the current notions of systems and “System of Systems” and establish the case for quantitative characterization of their structural, behavioural and contextual facets, which will pave the way for further formal development (mathematical formulation). This is partly driven by stakeholder needs and perspectives, and also responds to the necessity to attribute and communicate the properties of a system more succinctly, meaningfully and efficiently. The systematic quantitative characterization framework proposed will endeavour to extend the notion of emergence, allowing the definition of appropriate metrics in the context of a number of systems ontologies. The general characteristics and information content of the ontologies relevant to systems and systems of systems will be specified but not developed at this stage. The current supra-system, system and sub-system hierarchy is also explored for the formalisation of a standard notation, in order to depict a relative scale and order and avoid seemingly arbitrary attributions.

Barbosa de Carvalho, M., Pereira Esteves, R., da Cunha Rodrigues, G., Cassales Marquezan, C., Zambenedetti Granville, L., Rockenbach Tarouco, L.M..  2014.  Efficient configuration of monitoring slices for cloud platform administrators. Computers and Communication (ISCC), 2014 IEEE Symposium on. :1-7.

Monitoring is an important issue in cloud environments because it assures that acquired cloud slices meet the user's expectations. However, these environments are multi-tenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% in comparison to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.
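
The enhanced attribution policy is only summarized above; a minimal sketch of a load-aware dispatch of configuration tasks, under the assumption that each task has a known cost (names and numbers are invented):

import heapq

def assign_tasks(servers, tasks):
    """Send each monitoring-configuration task to the currently
    least-loaded monitoring server, using a min-heap of (load, name)."""
    heap = [(0, s) for s in servers]
    heapq.heapify(heap)
    placement = {}
    for task, cost in tasks:
        load, server = heapq.heappop(heap)
        placement[task] = server
        heapq.heappush(heap, (load + cost, server))
    return placement

print(assign_tasks(["mon1", "mon2"],
                   [("slice-a", 3), ("slice-b", 1), ("slice-c", 1)]))
# {'slice-a': 'mon1', 'slice-b': 'mon2', 'slice-c': 'mon2'}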

Pandey, A.K., Agrawal, C.P..  2014.  Analytical Network Process based model to estimate the quality of software components. Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on. :678-682.

Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends upon the selection of appropriate, easy-to-fit components according to the needs of the customer. It is a primary requirement to evaluate the quality of components before using them in the final software application system. All quality characteristics may not be of the same significance for a particular software application in a specific domain. Therefore, it is necessary to identify only those characteristics/sub-characteristics which may have higher importance than the others. The Analytical Network Process (ANP) is used to solve decision problems in which the attributes of decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics/sub-characteristics of quality and to estimate the numeric value of software quality.
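
The ANP model itself is only named in the abstract; for illustration, a sketch of the limit-supermatrix computation at the core of ANP, with an invented 3x3 dependency matrix over quality sub-characteristics:

import numpy as np

def limit_priorities(supermatrix, tol=1e-9):
    """Normalize the supermatrix to be column-stochastic, then square it
    repeatedly until it converges; any column of the limit matrix gives
    the global priorities of the elements."""
    w = np.asarray(supermatrix, dtype=float)
    w = w / w.sum(axis=0)
    while True:
        w2 = w @ w
        if np.abs(w2 - w).max() < tol:
            return w2[:, 0]
        w = w2

m = [[0.0, 0.5, 0.3],   # hypothetical mutual dependencies among
     [0.6, 0.0, 0.7],   # three quality sub-characteristics
     [0.4, 0.5, 0.0]]
print(limit_priorities(m))  # global weights, summing to 1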

Ward, J.R., Younis, M..  2014.  A Metric for Evaluating Base Station Anonymity in Acknowledgement-Based Wireless Sensor Networks. Military Communications Conference (MILCOM), 2014 IEEE. :216-221.

In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial automation and product tracking to intrusion detection at a hostile border. A typical WSN topology allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques to identify the BS based on network traffic flow even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Although a variety of countermeasures have been proposed to improve BS anonymity, those techniques are typically evaluated based on a WSN that does not employ acknowledgements. In this paper we propose an enhanced evidence theory metric called Acknowledgement-Aware Evidence Theory (AAET) that more accurately characterizes BS anonymity in WSNs employing acknowledgements. We demonstrate AAET's improved robustness to a variety of configurations through simulation.
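
AAET itself is not reproduced here; for illustration, a sketch of the evidence-theory step it builds on, Dempster's rule of combination for two mass assignments over the hypothesis set {BS, notBS} (the masses are invented):

def combine(m1, m2):
    """Dempster's rule: keys are frozensets of hypotheses; mass on the
    full set represents 'unknown'. Conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

theta = frozenset({"BS", "notBS"})
m_traffic = {frozenset({"BS"}): 0.6, theta: 0.4}  # evidence from traffic volume
m_acks = {frozenset({"BS"}): 0.5, theta: 0.5}     # evidence from ack patterns
print(combine(m_traffic, m_acks)[frozenset({"BS"})])  # 0.8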

2015-05-01
Albasrawi, M.N., Jarus, N., Joshi, K.A., Sarvestani, S.S..  2014.  Analysis of Reliability and Resilience for Smart Grids. Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual. :529-534.

Smart grids, where cyber infrastructure is used to make power distribution more dependable and efficient, are prime examples of modern infrastructure systems. The cyber infrastructure provides monitoring and decision support intended to increase the dependability and efficiency of the system. This comes at the cost of vulnerability to accidental failures and malicious attacks, due to the greater extent of virtual and physical interconnection. Any failure can propagate more quickly and extensively, and as such, the net result could be lowered reliability. In this paper, we describe metrics for assessment of two phases of smart grid operation: the duration before a failure occurs, and the recovery phase after an inevitable failure. The former is characterized by reliability, which we determine based on information about cascading failures. The latter is quantified using resilience, which can in turn facilitate comparison of recovery strategies. We illustrate the application of these metrics to a smart grid based on the IEEE 9-bus test system.
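
The paper's exact formulations are not reproduced here; as a hedged illustration, resilience is often quantified as the ratio of delivered to nominal performance over the recovery window, e.g.:

def resilience(performance, nominal):
    """Area under the actual performance curve divided by the area under
    the nominal (no-failure) curve over the same window; 1.0 = no loss."""
    return sum(performance) / (nominal * len(performance))

# Hypothetical load served (MW) per hour after a failure, nominal = 100 MW
print(resilience([40, 55, 70, 85, 100, 100], nominal=100))  # 0.75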

2015-04-30
Dondio, P., Longo, L..  2014.  Computing Trust as a Form of Presumptive Reasoning. Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on. 2:274-281.

This study describes and evaluates a novel trust model for a range of collaborative applications. The model assumes that humans routinely choose to trust their peers by relying on a few recurrent presumptions, which are domain independent and which form a recognisable trust expertise. We refer to these presumptions as trust schemes, a specialised version of Walton's argumentation schemes. Evidence for the efficacy of trust schemes is provided by a detailed experiment on an online community of 80,000 members. Results show that the proposed trust schemes are more effective in trust computation when they are combined together and when their plausibility in the selected context is considered.

Nikolai, J., Yong Wang.  2014.  Hypervisor-based cloud intrusion detection system. Computing, Networking and Communications (ICNC), 2014 International Conference on. :989-993.

Shared resources are an essential part of cloud computing. Virtualization and multi-tenancy provide a number of advantages for increasing resource utilization and for providing on-demand elasticity. However, these cloud features also raise many security concerns related to cloud computing resources. In this paper, we propose an architecture and approach for leveraging the virtualization technology at the core of cloud computing to perform intrusion detection using hypervisor performance metrics. Through the use of virtual machine performance metrics gathered from hypervisors, such as packets transmitted/received, block device read/write requests, and CPU utilization, we demonstrate and verify that suspicious activities can be profiled without detailed knowledge of the operating system running within the virtual machines. The proposed hypervisor-based cloud intrusion detection system does not require additional software to be installed in the virtual machines, and has many advantages compared to host-based and network-based intrusion detection systems, which it can complement.
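
For illustration, a minimal sketch of profiling one hypervisor-visible metric with a z-score test (the metric, history and threshold are invented; the paper's detection logic is not reproduced):

from statistics import mean, stdev

def is_suspicious(history, current, threshold=3.0):
    """Flag a VM whose current metric deviates more than `threshold`
    standard deviations from its own recent history."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(current - mu) / sigma > threshold

tx_packets_per_s = [120, 130, 110, 125, 118, 122]  # gathered at the hypervisor
print(is_suspicious(tx_packets_per_s, current=5000))  # True: possible flooding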

2015-03-03
Abbas, W., Koutsoukos, X..  2015.  Efficient Complete Coverage Through Heterogeneous Sensing Nodes. Wireless Communications Letters, IEEE. 4:14-17.

We investigate the coverage efficiency of a sensor network consisting of sensors with circular sensing footprints of different radii. The objective is to completely cover a region in an efficient manner through a controlled (or deterministic) deployment of such sensors. In particular, it is shown that when sensing nodes of two different radii are used for complete coverage, the coverage density is increased, and the sensing cost is significantly reduced as compared to the homogeneous case, in which all nodes have the same sensing radius. Configurations of heterogeneous disks of multiple radii to achieve efficient circle coverings are presented and analyzed.
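
For illustration, a sketch of the density bookkeeping behind such comparisons (the disk sets below are invented and not checked for actual coverage; the paper's constructions are not reproduced):

import math

def coverage_density(region_area, radii):
    """Total sensing-disk area divided by the region area; for a complete
    cover this is >= 1, and values closer to 1 are more efficient."""
    return sum(math.pi * r * r for r in radii) / region_area

# Hypothetical complete covers of a 100 m^2 region
homogeneous = [2.0] * 12                 # twelve radius-2 disks
heterogeneous = [2.0] * 6 + [1.0] * 10   # mixing two radii can lower density
print(coverage_density(100, homogeneous), coverage_density(100, heterogeneous))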