Biblio

Found 3153 results

Filters: First Letter Of Last Name is B
2015-05-05
Crisan, D., Birke, R., Barabash, K., Cohen, R., Gusat, M..  2014.  Datacenter Applications in Virtualized Networks: A Cross-Layer Performance Study. Selected Areas in Communications, IEEE Journal on. 32:77-87.

Datacenter-based cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of the next generation of multitenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, and little is yet known about their impact on system-level performance. Using query completion time as our primary performance metric, we evaluate the overlay network's impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for the overlay's benefits in manageability, security, and policing. Finally, we aim to assist datacenter architects by providing a detailed evaluation of the key overlay choices, made possible by our accurate cross-layer hybrid/mesoscale simulation platform.
 

2015-05-06
Bhotto, M.Z.A., Antoniou, A..  2014.  Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size. Circuits and Systems I: Regular Papers, IEEE Transactions on. 61:2048-2056.

A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancelation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.

2015-05-05
Mewara, B., Bairwa, S., Gajrani, J..  2014.  Browser's defenses against reflected cross-site scripting attacks. Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on. :662-667.

Due to the frequent use of online web applications for day-to-day activities, web applications have become a most attractive target for attackers. Cross-Site Scripting (XSS) is one of the most prominent web-based attacks; it can compromise the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions alone is not practical, as developers are not necessarily security-aware. Therefore, browser vendors have developed client-side filters to defend against these attacks. This paper shows that even the foremost XSS filters deployed by the latest versions of the most widely used web browsers do not provide adequate defense. We evaluate three browsers - Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27 - against reflected XSS attacks targeting different types of vulnerabilities. We find that none of them is able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution shields against a greater percentage of vulnerabilities than the other browsers. It would be more effective still if this functionality were integrated into the browser rather than enforced as an extension.
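As a rough illustration of the kind of check involved in such an evaluation, the sketch below probes whether a reflected payload comes back in a response unescaped. It is a minimal heuristic, not the authors' methodology: the endpoint URL and parameter name are hypothetical, and real filter testing, as in the paper, exercises many payload variants against each browser's filter.

```python
import requests

PAYLOAD = "<script>alert(1)</script>"  # canonical reflected-XSS probe string

def reflected_unescaped(url: str, param: str = "q") -> bool:
    """Return True if the raw payload is echoed back verbatim, i.e. the
    application (and any server-side filter) failed to escape it."""
    r = requests.get(url, params={param: PAYLOAD}, timeout=10)
    return PAYLOAD in r.text

# Hypothetical vulnerable endpoint:
# print(reflected_unescaped("http://testsite.example/search"))
```

Note that a verbatim reflection only establishes server-side exposure; the browser filters evaluated in the paper act afterwards, on the rendered response.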
 

2015-04-30
Biedermann, S., Ruppenthal, T., Katzenbeisser, S..  2014.  Data-centric phishing detection based on transparent virtualization technologies. Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on. :215-223.

We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter, and scale a color-based fingerprint of web pages processed by a browser, directly from the VM's memory. By analyzing the human-perceptual similarity between fingerprints, the architecture can reveal and mitigate phishing attacks based on redirection to spoofed web pages, and it can also detect “Man-in-the-Browser” (MitB) attacks. To the best of our knowledge, this architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details of the design and implementation, and we show results of an evaluation with real-world data.
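The paper's VMI-based fingerprint extraction cannot be reproduced in a few lines, but the flavor of a color-based page fingerprint and a perceptual-similarity comparison can be sketched as below. The histogram binning and intersection score are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def color_fingerprint(rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """Coarse fingerprint of a rendered page: a normalized 3-D color
    histogram of an RGB screenshot (H x W x 3 array, values 0..255)."""
    hist, _ = np.histogramdd(rgb.reshape(-1, 3).astype(float),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def similarity(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Histogram intersection in [0, 1]: near 1 means the suspect page
    closely imitates the legitimate page's color distribution."""
    return float(np.minimum(fp_a, fp_b).sum())
```

A suspect page that scores near 1 against a known login page while being served from a different origin would be flagged as a likely spoof.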

2015-05-06
Bayat-sarmadi, S., Mozaffari-Kermani, M., Reyhani-Masoleh, A..  2014.  Efficient and Concurrent Reliable Realization of the Secure Cryptographic SHA-3 Algorithm. Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on. 33:1105-1109.

The secure hash algorithm SHA-3 was selected in 2012 and will be used to provide security to any application that requires hashing, pseudo-random number generation, and integrity checking. The algorithm was selected based on various benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in the hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to have acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity recomputing-with-rotated-operands scheme, a step toward reducing the hardware overhead of the proposed error detection approach. Moreover, we perform injection-based fault simulations and show that error coverage close to 100% is achieved. Furthermore, we have designed the proposed scheme, and ASIC analysis shows that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations of the newly standardized SHA-3 are realized.
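The recomputing-with-rotated-operands idea relies on the fact that Keccak's bitwise step functions commute with rotating every lane by the same amount, so a second, rotated computation can be compared against the first to catch transient faults. The toy sketch below demonstrates the principle in software on a chi-like row function; it is only an analogy for the hardware scheme, and the rotation amount is arbitrary.

```python
MASK = (1 << 64) - 1  # Keccak lanes are 64-bit words

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (64 - r))) & MASK

def chi_row(a: list) -> list:
    """Keccak-like chi step on one 5-lane row. It operates bit position by
    bit position, so it commutes with rotating all lanes uniformly."""
    return [a[i] ^ ((~a[(i + 1) % 5]) & a[(i + 2) % 5]) for i in range(5)]

def rero_check(row: list, r: int = 17) -> bool:
    """Recompute with rotated operands: rotate inputs, apply the step,
    rotate back, and compare. A transient fault affecting the two passes
    differently breaks the equality."""
    direct = chi_row(row)
    redone = [rotl(x, 64 - r) for x in chi_row([rotl(x, r) for x in row])]
    return direct == redone

print(rero_check([0x0123456789ABCDEF, 0xFEDCBA9876543210,
                  0xDEADBEEFDEADBEEF, 0x0F0F0F0F0F0F0F0F,
                  0xAAAAAAAAAAAAAAAA]))  # True when fault-free
```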
 

2015-05-05
Vantigodi, S., Babu, R.V..  2014.  Entropy constrained exemplar-based image inpainting. Signal Processing and Communications (SPCOM), 2014 International Conference on. :1-5.

Image inpainting is the process of filling in an unwanted region of an image marked by the user. It is used for restoring old paintings and photographs, removing red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm that takes care of false edge propagation. We use the classical exemplar-based technique to compute the priority term for each patch. To ensure that the nearest-neighbor patch, found by minimizing the L2 distance between patches, has matching edge content, we impose an additional constraint that the entropy of the patches be similar; the entropy of a patch acts as a good measure of its edge content. Additionally, we fill the image using overlapping patches to ensure smoothness in the output. We use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. The results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects as well as thin scratches or text written on the image. It is also shown that the proposed approach is robust to the shape of the manually selected target. Our results compare favorably to those obtained by existing techniques.
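A minimal sketch of the entropy constraint, assuming grayscale patches with intensities normalized to [0, 1]: the histogram bin count, tolerance, and fallback are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def patch_entropy(values: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy of an intensity histogram - a rough proxy for
    how much edge/texture content a patch carries."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def best_exemplar(target, known_mask, candidates, tol=0.5):
    """Among candidate exemplar patches, keep those whose entropy is within
    `tol` bits of the target's (computed on its known pixels), then pick
    the one minimizing L2 distance over the known region."""
    h_t = patch_entropy(target[known_mask])
    pool = [c for c in candidates if abs(patch_entropy(c) - h_t) <= tol]
    pool = pool or candidates  # fall back if the constraint filters everything
    return min(pool, key=lambda c: np.linalg.norm((c - target)[known_mask]))
```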
 

2015-05-06
Sanandaji, B.M., Bitar, E., Poolla, K., Vincent, T.L..  2014.  An abrupt change detection heuristic with applications to cyber data attacks on power systems. American Control Conference (ACC), 2014. :5056-5061.

We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
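A minimal numerical sketch of the heuristic follows; the window width and alarm threshold are illustrative choices, whereas the paper derives formal detectability conditions rather than a fixed threshold.

```python
import numpy as np

def largest_singular_value(window: np.ndarray) -> float:
    """Largest singular value (spectral norm) of a history matrix whose
    columns are consecutive observation vectors."""
    return np.linalg.svd(window, compute_uv=False)[0]

def detect_changes(outputs: np.ndarray, width: int, threshold: float) -> list:
    """Flag time steps where sigma_1 of the sliding history matrix exceeds
    a threshold. outputs: (n_signals, n_samples) array of system outputs."""
    alarms = []
    for t in range(width, outputs.shape[1]):
        if largest_singular_value(outputs[:, t - width:t]) > threshold:
            alarms.append(t)
    return alarms

# Toy example: a bounded signal with an abrupt jump at t = 100.
rng = np.random.default_rng(0)
y = 0.1 * rng.standard_normal((3, 200))
y[:, 100:] += 2.0  # abrupt change
print(detect_changes(y, width=20, threshold=3.0)[:5])  # alarms begin just after t = 100
```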
 

2015-05-05
Carroll, T.E., Crouse, M., Fulp, E.W., Berenhaut, K.S..  2014.  Analysis of network address shuffling as a moving target defense. Communications (ICC), 2014 IEEE International Conference on. :701-706.

Address shuffling is a type of moving target defense that prevents an attacker from reliably contacting a system by periodically remapping network addresses. Although limited testing has demonstrated it to be effective, little research has examined the theoretical limits of address shuffling. As a result, it is difficult to understand how effective shuffling is and under what circumstances it is a viable moving target defense. This paper introduces probabilistic models that provide insight into the performance of address shuffling. These models quantify the probability of attacker success in terms of network size, quantity of addresses scanned, quantity of vulnerable systems, and the frequency of shuffling. Theoretical analysis shows that shuffling is an acceptable defense if there is a small population of vulnerable systems within a large network address space; however, shuffling has a cost for legitimate users. These results are also shown empirically using simulation and actual traffic traces.
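A simplified version of the underlying probability comparison can be sketched as follows. Assume n addresses, v vulnerable systems, and k probes: with a static mapping the scan samples addresses without replacement, while with aggressive shuffling each probe is effectively an independent draw. This is a toy model, not the paper's full analysis, which also accounts for shuffle frequency and the cost to legitimate users.

```python
def p_hit_static(n: int, v: int, k: int) -> float:
    """P(scan finds >= 1 vulnerable host): k distinct probes out of n
    addresses, no shuffling (sampling without replacement)."""
    p_miss = 1.0
    for i in range(k):
        p_miss *= (n - v - i) / (n - i)
    return 1.0 - p_miss

def p_hit_shuffled(n: int, v: int, k: int) -> float:
    """Same scan when addresses are remapped between probes, so each
    probe is an independent uniform draw (sampling with replacement)."""
    return 1.0 - (1.0 - v / n) ** k

# Small vulnerable population in a large address space:
n, v, k = 2**16, 10, 5000
print(p_hit_static(n, v, k))    # ~0.55
print(p_hit_shuffled(n, v, k))  # ~0.53
```

In this regime the two probabilities are close, illustrating why the shuffling rate and the size of the vulnerable population, rather than shuffling per se, drive the defense's value.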
 

2015-05-06
Butt, M.I.A..  2014.  BIOS integrity an advanced persistent threat. Information Assurance and Cyber Security (CIACS), 2014 Conference on. :47-50.

The Basic Input Output System (BIOS) is the most important component of a computer system by virtue of its role: it holds the code that is executed at startup. It is considered the trusted computing base, and its integrity is extremely important for the smooth functioning of the system. On the other hand, the BIOS of new computer systems (servers, laptops, desktops, network devices, and other embedded systems) can easily be upgraded using a flash or capsule mechanism, which can introduce new vulnerabilities through malicious code, accidental incidents, or deliberate attack. The recent Stuxnet attack on the Iranian nuclear program [1:2] is an example of an advanced persistent attack. This attack vector adds a new dimension to the information security (IS) spectrum, which needs to be guarded by a holistic approach employed at the enterprise level. Malicious BIOS upgrades can also cause denial of service, theft of information, or the addition of new backdoors, which attackers can exploit to cause business loss, passive eavesdropping, or total destruction of the system without the user's knowledge. To address this challenge, a capability for verifying BIOS integrity needs to be developed, and due diligence must be observed for proactive resolution of the issue. This paper explains BIOS integrity threats and presents a prevention strategy for effective and proactive resolution.
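At its simplest, BIOS integrity verification compares a cryptographic digest of the flashed image against a known-good ("golden") measurement recorded at provisioning time. The sketch below shows only that core check; the golden digest value and image path are hypothetical placeholders, and a real solution must also protect the measurement process itself (e.g., by anchoring it in a TPM).

```python
import hashlib
import hmac

def bios_digest(image_path: str) -> str:
    """SHA-256 digest of a dumped firmware image, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical known-good measurement recorded when the system was provisioned.
GOLDEN_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"

def bios_intact(image_path: str) -> bool:
    """Constant-time comparison against the golden measurement."""
    return hmac.compare_digest(bios_digest(image_path), GOLDEN_DIGEST)
```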

2015-04-30
Baofeng Wu, Qingfang Jin, Zhuojun Liu, Dongdai Lin.  2014.  Constructing Boolean functions with potentially optimal algebraic immunity based on additive decompositions of finite fields (extended abstract). Information Theory (ISIT), 2014 IEEE International Symposium on. :1361-1365.

We propose a general approach to construct cryptographically significant Boolean functions of (r + 1)m variables based on the additive decomposition F_{2^{rm}} × F_{2^{m}} of the finite field F_{2^{(r+1)m}}, where r ≥ 1 is odd and m ≥ 3. A class of unbalanced functions is first constructed via this approach; it coincides with a variant of the unbalanced class of generalized Tu-Deng functions in the case r = 1. Functions belonging to this class have high algebraic degree, but their algebraic immunity does not exceed m, which cannot be optimal when r > 1. By modifying these unbalanced functions, we obtain a class of balanced functions which have optimal algebraic degree and high nonlinearity (shown by a lower bound we prove). These functions have optimal algebraic immunity provided that a combinatorial conjecture on binary strings, which generalizes the Tu-Deng conjecture, is true. Computer investigations show that, at least for small numbers of variables, functions from this class also behave well against fast algebraic attacks.
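The "computer investigations" mentioned above typically evaluate criteria such as nonlinearity from a function's truth table. As a small self-contained example (not the authors' code), nonlinearity can be computed with a fast Walsh-Hadamard transform; computing algebraic immunity requires more machinery and is omitted here.

```python
import numpy as np

def nonlinearity(truth_table: np.ndarray) -> int:
    """Nonlinearity of an n-variable Boolean function given its 2^n-entry
    truth table: 2^(n-1) - max|W_f|/2, via the fast Walsh-Hadamard transform."""
    w = 1 - 2 * truth_table.astype(np.int64)  # (-1)^f(x)
    size, h = len(w), 1
    while h < size:
        for i in range(0, size, 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return size // 2 - int(np.abs(w).max()) // 2

# The 2-variable AND function has nonlinearity 1:
print(nonlinearity(np.array([0, 0, 0, 1])))
```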

2015-05-06
Kafai, M., Eshghi, K., Bhanu, B..  2014.  Discrete Cosine Transform Locality-Sensitive Hashes for Face Retrieval. Multimedia, IEEE Transactions on. 16:1090-1103.

Descriptors such as local binary patterns perform well for face recognition. Searching large databases using such descriptors has been problematic due to the cost of linear search and the inadequate performance of existing indexing methods. We present Discrete Cosine Transform (DCT) hashing for creating index structures for face descriptors. Hashes play the role of keywords: an index is created and then queried to find the images most similar to the query image. Common hash suppression is used to improve retrieval efficiency and accuracy. Results are shown on a combination of six publicly available face databases (LFW, FERET, FEI, BioID, Multi-PIE, and RaFD). It is shown that DCT hashing has significantly better retrieval accuracy and is more efficient than other popular state-of-the-art hashing algorithms.
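A rough sketch of the indexing idea follows; the band sizes, sign quantization, and plain-Python DCT are illustrative assumptions, not the paper's exact construction. A face descriptor is transformed with a DCT, blocks of coefficients are quantized into compact integer keys, and images sharing a key with the query are retrieved from hash tables rather than by linear search.

```python
import numpy as np

def dct2(x: np.ndarray) -> np.ndarray:
    """Plain DCT-II of a 1-D signal (O(n^2); fine for short descriptors)."""
    n = len(x)
    i = np.arange(n)
    return np.array([np.dot(x, np.cos(np.pi * (i + 0.5) * k / n)) for k in range(n)])

def dct_keys(descriptor, bands: int = 4, bits: int = 8) -> list:
    """Sign-quantize `bands` blocks of non-DC DCT coefficients into integer
    hash keys that play the role of index keywords."""
    c = dct2(np.asarray(descriptor, dtype=float))[1:1 + bands * bits]
    keys = []
    for b in range(bands):
        block = c[b * bits:(b + 1) * bits]
        keys.append(sum(1 << j for j, v in enumerate(block) if v > 0))
    return keys
```

At query time, the candidate set is the union of the hash buckets the query's keys fall into, which is what makes retrieval sublinear.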
 

Nicanfar, H., Jokar, P., Beznosov, K., Leung, V.C.M..  2014.  Efficient Authentication and Key Management Mechanisms for Smart Grid Communications. Systems Journal, IEEE. 8:629-640.

A smart grid (SG) consists of many subsystems and networks, all working together as a system of systems, many of which are vulnerable and can be attacked remotely. Therefore, security has been identified as one of the most challenging topics in SG development, and designing a mutual authentication scheme and a key management protocol is the first important step. This paper proposes an efficient scheme that mutually authenticates a smart meter of a home area network and an authentication server in SG by utilizing an initial password, by decreasing the number of steps in the secure remote password protocol from five to three and the number of exchanged packets from four to three. Furthermore, we propose an efficient key management protocol based on our enhanced identity-based cryptography for secure SG communications using the public key infrastructure. Our proposed mechanisms are capable of preventing various attacks while reducing the management overhead. The improved efficiency for key management is realized by periodically refreshing all public/private key pairs as well as any multicast keys in all the nodes using only one newly generated function broadcasted by the key generator entity. Security and performance analyses are presented to demonstrate these desirable attributes.

2015-05-04
Barbosa de Carvalho, M., Pereira Esteves, R., da Cunha Rodrigues, G., Cassales Marquezan, C., Zambenedetti Granville, L., Rockenbach Tarouco, L.M..  2014.  Efficient configuration of monitoring slices for cloud platform administrators. Computers and Communication (ISCC), 2014 IEEE Symposium on. :1-7.

Monitoring is an important issue in cloud environments because it assures that acquired cloud slices meet the user's expectations. However, these environments are multitenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS now also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% compared to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.
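The load-aware attribution described above amounts, in its simplest form, to a least-loaded dispatch decision. The sketch below is a schematic assumption of how such a dispatcher could look, not FlexACMS code.

```python
def attribute_task(task: str, servers: list) -> str:
    """Assign a monitoring-configuration task to the monitoring server with
    the lowest current load, then account for the new work."""
    target = min(servers, key=lambda s: s["load"])
    target["load"] += 1
    target["tasks"].append(task)
    return target["name"]

servers = [{"name": "mon1", "load": 3, "tasks": []},
           {"name": "mon2", "load": 1, "tasks": []}]
print(attribute_task("configure-slice-42", servers))  # mon2
```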
 

2015-04-30
Saoud, Z., Faci, N., Maamar, Z., Benslimane, D..  2014.  A Fuzzy Clustering-Based Credibility Model for Trust Assessment in a Service-Oriented Architecture. WETICE Conference (WETICE), 2014 IEEE 23rd International. :56-61.

This paper presents a credibility model to assess trust in Web services. The model relies on consumers' ratings, whose accuracy can be questioned because of various biases. A category of consumers known as strict raters is usually excluded from the process of reaching a majority consensus; we demonstrate that this exclusion is unwarranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. To validate the model, a set of experiments is carried out.
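As a sketch of the fuzzy-clustering step, the snippet below computes standard fuzzy c-means memberships; the fixed cluster centers and the use of the majority cluster's membership as a credibility weight are simplifying assumptions, since FCM normally learns the centers iteratively.

```python
import numpy as np

def fcm_memberships(ratings: np.ndarray, centers: np.ndarray, m: float = 2.0):
    """Standard fuzzy c-means membership matrix u[i, k] of each rating to
    each cluster center, with fuzzifier m."""
    d = np.abs(ratings[:, None] - centers[None, :]) + 1e-9
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

ratings = np.array([4.5, 4.7, 4.6, 1.0, 4.4])  # one strict/outlier rater
centers = np.array([4.5, 1.5])                 # majority and minority centers (fixed here)
u = fcm_memberships(ratings, centers)
credibility = u[:, 0]  # membership to the majority cluster as a credibility weight
print(credibility.round(2))
```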

2015-05-06
Talamo, M., Barchiesi, M.L., Merella, D., Schunck, C.H..  2014.  Global convergence in digital identity and attribute management: Emerging needs for standardization. ITU Kaleidoscope Academic Conference: Living in a converged world - Impossible without standards?, Proceedings of the 2014. :15-21.

In a converging world, where borders between countries are surpassed in the digital environment, it is necessary to develop systems that effectively replace face-to-face (“vis-a-vis”) recognition with digital means of recognizing and identifying entities and people. In this work we summarize the current standardization efforts in the area of digital identity management. We identify a number of open challenges that need to be addressed in the near future to ensure the interoperability and usability of digital identity management services in an efficient and privacy-maintaining international framework. These challenges for standardization include: the management of identifiers for digital identities at the global level; attribute management, including attribute format, structure, and assurance; and procedures and protocols to link attributes to digital identities. Attention is drawn to key elements that should be considered in addressing these issues through standardization.

2015-05-05
Juzi Zhao, Subramaniam, S., Brandt-Pearce, M..  2014.  Intradomain and interdomain QoT-aware RWA for translucent optical networks. Optical Communications and Networking, IEEE/OSA Journal of. 6:536-548.

Physical impairments in long-haul optical networks mandate that optical signals be regenerated within the (so-called translucent) network. Being expensive devices, regenerators are expected to be allocated sparsely and must be judiciously utilized. Next-generation optical-transport networks will include multiple domains with diverse technologies, protocols, granularities, and carriers. Because of confidentiality and scalability concerns, the scope of network-state information (e.g., topology, wavelength availability) may be limited to within a domain. In such networks, the problem of routing and wavelength assignment (RWA) aims to find an adequate route and wavelength(s) for lightpaths carrying end-to-end service demands. Some state information may have to be explicitly exchanged among the domains to facilitate the RWA process. The challenge is to determine which information is the most critical and make a wise choice for the path and wavelength(s) using the limited information. Recently, a framework for multidomain path computation called backward-recursive path-computation (BRPC) was standardized by the Internet Engineering Task Force. In this paper, we consider the RWA problem for connections within a single domain and interdomain connections so that the quality of transmission (QoT) requirement of each connection is satisfied, and the network-level performance metric of blocking probability is minimized. Cross-layer heuristics that are based on dynamic programming to effectively allocate the sparse regenerators are developed, and extensive simulation results are presented to demonstrate their effectiveness.

 

2015-04-30
Severi, S., Sottile, F., Abreu, G., Pastrone, C., Spirito, M., Berens, F..  2014.  M2M technologies: Enablers for a pervasive Internet of Things. Networks and Communications (EuCNC), 2014 European Conference on. :1-5.

We survey the state of the art of the Internet of Things (IoT) from a wireless communications point of view, as a result of the European FP7 project BUTLER, whose focus is pervasiveness, context-awareness, and security for the IoT. In particular, we describe the efforts to develop so-called (wireless) enabling technologies, aimed at circumventing the many challenges involved in extending the current set of domains (“verticals”) of IoT applications towards a “horizontal” (i.e., integrated) vision of the IoT. We start by illustrating current research efforts in machine-to-machine (M2M) communications, which are mainly focused on vertical domains; we discuss some of them in detail and then depict the horizontal vision necessary for the future intelligent daily routine (“Smart Life”). We then describe the technical features of the most relevant heterogeneous communications technologies on which the IoT relies, in light of the ongoing M2M service-layer standardization. Finally, we identify and present the key aspects, within three major cross-vertical categories, under which M2M technologies can serve as enablers for the horizontal vision of the IoT.

2015-05-01
Hammoud, R.I., Sahin, C.S., Blasch, E.P., Rhodes, B.J..  2014.  Multi-source Multi-modal Activity Recognition in Aerial Video Surveillance. Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on. :237-244.

Recognizing activities in wide aerial/overhead imagery remains a challenging problem due in part to low-resolution video and cluttered scenes with a large number of moving objects. In the context of this research, we deal with two un-synchronized data sources collected in real-world operating scenarios: full-motion videos (FMV) and analyst call-outs (ACO) in the form of chat messages (voice-to-text) made by a human watching the streamed FMV from an aerial platform. We present a multi-source multi-modal activity/event recognition system for surveillance applications, consisting of: (1) detecting and tracking multiple dynamic targets from a moving platform, (2) representing FMV target tracks and chat messages as graphs of attributes, (3) associating FMV tracks and chat messages using a probabilistic graph-based matching approach, and (4) detecting spatial-temporal activity boundaries. We also present an activity pattern learning framework which uses the multi-source associated data as training to index a large archive of FMV videos. Finally, we describe a multi-intelligence user interface for querying an index of activities of interest (AOIs) by movement type and geo-location, and for playing-back a summary of associated text (ACO) and activity video segments of targets-of-interest (TOIs) (in both pixel and geo-coordinates). Such tools help the end-user to quickly search, browse, and prepare mission reports from multi-source data.

2015-05-05
Dressler, J., Bowen, C.L., Moody, W., Koepke, J..  2014.  Operational data classes for establishing situational awareness in cyberspace. Cyber Conflict (CyCon 2014), 2014 6th International Conference On. :175-186.

The United States, including the Department of Defense, relies heavily on information systems and networking technologies to efficiently conduct a wide variety of missions across the globe. With the ever-increasing rate of cyber attacks, this dependency places the nation at risk of a loss of confidentiality, integrity, and availability of its critical information resources, degrading its ability to complete the mission. In this paper, we introduce operational data classes for establishing situational awareness in cyberspace. A system effectively using our key information components will be able to provide the nation's leadership with timely and accurate information to gain an understanding of the operational cyber environment and enable strategic, operational, and tactical decision-making. We present, define, and provide examples of our key classes of operational data for cyber situational awareness, and we present a hypothetical case study demonstrating how they must be consolidated to provide a clear and relevant picture to a commander. In addition, current organizational and technical challenges are discussed, and areas for future research are addressed.
 

2015-05-01
Saavedra Benitez, Y.I., Ben-Othman, J., Claude, J.-P..  2014.  Performance evaluation of security mechanisms in RAOLSR protocol for Wireless Mesh Networks. Communications (ICC), 2014 IEEE International Conference on. :1808-1812.

In this paper, we propose the IBE-RAOLSR and ECDSA-RAOLSR protocols for WMNs (Wireless Mesh Networks), which contribute to securing routing protocols. We implement the IBE (Identity Based Encryption) and ECDSA (Elliptic Curve Digital Signature Algorithm) methods to secure messages in RAOLSR (Radio Aware Optimized Link State Routing), namely TC (Topology Control) and Hello messages. We then compare the ECDSA-based RAOLSR with the IBE-based RAOLSR protocol. This study shows the great benefits of the IBE technique in securing the RAOLSR protocol for WMNs. Extensive ns-3 (Network Simulator 3) simulation results show that IBE-RAOLSR outperforms ECDSA-RAOLSR in terms of overhead and delay, and that the IBE-based RAOLSR provides a greater level of security with light overhead.
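For the ECDSA variant, the message-signing step can be illustrated with the widely used `cryptography` package. This is a minimal sketch, assuming a per-router key pair is already in place (key distribution is outside the snippet), and the Hello payload shown is hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical per-router key pair; real deployments need key distribution.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

hello_msg = b"HELLO|originator=10.0.0.1|seq=42"  # illustrative payload
signature = private_key.sign(hello_msg, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, hello_msg, ec.ECDSA(hashes.SHA256()))
    print("Hello message accepted")
except InvalidSignature:
    print("Hello message rejected")
```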

2015-05-05
Kampanakis, P., Perros, H., Beyene, T..  2014.  SDN-based solutions for Moving Target Defense network protection. A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on. :1-6.

Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD), on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged to provide attack-surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit, and we show that they significantly increase the attacker's overhead.
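One network-based MTD technique discussed in such work is periodic address remapping. Below is a schematic sketch of the mapping step a controller could push as flow-rewrite rules; it is generic pseudologic, not Cisco One Platform Kit API calls (which are not reproduced here).

```python
import random

def shuffle_mapping(hosts: list, address_pool: list, epoch: int, secret: int) -> dict:
    """Derive this epoch's host -> virtual-address mapping deterministically
    from a shared secret, so the controller and gateways agree on the map
    without exchanging it."""
    rng = random.Random(f"{secret}:{epoch}")
    return dict(zip(hosts, rng.sample(address_pool, len(hosts))))

pool = [f"10.1.{i // 256}.{i % 256}" for i in range(1024)]
print(shuffle_mapping(["web", "db"], pool, epoch=7, secret=0xC0FFEE))
```

Each shuffle epoch yields a fresh mapping, so an address an attacker scanned in one epoch is stale in the next.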

2015-05-06
Barclay, C..  2014.  Sustainable security advantage in a changing environment: The Cybersecurity Capability Maturity Model (CM2). ITU Kaleidoscope Academic Conference: Living in a converged world - Impossible without standards?, Proceedings of the 2014. :275-282.

With the rapid advancement of technology and the growing complexity of the interactions between technologies and networks, it is ever more important for countries and organizations to gain a sustainable security advantage. Security advantage refers to the ability to manage and respond to threats and vulnerabilities with a proactive security posture, accomplished through effectively planning, managing, responding to, and recovering from threats and vulnerabilities. However, not many organizations, or even countries, especially in the developing world, have been able to equip themselves with the necessary and sufficient know-how or ability to integrate knowledge and capabilities to achieve a security advantage within their environment. Having a structured set of requirements or indicators to aid in progressively attaining different levels of maturity and capability is one important method of determining the state of cybersecurity readiness. The research introduces the Cybersecurity Capability Maturity Model (CM2), a 6-step process of progressive development of cybersecurity maturity and knowledge integration that ranges from a state of limited awareness and application of security controls to pervasive optimization of the protection of critical assets.
 

2015-05-05
Aydin, A., Alkhalaf, M., Bultan, T..  2014.  Automated Test Generation from Vulnerability Signatures. Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on. :193-202.

Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process, leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not statically analyzable, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
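The transition-coverage criterion can be sketched concretely: given a signature automaton, generate one input per transition by taking the shortest string reaching the transition's source state (found by BFS) followed by the transition's symbol. The tiny DFA below is illustrative, not a real vulnerability signature.

```python
from collections import deque

# Toy signature DFA over a tiny alphabet: accepts any string containing "<s"
# (a stand-in for a script-tag prefix). States and transitions are illustrative.
DFA = {
    (0, "<"): 1, (0, "a"): 0,
    (1, "s"): 2, (1, "<"): 1, (1, "a"): 0,
    (2, "<"): 2, (2, "s"): 2, (2, "a"): 2,
}
START = 0

def transition_cover(dfa, start):
    """One test string per transition: the shortest prefix reaching the
    transition's source state (found by BFS) followed by its symbol."""
    prefix, queue = {start: ""}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, sym), dst in dfa.items():
            if src == state and dst not in prefix:
                prefix[dst] = prefix[src] + sym
                queue.append(dst)
    return [prefix[src] + sym for (src, sym) in dfa if src in prefix]

print(transition_cover(DFA, START))
# ['<', 'a', '<s', '<<', '<a', '<s<', '<ss', '<sa']
```

State coverage follows the same pattern with one string per reachable state, and path coverage bounds the number of times each loop is traversed.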