Biblio
Firewalls and Demilitarized Zones (DMZs) are two mechanisms that have been widely employed to secure enterprise networks. Despite this, their security effectiveness has not been systematically quantified. In this paper, we take a first step toward filling this void by presenting a representational framework for investigating their security effectiveness in protecting enterprise networks. Through simulation experiments, we draw useful insights into the security effectiveness of firewalls and DMZs. To the best of our knowledge, these insights have not previously been reported in the literature.
Security assurance is the confidence that a system meets its security requirements, based on specific evidence that assurance techniques provide. The notion of measuring security is complex and tricky. Existing approaches either (1) consider only one aspect of assurance, such as security requirements fulfillment or threat/vulnerability existence, or (2) do not consider the relevance of the different security requirements to the evaluated application context. Furthermore, they are mostly qualitative in nature and heavily based on manual processing, which makes them costly and time-consuming. As a result, they are not widely used and applied, especially by small and medium-sized enterprises (SMEs), which constitute the backbone of the Norwegian economy. In this paper, we propose a quantification method that aims to evaluate the security assurance of systems by measuring (1) the level of confidence that the mechanisms fulfilling security requirements are present and (2) the level of confidence that the vulnerabilities associated with possible security threats are absent. Additionally, an assurance evaluation process is proposed. Two case studies applying our method are presented. The case studies use our assurance method to evaluate the security level of two REST APIs developed by Statistics Norway, where one of the authors is employed. Analysis shows that the API with the most security mechanisms implemented received a slightly higher security assurance score. Security requirement relevance and vulnerability impact played a role in the overall scores.
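As a rough illustration of the kind of quantification this abstract describes, the sketch below combines per-requirement presence confidence with relevance weights and per-vulnerability absence confidence with impact weights. The formula and value ranges are a minimal assumption of ours, not the authors' published method.

```python
# A minimal sketch, assuming a simple weighted-average scoring scheme; the
# paper's actual formulas may differ. All confidences and weights are in [0, 1].

def assurance_score(requirements, vulnerabilities):
    """requirements: (confidence mechanism is present, relevance weight) pairs
    vulnerabilities: (confidence vulnerability is absent, impact weight) pairs
    Returns a normalised assurance score in [0, 1]."""
    total_weight = (sum(w for _, w in requirements)
                    + sum(w for _, w in vulnerabilities))
    achieved = (sum(c * w for c, w in requirements)
                + sum(c * w for c, w in vulnerabilities))
    return achieved / total_weight

# Example: strong authentication, a partially covered logging requirement,
# and one threat whose vulnerability is probably absent.
print(assurance_score(requirements=[(0.9, 1.0), (0.6, 0.5)],
                      vulnerabilities=[(0.7, 0.8)]))  # ~0.77
```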
System administrators are slowly coming to accept that nearly all systems are vulnerable and that many should be assumed to be compromised. Rather than trying to prevent all vulnerabilities in complex systems, the emphasis is shifting toward protecting systems under the assumption that they are already under attack.
Administrators do not know all the latent vulnerabilities in the systems they are charged with protecting. This work builds on prior approaches that assume more a priori knowledge [5]. Additionally, prior research does not necessarily guide administrators to gracefully degrade systems in response to threats [4]. Sophisticated attackers with high levels of resources, such as advanced persistent threats (APTs), might use zero-day exploits against novel vulnerabilities or move slowly and stealthily to evade initial lines of detection.
However, defenders often have some knowledge of where attackers are, and it is possible to reasonably bound attacker resources. Exploits have a cost to create [1], and even the most sophisticated attacks use a limited number of zero-day exploits [3].
Even so, defenders need a way to reason about, and react to, the impact of an attacker with an existing presence in a system. It may not be possible to maintain one hundred percent of the system's original utility; instead, the defender might need to gracefully degrade the system, trading off some functional utility to keep the attacker away from the most critical functionality.
We propose a method to "think like an attacker" in order to evaluate architectures and alternatives in response to knowledge of attacker presence. For each considered alternative architecture, our approach determines the types of exploits an attacker would need in order to carry out particular attacks, using the Datalog declarative logic programming language in a fashion that adapts others' prior work [2][4]. With knowledge of how difficult particular exploits are to create, we can approximate the cost to an attacker of a particular attack trace. A bounded search of traces within a limited cost provides a set of hypothetical attacks for a given architecture. These attacks have varying impacts on the system's ability to achieve its functions. Using this knowledge, our approach outputs an architectural alternative that optimally balances keeping an attacker away from critical functionality while preserving that functionality. In the process, it provides evidence in the form of hypothetical attack traces that can be used to explain the reasoning.
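To make the cost-bounded trace search concrete: the paper derives the required exploit types with Datalog, but the enumeration idea can be sketched in plain Python. The component names, edges, and exploit costs below are hypothetical.

```python
# Illustrative sketch only: a depth-first search enumerating attack traces
# whose summed exploit cost stays within an assumed attacker budget.

EXPLOIT_COST = {            # cost for an attacker to move across an edge
    ("internet", "web"): 2,
    ("web", "app"): 3,
    ("app", "db"): 5,
    ("web", "db"): 9,       # a direct path exists but needs a rarer exploit
}

def traces_within_budget(start, target, budget, path=None):
    path = path or [start]
    if start == target:
        yield list(path)
        return
    for (src, dst), cost in EXPLOIT_COST.items():
        if src == start and dst not in path and cost <= budget:
            yield from traces_within_budget(dst, target, budget - cost,
                                            path + [dst])

# All hypothetical attacks reaching the database within a budget of 10:
for trace in traces_within_budget("internet", "db", 10):
    print(" -> ".join(trace))   # internet -> web -> app -> db
```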
This thinking enables a defender to reason about how potential defensive tactics could close off avenues of attack, or how they might inadvertently enable an ongoing attack. By thinking at the level of architecture, we avoid assuming knowledge of specific vulnerabilities, which enables reasoning in a highly uncertain domain.
We applied this approach to several small systems at varying levels of abstraction. These systems were chosen as exemplars of various "best practices" to see whether the approach could quantitatively validate the underpinnings of general rules of thumb, such as using perimeter security or trading off resilience for security. Ultimately, our approach places architectural components in locations that correspond with current best practices and would be reasonable to system architects. In applying the approach at different levels of abstraction, we were able to fine-tune our understanding of attacker movement through systems in a way that yields security-appropriate architectures despite poor knowledge of latent vulnerabilities; the result of this fine-tuning is a more granular way to understand and evaluate attacker movement in systems.
Future work will explore ways to improve the performance of this approach so that it can provide real-time planning to gracefully degrade systems as attacker knowledge is discovered. Additionally, we plan to explore ways to enhance the expressiveness of the approach to address additional security-related concerns, such as timing and further levels of uncertainty.
In this work, a quantum design for the Simplified Advanced Encryption Standard (S-AES) algorithm is presented, and a quantum Grover attack on the proposed quantum S-AES is modeled. First, quantum circuits for the main components of S-AES in the finite field F2[x]/(x^4 + x + 1) are constructed. Then, the constructed circuits are assembled into a quantum version of S-AES. CNOT synthesis is used to decompose some of the functions, reducing the number of qubits needed. The quantum S-AES is integrated into a black box queried by Grover's algorithm. A new approach is proposed to uniquely recover the secret key when the Grover attack is applied. The entire work is simulated and tested on a quantum mechanics simulator. The complexity analysis shows that a block cipher can be designed as a quantum circuit at polynomial cost. In addition, the secret key is recovered with the quadratic speedup promised by Grover's algorithm.
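For readers who want to see the underlying arithmetic, below is a classical reference implementation of multiplication in the S-AES field GF(2^4) = F2[x]/(x^4 + x + 1); the paper's contribution is the corresponding reversible quantum circuit, which this sketch does not attempt to reproduce.

```python
def gf16_mul(a, b):
    """Multiply two 4-bit values as polynomials modulo x^4 + x + 1."""
    product = 0
    for _ in range(4):
        if b & 1:
            product ^= a          # add (XOR in GF(2)) the current multiple of a
        b >>= 1
        overflow = a & 0x8        # x^3 term about to shift out of the field
        a = (a << 1) & 0xF
        if overflow:
            a ^= 0x3              # reduce: x^4 = x + 1 (mod x^4 + x + 1)
    return product

assert gf16_mul(0x4, 0x4) == 0x3  # x^2 * x^2 = x^4 = x + 1
assert gf16_mul(0x2, 0x9) == 0x1  # x * (x^3 + 1) = x^4 + x = 1
```

On the attack side, the quadratic speedup is easy to quantify: S-AES has a 16-bit key, so Grover's search needs on the order of sqrt(2^16) = 256 oracle queries (about floor((pi/4) * 256) = 201 iterations) versus 2^15 classical guesses on average.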
Quantitative analysis of security effectiveness is a difficult problem in research on network address randomization techniques. In this paper, a system model and an attack model are proposed based on the attack processes of common attacks and the technical principles of network address randomization. Based on these models, the security effectiveness of network address randomization is quantitatively analyzed from the perspective of the attacker's attack time and attack cost, in both the static-address and address-randomization cases. The analysis shows that the security effectiveness of network address randomization is determined by the randomization frequency, the randomization space, the states of hosts in the target network, and the capabilities of the attacker.
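A toy version of the attack-time comparison, under assumptions that are ours rather than the paper's: the attacker probes one address at a time, and randomization is frequent enough that every probe amounts to an independent uniform guess over the space.

```python
# Expected number of probes to locate one target host in an address space
# of size N: static address (sequential scan, no replacement) versus
# randomization fast enough that each probe is an independent guess.

def expected_probes_static(N):
    return (N + 1) / 2     # uniform target position, scan without replacement

def expected_probes_randomized(N):
    return N               # geometric distribution with success prob 1/N

for N in (2**8, 2**16):
    print(N, expected_probes_static(N), expected_probes_randomized(N))
```

Even in this crude model, randomization roughly doubles the expected attack time, and the randomization space N dominates both cases, consistent with the factors the abstract identifies.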
Mobile Ad hoc Networks (MANETs) pose persistent challenges to designers in terms of security deployment due to their dynamic and infrastructure-less nature. In the past few years, researchers have proposed a variety of solutions for securing MANETs. In most cases, however, a solution either prevents only a particular attack or provides security at the cost of sacrificing QoS. In this paper we introduce a model that deploys security in MANETs while taking care of Quality of Service (QoS) issues to some extent. We have adopted the approach of analyzing node behavior, as we believe that if nodes behave properly and in a coordinated fashion, the level of insecurity drops drastically. Our methodology offers the advantage of using this behavior-analysis approach.
Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP-, SNMP-, and SSDP-based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart homes, emphasising the imminent need to address this problem before it becomes widespread.
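The profiling metric at the heart of such measurements is easy to state; the sketch below uses the conventional definition of the bandwidth amplification factor, with made-up numbers rather than the paper's measurements.

```python
# Bandwidth amplification factor: response bytes emitted by the reflector
# per request byte received from the (spoofed) attacker.

def amplification_factor(request_bytes, response_bytes):
    return response_bytes / request_bytes

# e.g. a ~100-byte SSDP M-SEARCH probe eliciting ~3000 bytes of replies
print(amplification_factor(100, 3000))  # 30.0
```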
As the use of cloud computing and autonomous computing increases, integrity verification of the software stack used in a system becomes a critical issue. In this paper, we analyze the internal behavior of IMA (Integrity Measurement Architecture), one of the most well-known integrity verification frameworks employed in the Linux kernel. For integrity verification, IMA measures all executables and their configuration files in a trusted manner using the TPM (Trusted Platform Module). Our analysis reveals two obstacles in IMA: measurement overhead and nondeterminism. To address these problems, we propose two novel techniques, called batch extend and core measurement. The former accumulates the measured values of executables/files and extends them into the TPM in a batch fashion. The latter measures only specified executables/files, so that it verifies the core integrity of a system in which a user or a remote party is interested. Evaluation based on a real implementation shows that our proposal can reduce the boot time from 122 to 23 seconds while supporting the same integrity verification capability as the default IMA policy.
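A minimal sketch of the difference between the two measurement chains, assuming SHA-1 (IMA's classic hash) and modeling the TPM PCR as a simple hash chain; the paper's actual batching may differ in detail.

```python
import hashlib

def extend(pcr, measurement):
    """Simulated TPM PCR extend: PCR <- SHA1(PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

def default_ima(file_contents):
    pcr = b"\x00" * 20
    for data in file_contents:            # one TPM extend per file: slow
        pcr = extend(pcr, hashlib.sha1(data).digest())
    return pcr

def batch_extend(file_contents):
    acc = hashlib.sha1()
    for data in file_contents:            # accumulate digests in software
        acc.update(hashlib.sha1(data).digest())
    return extend(b"\x00" * 20, acc.digest())   # a single TPM operation

files = [b"contents of /bin/bash", b"contents of /etc/passwd"]
print(default_ima(files).hex())
print(batch_extend(files).hex())
```

Both variants bind the whole measurement list to the PCR; the batch version simply trades many slow TPM round-trips for one, which is where the reported boot-time reduction comes from.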
IT system risk assessments are indispensable due to increasing cyber threats within our ever-growing IT systems. Moreover, laws and regulations urge organizations to conduct risk assessments regularly. Even though several risk management frameworks and methodologies exist, they are in general high level, defining neither the risk metrics, nor their values, nor the detailed risk assessment formulas for different risk views. To address this need, we define a novel risk assessment methodology specific to IT systems. Our model is quantitative, both asset- and vulnerability-centric, and defines low-level and high-level risk metrics. The high-level risk metrics fall into two general categories: base and attack-graph-based. In this paper, we provide a detailed explanation of the formulations in each category and make our implemented software publicly available for those interested in applying the proposed methodology to their IT systems.
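For flavor, a toy asset- and vulnerability-centric risk metric; the aggregation below (treating vulnerabilities as independent) is our assumption, not one of the paper's published formulas.

```python
# Risk of an asset as its value times the probability that at least one of
# its vulnerabilities materialises with impact.

def asset_risk(asset_value, vulnerabilities):
    """vulnerabilities: list of (likelihood 0..1, impact 0..1) pairs."""
    exposure = 1.0
    for likelihood, impact in vulnerabilities:
        exposure *= 1.0 - likelihood * impact   # chance this vuln does no harm
    return asset_value * (1.0 - exposure)

print(asset_risk(asset_value=10.0,
                 vulnerabilities=[(0.3, 0.9), (0.1, 0.5)]))  # ~3.07
```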
Protecting information means preserving the confidentiality, integrity, and availability of data. These properties are essential for the proper operation of modern industrial technologies such as the Smart Grid. The complex grid system integrates many electronic devices that provide an efficient way of operating the power system but cause many problems due to their vulnerability to attacks. The aim of this work is to propose a solution to the privacy problem in the Smart Grid communication network between customers and the control center. It consists in using a relatively new cryptographic primitive: quantum key distribution (QKD). The solution is based on choosing an appropriate quantum key distribution method from among the conventional ones by performing an assessment in terms of several parameters: key rate, operating distance, resources, and trustworthiness of the devices involved. Accordingly, we discuss an answer to the privacy problem of the SG network with regard to both security and resource economy.
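One simple way to operationalize such an assessment is a weighted decision matrix, sketched below; the candidate protocols, per-criterion scores, and weights are placeholders of ours, not values from the paper.

```python
# Weighted scoring of QKD candidates over the four criteria named above.

WEIGHTS = {"key_rate": 0.4, "distance": 0.3, "resources": 0.2, "trust": 0.1}

CANDIDATES = {                 # normalised 0..1 scores per criterion
    "BB84":   {"key_rate": 0.6, "distance": 0.7, "resources": 0.8, "trust": 0.5},
    "E91":    {"key_rate": 0.4, "distance": 0.6, "resources": 0.5, "trust": 0.9},
    "CV-QKD": {"key_rate": 0.8, "distance": 0.4, "resources": 0.7, "trust": 0.6},
}

def score(protocol):
    return sum(WEIGHTS[c] * CANDIDATES[protocol][c] for c in WEIGHTS)

best = max(CANDIDATES, key=score)
print(best, round(score(best), 3))   # BB84 0.66 with these placeholder values
```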
When transferring sensitive data to a non-trusted party, end-users require that the data be kept private. Mobile and IoT application developers want to leverage the sensitive data to provide better user experience and intelligent services. Unfortunately, existing programming abstractions make it impossible to reconcile these two seemingly conflicting objectives. In this paper, we present a novel programming mechanism for distributed managed execution environments that hides sensitive user data while enabling developers to build powerful and intelligent applications driven by the properties of the sensitive data. Specifically, the sensitive data is never revealed to clients, being protected by the runtime system. Our abstractions provide declarative and configurable data query interfaces, enforced by a lightweight distributed runtime system. Developers define when and how clients can query the sensitive data's properties (e.g., how long the data remains accessible, how many times its properties can be queried, which data query methods apply, etc.). Based on our evaluation, we argue that integrating our novel mechanism with the Java Virtual Machine (JVM) can address some of the most pertinent privacy problems of IoT and mobile applications.
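A rough sketch, in Python rather than on the JVM, of the kind of declarative, runtime-enforced query policy the abstract describes; the class and method names are invented for illustration, and the paper's mechanism is far more complete.

```python
import time

class GuardedValue:
    """Wraps a secret so clients can query properties, never the value."""

    def __init__(self, secret, max_queries, ttl_seconds, allowed_queries):
        self._secret = secret
        self._remaining = max_queries
        self._expires = time.time() + ttl_seconds
        self._allowed = allowed_queries   # query name -> predicate on secret

    def query(self, name, *args):
        if time.time() > self._expires:
            raise PermissionError("data no longer accessible")
        if self._remaining <= 0:
            raise PermissionError("query budget exhausted")
        if name not in self._allowed:
            raise PermissionError(f"query {name!r} not permitted")
        self._remaining -= 1
        return self._allowed[name](self._secret, *args)   # property only

# Clients may learn whether the income exceeds a threshold, never the value.
income = GuardedValue(52_000, max_queries=3, ttl_seconds=60,
                      allowed_queries={"above": lambda s, t: s > t})
print(income.query("above", 50_000))   # True
```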
In this paper, a novel quantum encryption algorithm for color images is proposed based on multiple discrete chaotic systems. The proposed quantum image encryption algorithm utilizes quantum controlled-NOT images generated by a chaotic logistic map, an asymmetric tent map, and a logistic Chebyshev map to control the XOR operation in the encryption process. Experimental results and analysis show that the proposed algorithm has high efficiency and security against differential and statistical attacks.
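A classical toy analogue of the chaos-driven XOR idea: a logistic-map keystream XORed with pixel bytes. The paper's scheme is quantum (controlled-NOT images driven by three chaotic maps), so this sketch only illustrates the principle; the parameters are arbitrary.

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantise to bytes."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_cipher(pixels, x0=0.3141, r=3.99):
    ks = logistic_keystream(x0, r, len(pixels))
    return bytes(p ^ k for p, k in zip(pixels, ks))

plain = bytes([10, 200, 33, 47])
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain   # XOR with the same keystream inverts
```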
This paper presents a new technique for the analysis and comparison of wiretap codes in the small-blocklength regime over the binary erasure wiretap channel. A major result is the development of Monte Carlo strategies for quantifying a code's equivocation, mirroring techniques used to analyze forward error correcting codes. For this paper, we limit our analysis to coset-based wiretap codes and present strategies for calculating and/or estimating the equivocation, in order of preference. We also make several comparisons of different code families. Our results indicate that there are security advantages to using algebraic codes for applications that require small to medium blocklengths.
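To make the Monte Carlo idea concrete: for the classic Ozarow-Wyner coset scheme over the binary erasure wiretap channel, the eavesdropper's message equivocation for an erasure set E equals the GF(2) rank of the parity-check columns indexed by E, so sampling erasure patterns and averaging that rank estimates the expected equivocation. The [7,4] Hamming code below is a stand-in of ours, not a code family from the paper.

```python
import random

H = [0b1010101,   # rows of the Hamming [7,4] parity-check matrix;
     0b1100110,   # bit j of each row is the entry in column j, so
     0b1111000]   # messages (syndromes) carry n - k = 3 bits
N = 7

def gf2_rank(rows):
    """Gaussian elimination over GF(2) on rows given as bitmask integers."""
    basis = {}                    # pivot bit -> reduced row
    for row in rows:
        while row:
            pivot = row.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = row
                break
            row ^= basis[pivot]   # eliminate the current leading bit
    return len(basis)

def equivocation_bits(erased):
    """Rank of the erased columns of H = Eve's equivocation in bits."""
    sub = [sum(((r >> c) & 1) << j for j, c in enumerate(erased)) for r in H]
    return gf2_rank(sub)

def mc_equivocation(eps, trials=50_000):
    total = 0
    for _ in range(trials):
        erased = [c for c in range(N) if random.random() < eps]
        total += equivocation_bits(erased)
    return total / trials

print(mc_equivocation(0.75))      # approaches 3 bits (full secrecy) as eps -> 1
```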
One of the key objectives of a distributed denial of service (DDoS) attack on the smart grid advanced metering infrastructure (AMI) is to threaten the availability of end users' metering data. This disrupts the smooth operation of the grid and of third-party operators who need the data for billing and other grid control purposes. In previous work, we proposed a cloud-based OpenFlow firewall for mitigating DDoS attacks in a smart grid AMI. In this paper, the PRISM model checker is used to perform a probabilistic best- and worst-case analysis of the firewall with regard to DDoS attack success under firewall detection probabilities ranging from 0 to 1. The results of this quantitative analysis can be useful in determining the extent to which a DDoS attack can undermine the correctness and performance of the firewall. In addition, the study can be helpful in understanding the extent to which the firewall can be improved by applying knowledge derived from its worst-case performance.
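Not the PRISM model itself, but a toy closed-form sanity check of the same sweep: if each malicious flow is independently detected with probability p and the attack needs at least k of n flows to get through, the success probability is a binomial tail. All numbers are illustrative.

```python
from math import comb

def attack_success(n_flows, k_needed, p_detect):
    q = 1.0 - p_detect            # a flow slips past the firewall w.p. q
    return sum(comb(n_flows, i) * q**i * p_detect**(n_flows - i)
               for i in range(k_needed, n_flows + 1))

for p in (0.0, 0.25, 0.5, 0.75, 1.0):   # detection probabilities 0..1
    print(p, round(attack_success(n_flows=100, k_needed=50, p_detect=p), 6))
```

As expected, success is certain at p = 0, impossible at p = 1, and collapses sharply once the detection probability exceeds the fraction of flows the attack needs.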
Summary form only given. Strong light-matter coupling has recently been explored successfully in the GHz and THz range [1] with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform in which we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ω_c = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number ranges from 10^4 down to 10^3 electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10^-14 m^2 (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled (Ω/ω …