Biblio
Due to the frequent use of online web applications for day-to-day activities, web applications have become a prime target for attackers. Cross-Site Scripting (XSS) is one of the most prominent web-based attacks; it can compromise the whole browser rather than just the web application from which the attack originated. Securing web applications with server-side solutions alone is not sufficient, as developers are not necessarily security aware. Browser vendors have therefore evolved client-side filters to defend against these attacks. This paper shows that even the most prevalent XSS filters, deployed by the latest versions of the most widely used web browsers, do not provide adequate defense. We evaluate three browsers - Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27 - against different types of reflected XSS vulnerabilities. We find that none of them defends against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox with the XSS-Me add-on installed, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution shields against a greater percentage of vulnerabilities than the other browsers. It would likely be even more effective if this add-on were integrated into the browser rather than enforced as an extension.
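As a rough illustration of how an add-on such as XSS-Me probes for reflected XSS, the sketch below (a hypothetical, simplified check, not XSS-Me's actual code) injects a marker payload and tests whether the response reflects it unescaped:

```python
import html

# Hypothetical probe payload; real scanners such as XSS-Me cycle through
# large lists of payloads targeting different injection contexts.
PAYLOAD = "<script>alert('xss-probe')</script>"

def is_reflected_unescaped(response_body: str, payload: str = PAYLOAD) -> bool:
    """Flag a likely reflected XSS: the raw payload came back unescaped."""
    return payload in response_body
```

If the server HTML-escapes the input, only the escaped form (e.g. `html.escape(PAYLOAD)`) appears in the response, so the check correctly returns False.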
An application of two Cyber-Physical System (CPS) security countermeasures - Intelligent Checkers (ICs) and a cross-correlator - for enhancing CPS safety and achieving the required CPS safety integrity level is presented. ICs are smart sensors aimed at detecting attacks in a CPS and alerting the human operators. The cross-correlator is an anomaly detection technique for detecting deception attacks. We show how ICs could be implemented at three different CPS safety protection layers to maintain the CPS in a safe state. In addition, we combine ICs with the cross-correlator technique to assure a high probability of failure detection. Performance simulations show that a combination of these two security countermeasures is effective in detecting and mitigating CPS failures, including catastrophic failures.
The Secure Hash Algorithm-3 (SHA-3) was selected in 2012 and will be used to provide security to any application that requires hashing, pseudo-random number generation, and integrity checking. The algorithm was selected based on benchmarks such as security, performance, and complexity. In this paper, to provide reliable architectures for this algorithm, we propose an efficient concurrent error detection scheme for the selected SHA-3 algorithm, Keccak. To the best of our knowledge, effective countermeasures for potential reliability issues in hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to achieve acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity scheme based on recomputing with rotated operands, a step toward reducing the hardware overhead of error detection. Moreover, we perform injection-based fault simulations and show that error coverage of close to 100% is achieved. Furthermore, we have designed the proposed scheme, and ASIC analysis shows that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations of the newly standardized SHA-3 can be realized.
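The recomputing-with-rotated-operands idea can be illustrated on a rotation-equivariant bitwise step (a chi-like toy function over 64-bit lanes, not the actual Keccak round): the operation is recomputed with rotated operands, the result is derotated, and any mismatch signals a transient fault. A minimal Python sketch:

```python
MASK = (1 << 64) - 1  # 64-bit lanes, as in Keccak

def rotl(x: int, n: int) -> int:
    """Rotate a 64-bit word left by n bits."""
    n %= 64
    return ((x << n) | (x >> (64 - n))) & MASK

def chi_like(a: int, b: int, c: int) -> int:
    """Toy chi-like step: purely bitwise, hence rotation-equivariant."""
    return a ^ ((~b & MASK) & c)

def checked_compute(a: int, b: int, c: int, r: int = 13) -> int:
    primary = chi_like(a, b, c)
    # Recompute with all operands rotated by r bits, then derotate the result;
    # a transient fault in either pass breaks the equality check.
    redundant = rotl(chi_like(rotl(a, r), rotl(b, r), rotl(c, r)), 64 - r)
    if primary != redundant:
        raise RuntimeError("transient fault detected")
    return primary
```

The check relies on rotation equivariance: for purely bitwise steps, rotating the operands rotates the result by the same amount, so the derotated recomputation must match the primary result.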
The Internet of Things (IoT) is extending the Internet into our physical world and making it present everywhere. This evolution also raises challenges in issues such as privacy and security. For that reason, this work focuses on the integration and lightweight adaptation of existing authentication protocols that can also offer authorization and access control functionalities. In particular, it focuses on the Extensible Authentication Protocol (EAP), a widely used protocol for access control in local area networks, both wireless (802.11) and wired (802.3). This work presents an integration of EAP frames into IEEE 802.15.4 frames, demonstrating that the EAP protocol and some of its mechanisms are feasible to apply in constrained devices, such as those populating IoT networks.
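A minimal sketch of the kind of adaptation involved: IEEE 802.15.4 limits the PHY payload to 127 bytes, so an EAP packet that exceeds the usable frame payload must be fragmented and reassembled. The fragment header and payload size below are illustrative assumptions, not the paper's actual encapsulation format:

```python
# Hypothetical usable bytes per 802.15.4 frame after MAC/security headers;
# the 127-byte PHY limit leaves far less for upper-layer payloads.
FRAGMENT_PAYLOAD = 80

def fragment_eap(eap_packet: bytes, size: int = FRAGMENT_PAYLOAD):
    """Split an EAP packet into 802.15.4-sized fragments, each prefixed with
    a 1-byte fragment index and a 1-byte total-count header."""
    chunks = [eap_packet[i:i + size] for i in range(0, len(eap_packet), size)]
    total = len(chunks)
    return [bytes([idx, total]) + chunk for idx, chunk in enumerate(chunks)]

def reassemble(fragments):
    """Reorder fragments by index and strip the 2-byte headers."""
    ordered = sorted(fragments, key=lambda f: f[0])
    return b"".join(f[2:] for f in ordered)
```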
Information fusion deals with the integration and merging of data and information from multiple (heterogeneous) sources. In many cases, the information that needs to be fused has a security classification. The result of the fusion process is then, by necessity, restricted with the strictest information security classification of the inputs. This has severe drawbacks and limits the possible dissemination of the fusion results. It leads to decreased situational awareness: the organization holds information that would enable a better situation picture, but since parts of the information are restricted, it is not possible to distribute the most accurate situational information. In this paper, we take steps towards defining fusion and data mining processes that can be used even when not all of the underlying data can be disseminated. The method we propose could be used to produce a classifier from which all sensitive information has been removed, and for which it can be shown that an antagonist cannot, even in principle, obtain knowledge about the classified information by using the classifier or the situation picture.
The revolution in the field of technology has led to the development of cloud computing, which delivers on-demand, easy access to large shared pools of online stored data, software, and applications. It has changed the way IT resources are utilized, but at the cost of security breaches such as phishing attacks, impersonation, and lack of confidentiality and integrity. This research work therefore deals with the core problem of providing strong security to mobile consumers of the public cloud: improving users' mobility by letting them securely access data stored on the public cloud using tokens, without depending on a third party to generate them. This paper presents an approach that simplifies authenticating and authorizing mobile users by implementing a middleware-centric framework, the MiLAMob model, together with the large online data storage system HDFS. It allows consumers to access data in HDFS via mobile devices or through social networking sites, e.g., Facebook, Gmail, and Yahoo, using the OAuth 2.0 protocol. For authentication, tokens are generated using a one-time password generation technique and then encrypted using AES. By implementing flexible user-based policies and standards, the model improves the authorization process.
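The paper's exact token scheme is not specified beyond one-time password generation followed by AES encryption; as a hedged sketch of the first step, a standard HOTP-style token (RFC 4226 dynamic truncation) can be generated with the Python standard library (the AES step would require a crypto library and is omitted here):

```python
import hashlib
import hmac
import struct

def one_time_token(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style one-time password (RFC 4226 dynamic truncation)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and client share the secret and counter; an attacker replaying a captured token fails because the counter has advanced.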
By identifying memory pages that external I/O operations have modified, a proposed scheme blocks malicious injected code activation, accurately distinguishing an attack from legitimate code injection with negligible performance impact and no changes to the user application.
We propose that, to address the growing problems with complexity and data volumes in HPC security, we need to refactor how we look at data by creating tools that not only select data, but analyze and represent it in a manner well suited to intuitive analysis. We propose a set of rules describing what this means, and provide a number of production-quality tools that represent our current best effort at implementing these ideas.
In the United States, the number of Phasor Measurement Units (PMUs) will increase from 166 networked devices in 2010 to 1043 in 2014. According to the Department of Energy, they are being installed in order to “evaluate and visualize reliability margin (which describes how close the system is to the edge of its stability boundary).” However, there is still much debate in academia and industry about the usefulness of phase angles as unambiguous predictors of dynamic stability. In this paper, using four years of actual data from the Hydro-Québec EMS, it is shown that phase angles enable satisfactory predictions of power transfer and dynamic security margins across a critical interface using random forest models, with both explanation level and R-squared accuracy exceeding 99%. A generalized linear model (GLM) is then implemented to predict phase angles on day-ahead to hour-ahead time frames, using historical phase angle values and load forecasts. Combining GLM-based angle forecasts with the random forest mapping of phase angles to power transfers results in a new data-driven approach for dynamic security monitoring.
Modern power systems rely heavily on the associated cyber network, and cyber attacks against the control network may cause undesired consequences such as load shedding, equipment damage, and so forth. Attacker behavior can be random, so it is crucial to develop novel methods to evaluate the adequacy of the power system under probabilistic cyber attacks. In this study, the external and internal cyber structures of the substation are introduced, and possible attack paths against the breakers are analyzed. The attack resources and vulnerability factors of the cyber network are discussed, considering their impact on the success probability of a cyber attack. A procedure integrating the reliability of physical components and the impact of cyber attacks against breakers is proposed, considering the behaviors of both the physical devices and the attackers. Simulations are conducted on the IEEE RTS79 system. The impact of attack resources and the number of attack attempts is analyzed for attackers from different threat groups. It is concluded that implementing effective cyber security measures is crucial for cyber-physical power grids.
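The adequacy evaluation under probabilistic attacks can be sketched with a simple Monte Carlo estimate; the independence assumption and parameters below are illustrative, not the study's model:

```python
import random

def simulate_attack_success(p_success: float, attempts: int,
                            trials: int = 100_000, seed: int = 1) -> float:
    """Estimate the probability that at least one of `attempts` independent
    attack attempts succeeds, via Monte Carlo simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if any(rng.random() < p_success for _ in range(attempts)):
            hits += 1
    return hits / trials
```

For independent attempts the analytic answer is 1 - (1 - p)^n; the simulation structure, however, extends naturally to correlated attack paths and breaker reliability models where no closed form exists.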
This paper presents the design and implementation of an information flow tracking framework, based on code rewriting, to prevent sensitive information leaks in browsers, combining the ideas of taint tracking and information flow analysis. Our system has two main stages. First, it abstracts the semantics of JavaScript code and converts it to a general intermediate representation based on the JavaScript abstract syntax tree. Second, the abstract intermediate representation is analyzed by a special taint engine to track tainted information flow. Our approach ensures fine-grained isolation for both the confidentiality and the integrity of information. We have implemented a proof-of-concept prototype named JSTFlow and deployed it as a browser proxy that rewrites web applications at runtime. The experimental results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
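The taint-propagation idea at the core of such an engine can be sketched in a few lines: tainted values carry their mark through operations, and security-sensitive sinks reject them. This toy Python model (not JSTFlow's actual intermediate representation) illustrates the rule:

```python
class Tainted:
    """String wrapper that propagates a taint mark through concatenation."""
    def __init__(self, value: str, tainted: bool = True):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        ov = other.value if isinstance(other, Tainted) else other
        ot = other.tainted if isinstance(other, Tainted) else False
        return Tainted(self.value + ov, self.tainted or ot)

    def __radd__(self, other):  # plain_str + tainted_value
        return Tainted(other + self.value, self.tainted)

def sink(data):
    """Security-sensitive sink (think eval or document.write): reject taint."""
    if isinstance(data, Tainted):
        if data.tainted:
            raise ValueError("tainted data reached a sensitive sink")
        return data.value
    return data
```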
The Trusted Platform Module (TPM) has gained popularity in computing systems as a hardware security approach. TPM provides boot-time security by verifying platform integrity, including both hardware and software. However, once the software is loaded, TPM can no longer protect its execution. In this work, we propose a dynamic TPM design that performs control flow checking to protect the program from runtime attacks. The control flow checker is integrated at the commit stage of the processor pipeline. The program's control flow is verified to defend against attacks such as stack smashing via buffer overflow and code reuse. We implement the proposed dynamic TPM design on an FPGA to achieve high performance, low cost, and the flexibility of easy functionality upgrades. In our design, neither the source code nor the Instruction Set Architecture (ISA) needs to be changed. Benchmark simulations demonstrate less than 1% performance penalty on the processor and effective software protection from the attacks.
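Conceptually, the checker validates each committed branch against a precomputed control-flow graph; the addresses and edge table below are hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical control-flow graph: valid (source -> {targets}) branch edges,
# extracted offline from the program binary.
VALID_EDGES = {
    0x400010: {0x400020, 0x400040},
    0x400020: {0x400030},
    0x400040: {0x400030},
}

def check_trace(trace):
    """Verify each committed branch (src, dst) against the CFG; a hardware
    checker at the commit stage would raise on the first violation, as a
    code-reuse attack redirects control along an edge not in the CFG."""
    for src, dst in trace:
        if dst not in VALID_EDGES.get(src, set()):
            raise RuntimeError(f"control-flow violation: {src:#x} -> {dst:#x}")
    return True
```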
Security threats are irregular, sometimes very sophisticated, and difficult to measure in an economic sense. Much published data about them comes from either anecdotes or surveys and is often either not quantified or not quantified in a way that's comparable across organizations. It's hard even to separate the increase in actual danger from year to year from the increase in the perception of danger from year to year. Staffing to meet these threats is still more a matter of judgment than science, and in particular, optimizing staff allocation will likely leave your organization vulnerable at the worst times.
Being the most important critical infrastructure among Cyber-Physical Systems (CPSs), a smart grid exhibits the complicated nature of a large-scale, distributed, and dynamic environment. A taxonomy of attacks is an effective tool for systematically classifying attacks, and it has been placed as a top research topic in CPS by a National Science Foundation (NSF) Workshop. Most existing taxonomies of attacks in CPS are inadequate in addressing the tight coupling of the cyber-physical process and/or lack systematic construction. This paper introduces a taxonomy of attacks on agent-based smart grids as an effective tool to provide a structured framework. The proposed idea of introducing the structure of space-time and information flow direction, security features, and cyber-physical causality is innovative, and it establishes a taxonomy design mechanism that can systematically construct a taxonomy of the cyber attacks that could impact the normal operation of agent-based smart grids. Based on the cyber-physical relationships revealed in the taxonomy, a concrete physical-process-based cyber attack detection scheme is proposed. A numerical example is provided to validate the proposed detection scheme.
The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
Aside from offering massive advantages in safety and convenience on the road, Vehicular Ad Hoc Networks (VANETs) introduce security risks to their users. Proposals of new security concepts to counter these risks are challenging to verify because of missing real-world implementations of VANETs. To fill this gap, we introduce VANETsim, an event-driven simulation platform specifically designed to investigate application-level privacy and security implications in vehicular communications. VANETsim focuses on realistic vehicular movement on real road networks and communication between the moving nodes. A powerful graphical user interface and an experimentation environment support the user in setting up and carrying out experiments.
Network traffic is a rich source of information for security monitoring. However, the increasing volume of data to treat raises issues, rendering holistic analysis of network traffic difficult. In this paper, we propose a solution to cope with the tremendous amount of data to analyse for security monitoring purposes. We introduce an architecture dedicated to security monitoring of local enterprise networks. The application domain of such a system is mainly network intrusion detection and prevention, but it can be used as well for forensic analysis. This architecture integrates two systems, one dedicated to scalable distributed data storage and management and the other dedicated to data exploitation. DNS data, NetFlow records, HTTP traffic, and honeypot data are mined and correlated in a distributed system that leverages state-of-the-art big data solutions. Data correlation schemes are proposed and their performance is evaluated against several well-known big data frameworks, including Hadoop and Spark.
The Basic Input Output System (BIOS) is the most important component of a computer system by virtue of its role: it holds the code that is executed at startup. It is considered the trusted computing base, and its integrity is extremely important for the smooth functioning of the system. However, the BIOS of modern computer systems (servers, laptops, desktops, network devices, and other embedded systems) can easily be upgraded using a flash or capsule mechanism, which can introduce new vulnerabilities through malicious code, accidental incidents, or deliberate attack. The recent attack on the Iranian Nuclear Power Plant (Stuxnet) [1:2] is an example of an advanced persistent attack. This attack vector adds a new dimension to the information security (IS) spectrum, which needs to be guarded by a holistic approach employed at the enterprise level. Malicious BIOS upgrades can also cause denial of service, theft of information, or the addition of new backdoors that attackers can exploit to cause business loss, passive eavesdropping, or total destruction of the system without the user's knowledge. To address this challenge, a capability for verifying BIOS integrity needs to be developed, and due diligence must be observed for proactive resolution of the issue. This paper explains BIOS integrity threats and presents a prevention strategy for effective and proactive resolution.
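A minimal sketch of one building block of such a verification capability: measuring the firmware image with a cryptographic hash and comparing it against a known-good (golden) digest before an upgrade is accepted. This is illustrative only; a real implementation anchors the golden value in protected hardware rather than software:

```python
import hashlib
import hmac

def measure(image: bytes) -> str:
    """Measure the firmware image (SHA-256 digest, hex-encoded)."""
    return hashlib.sha256(image).hexdigest()

def verify_bios(image: bytes, golden_digest: str) -> bool:
    """Accept the image only if it matches the known-good measurement;
    constant-time comparison avoids leaking digest prefixes."""
    return hmac.compare_digest(measure(image), golden_digest)
```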
Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with the intention to attack and cripple the IC, similar to viruses infecting the human body. Previous efforts have focused essentially on systems being compromised using HTTs and on the effectiveness of physical parameters, including power consumption, timing variation, and utilization, for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), which uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC is tagged as malicious. As opposed to existing efforts, this work investigates both a system model from a designer's perspective, aimed at increasing the security of the device, and an adversary model from an attacker's perspective, exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans realize a variety of threats ranging from sensitive information leakage and denial of service to defeating the Root of Trust (RoT). Security analysis of the implemented Trojans showed that existing detection techniques based on a single physical characteristic, such as power consumption, timing variation, or utilization alone, do not necessarily capture the existence of HTTs: at most 57% of the designed HTTs were detected. In contrast, 86% of the implemented Trojans were detected with the HDM. We further carry out analytical studies to determine the optimal detection threshold that minimizes the sum of the false alarm and missed detection probabilities.
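The HDM computation can be sketched as follows; the parameter names, normalization against a golden IC, and weights here are illustrative, not the paper's calibrated values:

```python
def hdm(params: dict, weights: dict) -> float:
    """Weighted combination of normalized physical parameters.
    params maps a name (e.g. 'power') to (observed, golden_baseline);
    weights are illustrative, not the paper's optimized values."""
    total = 0.0
    for name, (observed, baseline) in params.items():
        normalized = observed / baseline  # >1 means deviation from golden IC
        total += weights[name] * normalized
    return total

def is_malicious(params: dict, weights: dict, threshold: float) -> bool:
    """Tag the IC as malicious if the monitored HDM exceeds the threshold."""
    return hdm(params, weights) > threshold
```

Because the metric aggregates several parameters, a Trojan that perturbs each one only slightly (below any single-parameter threshold) can still push the combined score over the detection threshold.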
A smart grid (SG) consists of many subsystems and networks, all working together as a system of systems, many of which are vulnerable and can be attacked remotely. Therefore, security has been identified as one of the most challenging topics in SG development, and designing a mutual authentication scheme and a key management protocol is the first important step. This paper proposes an efficient scheme that mutually authenticates a smart meter of a home area network and an authentication server in the SG by utilizing an initial password, decreasing the number of steps in the Secure Remote Password protocol from five to three and the number of exchanged packets from four to three. Furthermore, we propose an efficient key management protocol based on our enhanced identity-based cryptography for secure SG communications using the public key infrastructure. Our proposed mechanisms are capable of preventing various attacks while reducing the management overhead. The improved efficiency for key management is realized by periodically refreshing all public/private key pairs, as well as any multicast keys, in all the nodes using only one newly generated function broadcast by the key generator entity. Security and performance analyses are presented to demonstrate these desirable attributes.
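The key-refresh idea, one broadcast value from which every node locally derives its new keys, can be sketched with a hash-based stand-in for the paper's identity-based construction:

```python
import hashlib

def refresh_keys(node_keys: dict, broadcast: bytes) -> dict:
    """Every node derives its next key from its current key plus the single
    value broadcast by the key generator entity, so one message refreshes
    all keys (hash-based stand-in, not the paper's IBC scheme)."""
    return {node: hashlib.sha256(key + broadcast).digest()
            for node, key in node_keys.items()}
```

The overhead win is that the key generator sends one broadcast instead of a per-node rekeying exchange; each node's new key still depends on its old secret, so eavesdroppers learning the broadcast alone gain nothing.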
For the efficient deployment of sensor nodes required in many logistics applications, it is necessary to build security mechanisms for secure wireless communication. End-to-end security plays a crucial role in the communication in these networks, providing confidentiality, authentication, and, above all, protection from many attacks at a high level. In this paper, we propose a lightweight key exchange protocol, WSKE (Wireless Sensor Key Exchange), for IP-based wireless sensor networks. The protocol adapts IKEv2 (Internet Key Exchange version 2) mechanisms of IPsec to 6LoWPAN networks. In order to check its security properties, we have used a formal verification tool called AVISPA.
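At the core of IKEv2, and hence of any lightweight adaptation of it, is a Diffie-Hellman exchange; the toy sketch below uses a deliberately small (insecure) group purely to illustrate the key agreement, not WSKE's actual parameters:

```python
# Toy Diffie-Hellman parameters: a small prime for illustration only.
# Real IKEv2 groups use primes of 2048 bits or more.
P = 0xFFFFFFFB  # 4294967291, the largest prime below 2**32
G = 5

def dh_public(private: int) -> int:
    """Derive the public value g^x mod p from a private exponent."""
    return pow(G, private, P)

def dh_shared(private: int, peer_public: int) -> int:
    """Derive the shared secret (peer_public)^x mod p."""
    return pow(peer_public, private, P)
```

Both parties arrive at the same secret because (g^a)^b = (g^b)^a mod p; on constrained 802.15.4-class nodes the cost of these exponentiations is exactly what motivates lightweight adaptations.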
Establishing trust relationships between network participants by having them prove their operating system's integrity via a Trusted Platform Module (TPM) provides interesting approaches for securing local networks at a higher level. In the introduced approach, operating on OSI layer 2, attacks carried out by already authenticated, participating nodes (insider threats) can be detected and prevented. Forbidden activities and manipulations in hardware and software, such as executing unknown binaries, loading additional kernel modules, or even inserting unauthorized USB devices, are detected and result in an autonomous reaction by each network participant. The provided trust establishment and authentication protocol operates independently of upper protocol layers and is optimized for resource-constrained machines. Well-known concepts of backbone architectures can maintain the chain of trust between different kinds of network types. Each endpoint, forwarding, and processing unit monitors the internal network independently and reports misbehaviours autonomously to a central instance inside or outside of the trusted network.
We survey the state of the art on the Internet of Things (IoT) from a wireless communications point of view, as a result of the European FP7 project BUTLER, which focuses on pervasiveness, context-awareness, and security for the IoT. In particular, we describe the efforts to develop so-called (wireless) enabling technologies, aimed at circumventing the many challenges involved in extending the current set of domains (“verticals”) of IoT applications towards a “horizontal” (i.e., integrated) vision of the IoT. We start by illustrating current research efforts in machine-to-machine (M2M) communications, which are mainly focused on vertical domains, and we discuss some of them in detail, then depict the horizontal vision necessary for the future intelligent daily routine (“Smart Life”). We then describe the technical features of the most relevant heterogeneous communications technologies on which the IoT relies, in light of the ongoing M2M service layer standardization. Finally, we identify and present the key aspects, within three major cross-vertical categories, under which M2M technologies can function as enablers for the horizontal vision of the IoT.
Multiple Security Domains Nondeducibility (MSDND) yields results even when an attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it can analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, but MSDND can also point out attacks designed to be missed by other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber-physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system: modifications that break MSDND leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.



