Bibliography
Cross-Site Scripting (XSS) is a common attack technique that lets attackers inject code into the output of a web application; the output is delivered to the visitor's web browser, where the injected code executes automatically and can steal sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented, most commonly filters that sanitize malicious input. However, many of these filters do not prevent newly designed, sophisticated attacks such as injection at multiple points or injection into scripts. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting web applications vulnerable to the sophisticated attacks mentioned above. Results show that the proposed approach achieves a higher and more accurate exploit detection rate. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities; this can be considered an effective solution if integrated into the browser rather than enforced as an extension.
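As a minimal sketch of the underlying reflection test (assuming the hypothetical endpoint http://target.example/search, the parameter name q, and the third-party requests library, none of which come from the paper): inject a unique marker through an input point and flag the page if the marker's special characters come back unencoded.

```python
import requests  # third-party HTTP library (pip install requests)

# Unique token containing characters that a safe application must encode.
MARKER = 'xZq9"<probe>'

def reflects_unencoded(url: str, param: str) -> bool:
    """Return True if the raw marker is reflected without output encoding."""
    resp = requests.get(url, params={param: MARKER}, timeout=10)
    return MARKER in resp.text

# Hypothetical target; a real scan would iterate over all input points.
print(reflects_unencoded("http://target.example/search", "q"))
```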
The development of data communications has made the exchange of information via mobile devices easier, and security in that exchange is very important. One weakness of steganography is the limited capacity of data that can be embedded; with compression, the size of the data is reduced. This paper designs an Android application that implements LSB steganography and TEA cryptography to secure text messages, whose size is reduced by lossless compression using the LZW method. The advantage of this method is that it provides double security and allows more message data to be embedded, so it is expected to be a good way to exchange information. The system performs compression with an average ratio of 67.42%, the modified TEA algorithm yields an average avalanche effect of 53.8%, the average PSNR of the stego images is 70.44 dB, and the average MOS value is 4.8.
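For illustration, a minimal Python sketch of the LSB-embedding step alone (the paper's full pipeline of LZW compression and modified TEA encryption on Android is omitted, and the 4-byte length header is an assumption of this sketch):

```python
from PIL import Image  # pip install pillow

def embed_lsb(cover_path: str, payload: bytes, stego_path: str) -> None:
    """Hide payload bits in the least significant bit of each RGB channel."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    # Prefix the payload with a 4-byte big-endian length header (assumed layout).
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels) * 3:
        raise ValueError("payload too large for cover image")
    out, k = [], 0
    for r, g, b in pixels:
        channels = [r, g, b]
        for c in range(3):
            if k < len(bits):
                channels[c] = (channels[c] & ~1) | bits[k]
                k += 1
        out.append(tuple(channels))
    stego = Image.new("RGB", img.size)
    stego.putdata(out)
    stego.save(stego_path, "PNG")  # lossless format preserves the LSBs
```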
This paper proposes MIMO cross-layer precoding for secure communications, with the precoding pattern controlled by higher-layer cryptography. In contrast to physical-layer security systems, the proposed scheme can enhance security in adverse situations that physical-layer security can hardly deal with. Two typical situations are considered: one in which the attackers have ideal CSI, and another in which the eavesdropper's channel is highly correlated with the legitimate channel. Our scheme integrates upper-layer and physical-layer security to guarantee security in a real communication system. Extensive theoretical analysis and simulations are conducted to demonstrate its effectiveness. The proposed method is also feasible in many other communication scenarios.
Security companies have recently realised that mining massive amounts of security data can help generate actionable intelligence and improve their understanding of Internet attacks. In particular, attack attribution and situational understanding are considered critical aspects to effectively deal with emerging, increasingly sophisticated Internet attacks. This requires highly scalable analysis tools to help analysts classify, correlate and prioritise security events, depending on their likely impact and threat level. However, this security data mining process typically involves a considerable number of features interacting in a non-obvious way, which makes it inherently complex. To deal with this challenge, we introduce MR-TRIAGE, a set of distributed algorithms built on MapReduce that can perform scalable multi-criteria data clustering on large security data sets and identify complex relationships hidden in massive datasets. The MR-TRIAGE workflow is made of a scalable data summarisation step, followed by scalable graph clustering algorithms in which we integrate multi-criteria evaluation techniques. The theoretical computational complexity of the proposed parallel algorithms is discussed and analysed. The experimental results demonstrate that the algorithms can scale well and efficiently process large security datasets on commodity hardware. Our approach can effectively cluster any type of security events (e.g., spam emails, spear-phishing attacks) that share at least some commonalities among a number of predefined features.
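As a toy, single-machine illustration of the MapReduce pattern involved (not the actual MR-TRIAGE algorithms), the following Python sketch maps each security event to keys derived from its features and reduces events colliding on a key into candidate clusters; the events and feature names are invented:

```python
from collections import defaultdict

events = [
    {"id": 1, "src": "10.0.0.1", "subject": "invoice", "url": "evil.example"},
    {"id": 2, "src": "10.0.0.1", "subject": "invoice", "url": "other.example"},
    {"id": 3, "src": "10.9.9.9", "subject": "resume", "url": "evil.example"},
]

def map_phase(event):
    # Emit one key per feature so events sharing any feature meet in reduce.
    for feat in ("src", "subject", "url"):
        yield (feat, event[feat]), event["id"]

def reduce_phase(pairs):
    groups = defaultdict(list)
    for key, eid in pairs:
        groups[key].append(eid)
    # Keep only keys shared by more than one event: candidate clusters.
    return {k: v for k, v in groups.items() if len(v) > 1}

pairs = [kv for e in events for kv in map_phase(e)]
print(reduce_phase(pairs))
```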
The National Cyber Range (NCR) is an innovative Department of Defense (DoD) resource originally established by the Defense Advanced Research Projects Agency (DARPA) and now under the purview of the Test Resource Management Center (TRMC). It provides a unique environment for cyber security testing throughout the program development life cycle, using unique methods to assess resiliency to advanced cyberspace security threats. This paper describes what a cyber security range is, how it might be employed, and the advantages a program manager (PM) can gain in applying the results of range events. Creating realism in a test environment isolated from the operational environment is a special challenge in cyberspace. Representing the scale and diversity of the complex DoD communications networks at a fidelity detailed enough to realistically portray current and anticipated attack strategies (e.g., malware, distributed denial-of-service attacks, cross-site scripting) is complex. The NCR addresses this challenge by representing an Internet-like environment with a multitude of virtual machines and physical hardware, augmented with traffic emulation, port/protocol/service vulnerability scanning, and data capture tools. Coupled with a structured test methodology, the PM can efficiently and effectively engage with the Range to gain cyberspace resiliency insights. The NCR capability, when applied, allows the DoD to incorporate cyber security early and avoid high-cost integration at the end of the development life cycle. This paper provides an overview of the resources of the NCR, which may be especially helpful for DoD PMs seeking the best approach for testing the cyberspace resiliency of their systems under development.
In this paper we introduce PADAVAN, a novel anonymous data collection scheme for Vehicular Ad Hoc Networks (VANETs). PADAVAN allows users to submit data anonymously to a data consumer while preventing adversaries from submitting large amounts of bogus data. PADAVAN comprises an n-times anonymous authentication scheme, mix cascades and various principles for protecting the privacy of the submitted data itself. Furthermore, we evaluate the effectiveness of limiting an adversary to a fixed number of messages.
In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality, but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in the plugins of the web application. The goal is twofold: 1) to study the effectiveness of static analysis on the detection of web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins in the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins that are currently deployed worldwide have dangerous Cross Site Scripting and SQL Injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.
Computing systems today have a large number of security configuration settings that enforce security properties. However, vulnerabilities and incorrect configuration increase the potential for attacks. Provable verification and simulation tools have been introduced to eliminate configuration conflicts and weaknesses, which can increase system robustness against attacks. Most of these tools require special knowledge in formal methods and precise specification for requirements in special languages, in addition to their excessive need for computing resources. Video games have been utilized by researchers to make educational software more attractive and engaging. Publishing these games for crowdsourcing can also stimulate competition between players and increase the game educational value. In this paper we introduce a game interface, called NetMaze, that represents the network configuration verification problem as a video game and allows for attack analysis. We aim to make the security analysis and hardening usable and accurately achievable, using the power of video games and the wisdom of crowdsourcing. Players can easily discover weaknesses in network configuration and investigate new attack scenarios. In addition, the gameplay scenarios can also be used to analyze and learn attack attribution considering human factors. In this paper, we present a provable mapping from the network configuration to 3D game objects.
Data is one of the most valuable assets for organizations. It can help users or organizations meet their diverse goals, ranging from scientific advances to business intelligence. Due to the tremendous growth of data, the notion of big data has certainly gained momentum in recent years. Cloud computing is a key technology for storing, managing and analyzing big data. However, such large, complex, and growing data, typically collected from various data sources such as sensors and social media, can often contain personally identifiable information (PII), and thus the organizations collecting the big data may want to protect their outsourced data from the cloud. In this paper, we survey our research towards the development of efficient and effective privacy-enhancing (PE) techniques for management and analysis of big data in cloud computing. We propose our initial approaches to address two important PE applications: (i) privacy-preserving data management and (ii) privacy-preserving data analysis under the cloud environment. Additionally, we point out research issues that still need to be addressed to develop comprehensive solutions to the problem of effective and efficient privacy-preserving use of data.
During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
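As a small illustration of deriving one technical security metric from logs (the log format, regex and metric here are assumptions, not the paper's framework), the following Python sketch counts distinct source IPs per IDS alarm signature:

```python
import re
from collections import defaultdict

# Assumed syslog-style alarm line containing SIG=<signature> and SRC=<ip>.
LINE = re.compile(r"SIG=(?P<sig>\S+)\s+SRC=(?P<src>\d+\.\d+\.\d+\.\d+)")

def alarm_metric(lines):
    """Map each alarm signature to the number of distinct source IPs."""
    srcs = defaultdict(set)
    for line in lines:
        m = LINE.search(line)
        if m:
            srcs[m.group("sig")].add(m.group("src"))
    return {sig: len(ips) for sig, ips in srcs.items()}

sample = ["Jan 1 ids: SIG=ET-SCAN SRC=1.2.3.4",
          "Jan 1 ids: SIG=ET-SCAN SRC=1.2.3.5"]
print(alarm_metric(sample))  # {'ET-SCAN': 2}
```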
Autonomic networks and services are exposed to a large variety of security risks. The vulnerability management process plays a crucial role for ensuring their safe configurations and preventing security attacks. We focus in this survey on the assessment of vulnerabilities in autonomic environments. In particular, we analyze current methods and techniques contributing to the discovery, the description and the detection of these vulnerabilities. We also point out important challenges that should be faced in order to fully integrate this process into the autonomic management plane.
This paper presents a survey of cyber security issues in current industrial automation and control systems, which also includes observations and insights collected and distilled through a series of discussions by some of the major Japanese experts in this field. It also tries to provide a conceptual framework for those issues and a big picture of some ongoing projects that try to enhance it.
In this paper we propose a methodology and a prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities in a web application and attacking them automatically can be used to support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true to life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. In addition to the generic methodology, the paper describes the implementation of the Vulnerability & Attack Injector Tool (VAIT) that allows the automation of the entire process. We used this tool to run a set of experiments that demonstrate the feasibility and the effectiveness of the proposed methodology. The experiments include the evaluation of coverage and false positives of an intrusion detection system for SQL Injection attacks and the assessment of the effectiveness of two top commercial web application vulnerability scanners. Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways for their improvement.
This paper presents FlowNAC, a Flow-based Network Access Control solution that grants users the right to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be independently requested, and multiple services can be authorized simultaneously. Building this proposal on SDN principles has several benefits: SDN adds the appropriate granularity (fine- or coarse-grained) depending on the target scenario, and the flexibility to dynamically identify services at the data plane as sets of flows and enforce the adequate policy. FlowNAC uses a modified version of IEEE 802.1X (a novel EAPoL-in-EAPoL encapsulation) to authenticate users (without the need for a captive portal), and service-level access control based on proactive deployment of flows (instead of reactive). An explicit service request avoids misidentifying the target service, as could happen when analyzing the traffic (e.g., private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results.
File encryption is an effective way for an enterprise to prevent its data from being lost. However, data may still be deliberately or inadvertently leaked by insiders or customers, and when sensitive data are leaked, the result is often huge monetary damage and loss of credibility. In this paper, we propose a novel group file encryption/decryption method, named the Group File Encryption Method using Dynamic System Environment Key (GEMS for short), which provides users with automatic encryption, authentication, authorization, and auditing security schemes by utilizing a group key and a system environment key. In the GEMS, the important parameters are hidden and stored in different devices to prevent them from being easily cracked. Besides, it resists known-key and eavesdropping attacks and achieves a very high security level, which is practically useful for securing an enterprise's or a government's private data.
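To make the two-key idea concrete, here is a hypothetical Python sketch of deriving a per-file key from a group key combined with a system-environment key; the environment fields, function names and the HMAC-based derivation are illustrative assumptions, not the paper's exact construction:

```python
import hashlib, hmac, platform, uuid

def environment_key() -> bytes:
    """Fingerprint of the local system environment (illustrative fields)."""
    fields = [platform.node(), platform.system(), hex(uuid.getnode())]
    return hashlib.sha256("|".join(fields).encode()).digest()

def derive_file_key(group_key: bytes, file_id: bytes) -> bytes:
    """HMAC-based KDF mixing group key, environment key and file identifier,
    so the ciphertext only opens in the expected environment."""
    prk = hmac.new(environment_key(), group_key, hashlib.sha256).digest()
    return hmac.new(prk, file_id, hashlib.sha256).digest()

key = derive_file_key(b"shared-group-key", b"report.docx")
print(key.hex())
```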
In the last decade, demand for Internet access in heterogeneous environments has kept growing, principally on mobile platforms such as buses, airplanes and trains. Consequently, several extensions and schemes have been introduced to achieve seamless handoff of mobile networks from one subnet to another. Even with these enhancements, the problem of maintaining security and availability has not yet been resolved; in particular, there is no authentication mechanism between network entities to avoid vulnerability to attacks. To eliminate the threats on the interface between the mobile access gateway (MAG) and the mobile router (MR) in the improving fast PMIPv6-based network mobility (IFP-NEMO) protocol, we propose a lightweight mutual authentication mechanism in an improving fast PMIPv6-based network mobility scheme (LMAIFPNEMO). This scheme uses authentication, authorization and accounting (AAA) servers to enhance the security of the IFP-NEMO protocol, which allows the integration of improved fast proxy mobile IPv6 (PMIPv6) in network mobility (NEMO). We use only symmetric cryptography, generated nonces and hash operation primitives to ensure a secure authentication procedure. We then analyze the security aspects of the proposed scheme and evaluate it using the automated validation of internet security protocols and applications (AVISPA) software, which shows that the authentication goals are achieved.
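A generic Python sketch of the kind of symmetric challenge-response primitives the scheme relies on (nonces plus a keyed hash) follows; it is not the exact LMAIFPNEMO message flow, and the identities and key handling are assumptions:

```python
import hashlib, hmac, os

PSK = os.urandom(32)  # pre-shared symmetric key between MR and MAG (assumed)

def prove(key: bytes, nonce: bytes, identity: bytes) -> bytes:
    """Keyed-hash response binding the challenger's nonce to an identity."""
    return hmac.new(key, nonce + identity, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, identity: bytes, proof: bytes) -> bool:
    return hmac.compare_digest(proof, prove(key, nonce, identity))

# MAG -> MR: challenge; MR answers with a proof of key possession.
n1 = os.urandom(16)
assert verify(PSK, n1, b"MR", prove(PSK, n1, b"MR"))

# MR -> MAG: the reverse challenge completes mutual authentication.
n2 = os.urandom(16)
assert verify(PSK, n2, b"MAG", prove(PSK, n2, b"MAG"))
```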
Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. The distributed nature of the network makes centralized intrusion detection difficult, while resource constraints of the nodes and the characteristics of the wireless medium often render decentralized, node-based approaches impractical. We demonstrate that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. The key contribution of this paper is to optimize the active probing process: we introduce a general Bayesian model and design a probe selection algorithm that reduces the number of probes while maximizing the insights gathered by the AP-NIDS. We validate our model by means of testbed experimentation. We integrate it into our open-source AP-NIDS DogoIDS and run it in an indoor wireless mesh testbed utilizing the IEEE 802.11s protocol. For the example of a selective packet-dropping attack, we develop the detection states for our Bayes model and show its feasibility. We demonstrate that our approach does not need to execute the complete set of probes, yet we obtain good detection rates.
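As a rough illustration of Bayesian probe selection, the following Python sketch keeps a posterior belief that a node is misbehaving and repeatedly sends the probe with the lowest expected posterior entropy; the probe list, its true/false-positive rates and the stopping thresholds are invented for the example, not taken from DogoIDS:

```python
import math

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p*math.log2(p) + (1-p)*math.log2(1-p))

def posterior(prior, tpr, fpr, positive):
    # Bayes' rule for P(compromised | probe outcome).
    if positive:
        num, den = tpr*prior, tpr*prior + fpr*(1-prior)
    else:
        num, den = (1-tpr)*prior, (1-tpr)*prior + (1-fpr)*(1-prior)
    return num/den

def expected_entropy(prior, tpr, fpr):
    p_pos = tpr*prior + fpr*(1-prior)
    return (p_pos*entropy(posterior(prior, tpr, fpr, True))
            + (1-p_pos)*entropy(posterior(prior, tpr, fpr, False)))

# Probes as (name, true-positive rate, false-positive rate) -- assumed values.
probes = [("echo", 0.9, 0.3), ("fwd-check", 0.8, 0.1), ("stats", 0.6, 0.05)]
belief = 0.2  # prior probability the node drops packets
while 0.05 < belief < 0.95 and probes:
    name, tpr, fpr = min(probes, key=lambda p: expected_entropy(belief, p[1], p[2]))
    probes.remove((name, tpr, fpr))
    outcome = True  # stand-in for the real probe result
    belief = posterior(belief, tpr, fpr, outcome)
    print(f"sent {name}, belief now {belief:.2f}")
```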
Techniques for network security analysis have historically focused on the actions of the network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly based on their association with other known malicious nodes. This methodology is highly prevalent in the graph analytics world, however, and is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of probabilistic threat propagation on the tasks of detecting botnets and malicious web destinations.
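The following Python sketch illustrates one way such constrained propagation can work: per-edge messages carry threat outward from known-bad seeds, and the message from u to v deliberately excludes what v itself contributed to u, suppressing the cyclic self-reinforcement described above; the graph, damping factor and update rule are simplified assumptions rather than the paper's exact formulation:

```python
from collections import defaultdict

graph = {"bot1": ["c2"], "c2": ["bot1", "host9"], "host9": ["c2"]}
seeds = {"bot1": 1.0}      # known malicious nodes
alpha = 0.5                # fraction of threat passed to each neighbour
msg = defaultdict(float)   # msg[(u, v)] = threat u currently sends to v

for _ in range(20):        # iterate to (approximate) convergence
    new_msg = defaultdict(float)
    for u, nbrs in graph.items():
        for v in nbrs:
            # Incoming threat at u, excluding what v itself contributed.
            incoming = sum(msg[(w, u)] for w in graph[u] if w != v)
            new_msg[(u, v)] = min(1.0, seeds.get(u, 0.0) + alpha * incoming)
    msg = new_msg

threat = {v: min(1.0, seeds.get(v, 0.0) + sum(msg[(u, v)] for u in graph[v]))
          for v in graph}
print(threat)  # host9 picks up threat through c2, without cyclic blow-up
```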
As web-server spoofing is increasing, we investigate a novel technology termed ICmetrics, used to identify fraud in given software/hardware programs based on measurable quantities/features. ICmetrics technology is based on extracting features from digital systems' operation that may be integrated together to generate unique identifiers for each of the systems, or to create unique profiles that describe the systems' actual behavior. This paper looks at the properties of several behaviors as potential ICmetrics features for identifying Android apps; it presents several quality features which meet the ICmetrics requirements and can be used for encryption key generation. Finally, the paper identifies four Android apps and verifies the use of ICmetrics by identifying a spoofed app as a different app altogether.
In this paper we present a secure and unclonable embedded system design that can target either an FPGA or an ASIC technology. The premise of the security is that the executed machine code and the executing environment (the embedded processor) will authenticate each other at a per-instruction basis using Physical Unclonable Functions (PUFs) that are built into the processor. The PUFs ensure that the execution of the binary code may only proceed if the binary is compiled with the correct intrinsic knowledge of the PUFs, and that such intrinsic knowledge is virtually unique to each processor and therefore unclonable. We will explain how to implement and integrate the PUFs into the processor's execution environment such that each instruction is authenticated and de-obfuscated on-demand and how to transform an ordinary binary executable into PUF-aware, obfuscated binaries. We will also present a prototype system on a Xilinx Spartan6-based FPGA board.
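A toy Python sketch of the per-instruction idea follows: each instruction word is stored XOR-obfuscated with a response derived from its address, so it de-obfuscates correctly only on the device holding the secret; here a keyed hash stands in for the physical PUF circuit, and all names and word sizes are illustrative:

```python
import hashlib

DEVICE_SECRET = b"unique-silicon"  # stands in for the chip's physical uniqueness

def puf_response(address: int) -> int:
    """Simulated PUF: a 32-bit response tied to the instruction address."""
    digest = hashlib.sha256(DEVICE_SECRET + address.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def obfuscate(binary: list[int]) -> list[int]:
    """Compile-time step: XOR each instruction with its address's response."""
    return [word ^ puf_response(addr) for addr, word in enumerate(binary)]

def fetch_and_deobfuscate(image: list[int], addr: int) -> int:
    """Run-time step: only the matching device recovers the instruction."""
    return image[addr] ^ puf_response(addr)

program = [0x12345678, 0x9ABCDEF0]
image = obfuscate(program)
assert fetch_and_deobfuscate(image, 0) == program[0]
```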
We are currently living in the age of Big Data coming along with the challenge to grasp the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and inter-dependencies. Based on this, we define focal issues which serve as future research directions for the track to our vision of Next Generation Online Trust within the FORSEC project.
Cover time measures the time (or number of steps) required for a mobile agent to visit each node in a network (graph) at least once. A short cover time is important for search or foraging applications that require mobile agents to quickly inspect or monitor nodes in a network, such as providing situational awareness or security. Speed can be achieved if details about the graph are known or if the agent maintains a history of visited nodes; however, these requirements may not be feasible for agents with limited resources, they are difficult to meet in dynamic graph topologies, and they do not easily scale to large networks. This paper introduces a set-based form of heading (directional bias) that allows an agent to more efficiently explore any connected graph, static or dynamic. When deciding the next node to visit, agents are discouraged from visiting nodes that neighbor both their previous and current locations. Modifying a traditional movement method, e.g., a random walk, with this concept encourages an agent to move toward nodes that are less likely to have been previously visited, reducing cover time. Simulation results with grid, scale-free, and minimum-distance graphs demonstrate that heading can consistently reduce cover time as compared to non-heading movement techniques.
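A minimal Python sketch of the heading rule, on an assumed toy graph: when picking the next node, the agent prefers neighbors of its current location that are not also adjacent to its previous location, biasing the walk "forward".

```python
import random

def heading_walk(adj, start, steps):
    """Random walk discouraged from nodes adjacent to both prev and cur."""
    visited = {start}
    prev, cur = None, start
    for _ in range(steps):
        nbrs = adj[cur]
        if prev is not None:
            preferred = [n for n in nbrs if n not in adj[prev] and n != prev]
        else:
            preferred = list(nbrs)
        prev, cur = cur, random.choice(preferred or nbrs)  # fall back if stuck
        visited.add(cur)
        if len(visited) == len(adj):  # every node covered
            break
    return visited

# Tiny example graph: a 4-cycle 1-2-3-4-1.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
print(heading_walk(adj, 1, 50))
```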
One of the important directions of research in situational awareness is the implementation of visual analytics techniques, which can be efficiently applied when working with big security data in critical operational domains. The paper considers a visual analytics technique for displaying a set of security metrics used to assess overall network security status and evaluate the efficiency of protection mechanisms. The technique can assist in solving security tasks which are important for security information and event management (SIEM) systems. The suggested approach is suitable for displaying security metrics of large networks and supports historical analysis of the data. To demonstrate and evaluate the usefulness of the proposed technique, we implemented a use case corresponding to the Olympic Games scenario.
Our vision in this paper is that agency, as the individual ability to intervene and tailor the system, is a crucial element in building trust in IoT technologies. Following up on this vision, we first address the issue of agency, namely the individual capability to make free decisions, as a relevant driver in building trusted human-IoT relations, and discuss how agency should be embedded in digital systems. We then present the main challenges posed by existing approaches to implementing this vision. Finally, we show our proposal for a model-based approach that realizes the agency concept, including a prototype implementation.