Biblio
Vulnerability Detection Tools (VDTs) have been researched and developed to prevent security problems. Such tools identify in advance the vulnerabilities that exist on a server, and by using them administrators can protect their servers from attacks. Different tools, however, produce different results because their detection methods differ. For this reason, it is recommended to gather results from many tools rather than from a single one, but installing all of the tools incurs significant overhead. In this paper, we propose a novel vulnerability detection mechanism using an Open API and use OpenVAS for actual testing.
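As a rough illustration of the idea of collecting results through an open API instead of installing every scanner locally, the sketch below queries hypothetical REST endpoints exposed by remote scanner services and merges their findings. The endpoint URLs, parameters, and response fields are assumptions for illustration only, not the actual OpenVAS/GMP interface or the paper's mechanism.

```python
# Minimal sketch: aggregate vulnerability findings from remote scanner services
# exposed through a (hypothetical) open REST API, instead of installing each tool.
import requests

# Hypothetical endpoints; a real deployment would expose OpenVAS via GMP or a gateway.
SCANNER_ENDPOINTS = {
    "openvas": "https://scanner1.example.org/api/results",
    "other-vdt": "https://scanner2.example.org/api/results",
}

def fetch_findings(endpoint: str, target_host: str) -> list[dict]:
    """Ask one remote scanner for its findings on the target host."""
    resp = requests.get(endpoint, params={"host": target_host}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumed to be a list of {"cve": ..., "severity": ...}

def aggregate(target_host: str) -> dict[str, dict]:
    """Merge findings from all scanners, keeping the highest reported severity per CVE."""
    merged: dict[str, dict] = {}
    for name, endpoint in SCANNER_ENDPOINTS.items():
        for finding in fetch_findings(endpoint, target_host):
            cve = finding["cve"]
            if cve not in merged or finding["severity"] > merged[cve]["severity"]:
                merged[cve] = {**finding, "reported_by": name}
    return merged

if __name__ == "__main__":
    for cve, info in aggregate("192.0.2.10").items():
        print(cve, info["severity"], info["reported_by"])
```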
The power grid is a prime target of cyber criminals and warrants special attention as it forms the backbone of major infrastructures that drive the nation's defense and economy. Developing security measures for the power grid is challenging since it is physically dispersed and interacts dynamically with associated cyber infrastructures that control its operation. This paper presents a mathematical framework to investigate the stability of two-area systems under data attacks on the Automatic Generation Control (AGC) system. Analytical and simulation results are presented to identify attack levels that could drive the AGC system toward instability.
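As a minimal sketch of the quantity AGC regulates and of how measurement attacks enter it (standard textbook notation, not the paper's exact two-area model), the area control error combines tie-line power and frequency deviations, and a scaling/bias attack on the telemetered measurements perturbs the control signal:

```latex
\mathrm{ACE}_i = \Delta P_{\mathrm{tie},i} + B_i \,\Delta f_i
\qquad\longrightarrow\qquad
\widetilde{\mathrm{ACE}}_i = \alpha_i \,\Delta P_{\mathrm{tie},i} + B_i\!\left(\Delta f_i + \delta_i\right)
```

Here $B_i$ is the frequency bias of area $i$, while $\alpha_i$ and $\delta_i$ are illustrative scaling and bias attack parameters; sufficiently aggressive choices push the closed-loop two-area dynamics outside their stability region, which is the kind of attack level the paper's analysis identifies.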
This paper proposes a practical time-phased model to analyze the vulnerability of power systems over a time horizon in which the scheduled maintenance of network facilities is considered. The model is an efficient tool that system operators can use to assess how vulnerable their systems become given a set of scheduled facility outages. The final model is presented as a single-level Mixed-Integer Linear Programming (MILP) problem solvable with commercially available software. Results obtained on the well-known IEEE 24-Bus Reliability Test System (RTS) demonstrate the applicability of the model and highlight the necessity of considering scheduled facility outages when assessing the vulnerability of a power system.
This paper considers physical layer security for cluster-based cooperative wireless sensor networks (WSNs), where each node is equipped with a single antenna and sensor nodes cooperate within each cluster of the network to form a virtual multi-input multi-output (MIMO) communication architecture. We propose a joint cooperative beamforming and jamming scheme to enhance the security of the WSNs, in which some sensor nodes in Alice's cluster are deployed to transmit beamforming signals to Bob while some sensor nodes in Bob's cluster are used to jam Eve with artificial noise. The optimization of the beamforming and jamming vectors to minimize total energy consumption while satisfying the quality-of-service (QoS) constraints is an NP-hard problem. Fortunately, through reformulation, the problem is proved to be a quadratically constrained quadratic problem (QCQP), which can be solved with the Solving Constraint Integer Programs (SCIP) algorithm. Finally, we present simulation results for the proposed scheme.
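A minimal sketch of the optimization being described, in notation introduced here for illustration (w is the cooperative beamforming vector in Alice's cluster, z the jamming vector in Bob's cluster, h_b, h_e, g_e the channels toward Bob and Eve; the jamming is assumed designed so it does not interfere with Bob, a common simplification):

```latex
\begin{aligned}
\min_{\mathbf{w},\,\mathbf{z}}\quad & \|\mathbf{w}\|^{2} + \|\mathbf{z}\|^{2} \\
\text{s.t.}\quad & \frac{|\mathbf{h}_{b}^{H}\mathbf{w}|^{2}}{\sigma_{b}^{2}} \;\ge\; \gamma_{b}
&& \text{(QoS constraint at Bob)} \\
& \frac{|\mathbf{h}_{e}^{H}\mathbf{w}|^{2}}{|\mathbf{g}_{e}^{H}\mathbf{z}|^{2} + \sigma_{e}^{2}} \;\le\; \gamma_{e}
&& \text{(cap on Eve's SINR)}
\end{aligned}
```

Both the objective and the constraints are quadratic in (w, z), which is the quadratically constrained quadratic structure the abstract refers to.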
This paper presents an approach called vTPM (virtual Trusted Platform Module) Dynamic Trust Extension (DTE) to satisfy the requirements of frequent migrations. With DTE, a vTPM is delegated the capability of signing attestation data from the underlying pTPM (physical TPM), with a valid time token issued by an Authentication Server (AS). DTE maintains a strong association between a vTPM and its underlying pTPM, and clearly distinguishes between vTPM and pTPM because the two types of TPM differ in security strength. In DTE, the vTPM does not need to re-acquire Identity Key (IK) certificates after migration, and the pTPM can revoke trust in real time. Furthermore, DTE provides forward security. Performance measurements of its prototype show that DTE is feasible.
Clickjacking attacks are emerging threats to websites of different sizes and shapes. They are particularly used by threat agents to get more likes and/or followers in Online Social Networks (OSNs). This paper reviews clickjacking attacks and the classic solutions to tackle their various forms. Different Cross-Site Scripting attack approaches are implemented in this study to examine the attack tools and methods. Various iFrame attacks were developed to tamper with the integrity of website interactions at the application layer. By visually demonstrating attacks such as Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), users gain a better understanding of how such attacks are constructed and of the risks associated with them.
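One of the classic server-side defenses against iFrame-based clickjacking that such reviews cover is refusing to be framed at all, via response headers. The sketch below uses Flask purely as an illustrative web framework (it is not taken from the paper) and sets the X-Frame-Options header together with a Content-Security-Policy frame-ancestors directive so browsers decline to render the page inside a hostile iframe.

```python
# Minimal sketch of a classic clickjacking defense: forbid framing via response headers.
# Flask is used here only as an illustrative web framework.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Sensitive page that should never be rendered inside an attacker's iframe."

@app.after_request
def forbid_framing(response):
    # Legacy header understood by older browsers.
    response.headers["X-Frame-Options"] = "DENY"
    # Modern equivalent: disallow all framing ancestors.
    response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    return response

if __name__ == "__main__":
    app.run()
```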
Modern shared memory multiprocessors permit reordering of memory operations for performance reasons. These reorderings are often a source of subtle bugs in programs written for such architectures. Traditional approaches to verify weak memory programs often rely on interleaving semantics, which is prone to state space explosion, and thus severely limits the scalability of the analysis. In recent times, there has been a renewed interest in modelling dynamic executions of weak memory programs using partial orders. However, such an approach typically requires ad-hoc mechanisms to correctly capture the data and control-flow choices/conflicts present in real-world programs. In this work, we propose a novel, conflict-aware, composable, truly concurrent semantics for programs written using C/C++ for modern weak memory architectures. We exploit our symbolic semantics based on general event structures to build an efficient decision procedure that detects assertion violations in bounded multi-threaded programs. Using a large, representative set of benchmarks, we show that our conflict-aware semantics outperforms the state-of-the-art partial-order based approaches.
We describe the formalization of a correctness proof for a conflict detection algorithm for XACML (eXtensible Access Control Markup Language). XACML is a standardized declarative access control policy language that is increasingly used in industry. In practice it is common for rule sets to grow large, and contain unintended errors, often due to conflicting rules. A conflict occurs in a policy when one rule permits a request and another denies that same request. Such errors can lead to serious risks involving both allowing access to an unauthorized user as well as denying access to someone who needs it. Removing conflicts is thus an important aspect of debugging policies, and the use of a verified algorithm provides the highest assurance in a domain where security is important. In this paper, we focus on several complex XACML constructs, including time ranges and integer intervals, as well as ways to combine any number of functions using the boolean operators and, or, and not. The latter are the most complex, and add significant expressive power to the language. We propose an algorithm to find conflicts and then use the Coq Proof Assistant to prove the algorithm correct. We develop a library of tactics to help automate the proof.
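As a rough, unverified illustration of the kind of conflict the algorithm detects (two rules whose interval conditions overlap but whose effects differ), consider the following Python analogue over a single integer-interval attribute. This is only a simplified sketch of the idea, not the formalized algorithm proved correct in Coq.

```python
# Simplified sketch of interval-based conflict detection between access-control rules.
# A conflict: one rule permits and another denies requests that both rules match.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    effect: str   # "Permit" or "Deny"
    lo: int       # inclusive lower bound of the matched integer attribute
    hi: int       # inclusive upper bound

def intervals_overlap(a: Rule, b: Rule) -> bool:
    """True if some attribute value is matched by both rules."""
    return a.lo <= b.hi and b.lo <= a.hi

def find_conflicts(rules: list[Rule]) -> list[tuple[str, str]]:
    """Return pairs of rules with opposite effects that match a common request."""
    conflicts = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if r1.effect != r2.effect and intervals_overlap(r1, r2):
                conflicts.append((r1.name, r2.name))
    return conflicts

rules = [
    Rule("allow-business-hours", "Permit", lo=9, hi=17),
    Rule("deny-lunch-break", "Deny", lo=12, hi=13),
]
print(find_conflicts(rules))   # [('allow-business-hours', 'deny-lunch-break')]
```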
In transactional database systems, multiversion concurrency control is maintained for secure, fast, and efficient access to shared data. Effective coordination must be established between owners, users, developers, and system operators to maintain inter-cloud and intra-cloud communication. Most services and applications offered in the cloud are real-time, which requires an optimized, compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity, where intercommunication between different clusters is processed through middleware and slave intra-communication is handled by verification and identification protection. The proposed approach incorporates a resistive flow to handle high-impact systems, identifying and verifying multiple processes. Results show that the new scheme reduces the overheads on the master and slave servers because they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
Yamata-no-Orochi is an authentication and authorization infrastructure that spans multiple service domains and provides Internet services with unified authentication and authorization mechanisms. In this paper, Yamata-no-Orochi is incorporated into a video distribution system to verify its general versatility as a multi-domain authentication and authorization infrastructure for Internet services. The paper also reduces the authorization time of Yamata-no-Orochi to meet the processing-time constraints of the video distribution system. The evaluation results show that all the authentication and authorization processes work correctly and that the performance of Yamata-no-Orochi is practical for the video distribution system.
The HTTPS ecosystem, including the SSL/TLS protocol, the X.509 public-key infrastructure, and their cryptographic libraries, is the standardized foundation of Internet security. Despite 20 years of progress and extensions, however, its practical security remains controversial, as witnessed by recent efforts to improve its design and implementations, as well as recent disclosures of attacks against its deployments. The Everest project is a collaboration between Microsoft Research, INRIA, and the community at large that aims at modelling, programming, and verifying the main HTTPS components with strong machine-checked security guarantees, down to core system and cryptographic assumptions. Although HTTPS involves a relatively small amount of code, it requires efficient low-level programming and intricate proofs of functional correctness and security. To this end, we are also improving our verification tools (F*, Dafny, Lean, Z3) and developing new ones. In my talk, I will present our project, review our experience with miTLS, a verified reference implementation of TLS coded in F*, and describe current work towards verified, secure, efficient HTTPS.
Motor vehicles are widely used, quite valuable, and often targeted for theft. Preventive measures include car alarms, proximity control, and physical locks, which can be bypassed if the car is left unlocked or if the thief obtains the keys. Reactive strategies like cameras, motion detectors, human patrolling, and GPS tracking can monitor a vehicle, but may not detect car thefts in a timely manner. We propose a fast automatic driver recognition system that identifies unauthorized drivers while overcoming the drawbacks of previous approaches. We factor drivers' trips into elemental driving events, from which we extract driving preference features that cannot be exactly reproduced by a thief driving away in the stolen car. We performed a real-world evaluation using driving data collected from 31 volunteers. Experimental results show that we can identify the current driver as the owner with 97% accuracy, while preventing impersonation 91% of the time.
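A minimal sketch of the general pipeline described (features extracted per elemental driving event, then a classifier deciding owner versus non-owner). The feature names, synthetic data, and classifier choice below are illustrative assumptions, not the authors' actual system or dataset.

```python
# Illustrative pipeline: per-event driving-preference features -> owner / not-owner classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per elemental driving event, e.g. turn speed, braking
# deceleration, accelerator pattern. Real features would come from trip segmentation.
owner_events = rng.normal(loc=[0.4, 0.6, 0.5], scale=0.1, size=(500, 3))
other_events = rng.normal(loc=[0.6, 0.4, 0.7], scale=0.1, size=(500, 3))

X = np.vstack([owner_events, other_events])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = owner, 0 = other driver

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("owner-recognition accuracy:", clf.score(X_test, y_test))
```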
The Common Vulnerability Scoring System (CVSS) is the de facto standard for vulnerability severity measurement today and is crucial in the analytics driving software fortification. Required by the U.S. National Vulnerability Database, over 75,000 vulnerabilities have been scored using CVSS. We compare how the CVSS correlates with another, closely-related measure of security impact: bounties. Recent economic studies of vulnerability disclosure processes show a clear relationship between black market value and bounty payments. We analyzed the CVSS scores and bounties awarded for 703 vulnerabilities across 24 products. We found a weak (Spearman's ρ = 0.34) correlation between CVSS scores and bounties, with CVSS being more likely to underestimate bounty. We believe such a negative result is a cause for concern. We investigated why these measurements were so discordant by (a) analyzing the individual questions of CVSS with respect to bounties and (b) conducting a qualitative study to find the similarities and differences between CVSS and the publicly-available criteria for awarding bounties. Among our findings were that the bounty criteria were more explicit about code execution and privilege escalation, whereas CVSS makes no explicit mention of those. We also found that bounty valuations are evaluated solely by project maintainers, whereas CVSS has little provenance in practice.
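For readers unfamiliar with the statistic being used, the rank correlation between CVSS scores and bounty amounts can be computed as in the sketch below; the numbers are made-up placeholders, not the paper's dataset.

```python
# Sketch: rank correlation between CVSS scores and bounty payouts (placeholder data).
from scipy.stats import spearmanr

cvss_scores = [4.3, 7.5, 9.8, 5.0, 6.1, 8.8, 3.7]      # hypothetical per-vulnerability scores
bounties    = [500, 1500, 2000, 300, 5000, 1000, 250]   # hypothetical payouts in USD

rho, p_value = spearmanr(cvss_scores, bounties)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```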
A major component of modern vehicles is the infotainment system, which interfaces with drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general contents stored in an infotainment system. Based on the extracted data, analysts could then determine what stored information is relevant to law enforcement agencies and what information is non-essential when it comes to solving criminal activities relating to the vehicle itself. Overall, this would solidify the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will allow forensic analysts to know whether they can learn anything about the end-user directly and/or indirectly.
With the proliferation of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices, such as CCTV systems, digital cameras, and mobile phones, can capture digital video, and these videos may be transmitted over the Internet or other non-secure channels. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these are effective for copyright purposes but difficult to apply in other cases, such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication that uses the video's local information, which makes it useful for real-world applications. The proposed algorithm relies on the video's statistical local information and was applied to a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
The identification of relevant information in large text databases is a challenging task. One of the reasons is human beings' limitations in handling large volumes of data. A common solution for scavenging data from texts is the word cloud. A word cloud illustrates word usage in a document by resizing individual words proportionally to how frequently they appear. Even though word clouds are easy to understand, they are not particularly efficient, because they are static. In addition, the presented information lacks context, i.e., words are not explained and may lead to radically erroneous interpretations. To tackle these problems we developed VCloud, a tool that allows the user to interact with word clouds, thereby enabling informative and interactive data exploration. Furthermore, our tool also allows one to compare two data sets presented as word clouds. We evaluated VCloud using real data about the evolution of gastritis research through the years. The papers indexed by Pubmed related to this medical context were selected for visualization and data analysis using VCloud. A domain expert explored these visualizations and was able to extract useful information from them. This illustrates how VCloud can be a valuable tool for visual text analytics.
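The core mechanism described, sizing each word proportionally to how often it appears, is easy to sketch; the linear sizing rule below is a common choice for illustration, not necessarily the one VCloud uses.

```python
# Sketch: compute word-cloud font sizes proportional to word frequency.
import re
from collections import Counter

def word_cloud_sizes(text: str, min_size: int = 10, max_size: int = 48) -> dict[str, int]:
    """Map each word to a font size scaled linearly by its relative frequency."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    top = counts.most_common()
    if not top:
        return {}
    peak = top[0][1]  # frequency of the most common word
    return {w: round(min_size + (max_size - min_size) * c / peak) for w, c in top}

sample = "gastritis research gastritis helicobacter treatment research gastritis"
print(word_cloud_sizes(sample))
```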
Analyzing and gaining insights from a large amount of textual conversations can be quite challenging for a user, especially when the discussions become very long. During my doctoral research, I have focused on integrating Information Visualization (InfoVis) with Natural Language Processing (NLP) techniques to better support the user's task of exploring and analyzing conversations. For this purpose, I have designed a visual text analytics system that supports user exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to the comments of a single conversation. While our approach has so far been evaluated mainly through lab studies, in my ongoing and future work I plan to evaluate it via online longitudinal studies.
With an immense number of threats pouring in from nation states, hacktivists, terrorists, and cybercriminals, the need for a globally secure infrastructure becomes a major obligation. Most critical infrastructures were originally designed to work in isolation from the normal communication network, but with the advent of the "Smart Grid," which uses advanced and intelligent approaches to control critical infrastructure, these cyber-physical systems must have access to the communication system. Consequently, such critical systems have become prime targets; hence, security of critical infrastructure is currently one of the most challenging research problems. Performing an extensive security analysis involving experiments with cyber-attacks on a live industrial control system (ICS) is not possible. Therefore, researchers generally resort to test beds and complex simulations to answer questions related to SCADA systems. Since all conclusions are drawn from the test bed, it is necessary to perform validation against a physical model. This paper examines the fidelity of a virtual SCADA testbed with respect to a physical test bed and allows the study of the effects of cyber-attacks on both systems.
This study examines the effectiveness of virtual reality technology at creating an immersive user experience in which participants experience firsthand the extreme negative consequences of smartphone use while driving. Research suggests that distracted driving caused by smartphones is related to smartphone addiction and causes fatalities. Twenty-two individuals participated in the virtual reality user experience (VRUE), in which they were asked to drive a virtual car using an Oculus Rift headset, a LeapMotion hand tracking device, and a force-feedback steering wheel and pedals. While driving in the simulation, participants were asked to interact with a smartphone; after a period of time trying to manage both tasks, a vehicle appeared before them and they were involved in a head-on collision. Initial results indicated that participants felt a strong sense of presence, and a change or reinforcement of participants' perception of the dangers of smartphone use while driving was observed.
We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. An individual therefore takes full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report her data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy which are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound assures that it is impossible to use a smaller payment to buy ε units of privacy, and the upper bound is given by an achievable payment mechanism that we designed. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.
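Restated as a formula, with notation introduced here rather than taken from the paper: the value of ε units of privacy is the least payment achievable by any nonnegative payment mechanism M under which reporting at privacy level ε is a best response at a Nash equilibrium, and the derived bounds sandwich it as the number n of data subjects grows:

```latex
V(\varepsilon) \;=\; \min_{M \,\in\, \mathcal{M}_{\ge 0}}
\bigl\{\, p(M) \;:\; \text{reporting at privacy level } \varepsilon
\text{ is a best response at a Nash equilibrium of } M \,\bigr\},
\qquad
L_n(\varepsilon) \;\le\; V(\varepsilon) \;\le\; U_n(\varepsilon)
```

where, per the abstract, the lower bound $L_n$ and the upper bound $U_n$ (achieved by the designed mechanism) become asymptotically tight as $n \to \infty$.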
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
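As a small illustration of the kind of metric such instrumentation enables (not ROSS's actual data format or API), rollback efficiency per processing element can be derived from counts of processed versus rolled-back events; low efficiency flags the PEs whose parameters need tuning.

```python
# Sketch: derive per-PE rollback efficiency from (hypothetical) instrumentation counters.
# Efficiency = committed events / total processed events; low values signal heavy rollback.
pe_counters = {
    # pe_id: (events_processed, events_rolled_back)  -- placeholder numbers
    0: (1_000_000, 50_000),
    1: (980_000, 240_000),
    2: (1_020_000, 10_000),
}

for pe, (processed, rolled_back) in sorted(pe_counters.items()):
    committed = processed - rolled_back
    efficiency = committed / processed
    print(f"PE {pe}: efficiency = {efficiency:.2%} (rolled back {rolled_back} of {processed})")
```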
Wearable devices, which are small electronic devices worn on a human body, are equipped with low levels of processing and storage capacity and offer certain integrated functionalities. Recently, wearable devices have become increasingly popular and various kinds have been launched in the market; however, wearable devices require a powerful local hub, most often a smartphone, to supplement their processing and storage capacities for advanced functionalities. Sometimes it may be inconvenient to carry the local hub (smartphone); thus, many wearable devices are equipped with a Wi-Fi interface, enabling them to exchange data with the local hub through the Internet when the local hub is not nearby. However, this results in long response times and restricted functionalities. In this paper, we present a virtual local-hub solution, which utilizes nearby network equipment (e.g., Wi-Fi APs) as the local hub. Since migrating all applications serving the wearable devices individually would consume too many networking and storage resources, the proposed solution deploys function modules to multiple network nodes and enables remote function module sharing among different users and applications. To reduce the impact of the solution on network bandwidth, we propose a heuristic algorithm for function module allocation with the objective of minimizing total bandwidth consumption. We conduct a series of experiments, and the results show that the proposed solution can reduce bandwidth consumption by up to half while still serving all requests given a large number of service requests.
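The abstract does not spell out the heuristic itself, so the following is only a generic greedy sketch of the allocation problem it targets: place each requested function module on the candidate network node that adds the least bandwidth cost, preferring nodes where the module is already deployed so it can be shared across users. All names and numbers are placeholders.

```python
# Generic greedy sketch (assumed, not the paper's algorithm): allocate function modules
# to network nodes, reusing already-deployed modules to minimize added bandwidth cost.

# bandwidth_cost[node][user] = cost of serving this user from this node (placeholder numbers)
bandwidth_cost = {
    "ap1": {"alice": 1.0, "bob": 3.0},
    "ap2": {"alice": 2.5, "bob": 0.8},
}
requests = [("alice", "heart-rate-analytics"), ("bob", "heart-rate-analytics"),
            ("bob", "notification-proxy")]

deployed = set()         # (node, module) pairs already placed, shareable among users
total_cost = 0.0
for user, module in requests:
    # Prefer a node that already hosts the module; otherwise pick the cheapest node.
    node = min(bandwidth_cost, key=lambda n: (
        (n, module) not in deployed,   # reuse an existing deployment first
        bandwidth_cost[n][user],       # then lowest bandwidth cost for this user
    ))
    deployed.add((node, module))
    total_cost += bandwidth_cost[node][user]
    print(f"{user}/{module} -> {node}")

print("total bandwidth cost:", total_cost)
```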