Biblio
Image retrieval systems have been an active area of research for more than thirty years, progressively producing algorithms that improve performance, operate in different domains, take advantage of different features extracted from the images to be retrieved, and offer different desirable invariance properties. With the ever-growing visual databases of images and videos produced by a myriad of devices comes the challenge of selecting effective features and performing fast retrieval on such databases. In this paper, we incorporate Fourier descriptors (FD) along with a metric-based balanced indexing tree as a viable solution to DHS (Department of Homeland Security) needs for quick identification and retrieval of weapon images. The FDs allow a simple but effective outline feature representation of an object, while the M-tree provides a dynamic, fast, and balanced search over such features. Motivated by applications of interest to DHS, we have created a basic database of guns and rifles that can be used to identify weapons in images and videos extracted from media sources. Our simulations show excellent performance in both representation quality and retrieval speed.
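A minimal sketch of outline-based Fourier descriptors of the kind described above, assuming the object boundary is available as a closed contour of (x, y) points; the normalization choices and the Euclidean metric used for M-tree indexing are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=16):
    """Translation/scale/rotation-invariant Fourier descriptors of a closed contour."""
    # Represent the 2-D contour as a complex signal z = x + iy.
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    # Drop the DC term for translation invariance.
    coeffs = coeffs[1:]
    # Normalize by the first harmonic's magnitude for scale invariance and
    # keep only magnitudes to discard rotation / starting-point phase.
    fd = np.abs(coeffs) / (np.abs(coeffs[0]) + 1e-12)
    return fd[:n_coeffs]

def fd_distance(fd_a, fd_b):
    """Euclidean distance between descriptors, usable as the metric of an M-tree."""
    return float(np.linalg.norm(fd_a - fd_b))
```

Because `fd_distance` is a true metric on fixed-length descriptor vectors, it can be plugged directly into any metric indexing structure such as an M-tree.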
The main goal of this work is to create a model of trust that can serve as a reference for developing applications oriented toward collaborative annotation. Such a model includes design parameters inferred from online communities that operate on collaborative content. This study aims to create a static model, but it could be dynamic, or comprise more than one model, depending on the context of an application. An analysis of Genius as a peer production community was conducted to understand user behaviors. This study characterizes user interactions based on the differentiation between Lightweight Peer Production (LWPP) and Heavyweight Peer Production (HWPP). It was found that more LWPP interactions take place at the lower levels of this system. As the level in the role system increases, there are more HWPP interactions. This can be explained by the fact that LWPP interactions are straightforward, while HWPP interactions demand more agility from the user. These interactions provide more opportunities and therefore attract other users for further interactions.
App vetting is the process of approving or rejecting an app prior to deployment on a mobile device. The decision to approve or reject an app is based on the organization's security requirements and on the type and severity of the security vulnerabilities found in the app. Security vulnerabilities, including Cross-Site Scripting (XSS), information leakage, weak authentication and authorization, poor session management, and SQL injection, can be exploited to steal information or take control of a device.
Every day, university networks are bombarded with attempts to steal the sensitive data of the many disparate domains and organizations they serve. For this reason, universities form teams of information security specialists, called Security Operations Centers (SOCs), to manage the complex operations involved in monitoring and mitigating such attacks. When a suspicious event is identified, members of the SOC are tasked with understanding the nature of the event in order to respond to any damage the attack might have caused. This process is defined by administrative policies, which are often very high-level and rarely systematically specified. This impedes the implementation of generalized and automated event response solutions, leading instead to ad hoc solutions based primarily on human intuition and experience as well as immediate administrative priorities. These solutions are often fragile, highly specific, and difficult to reuse in other scenarios.
Denial of service (DoS) attacks inject malicious packets into a network. An intrusion detection system (IDS) inspects network traffic to identify such malicious packets. A software-defined network (SDN) physically separates the control plane from the data plane: the control plane is moved to a centralized controller, which makes decisions for the network from a global view. Combining an IDS with SDN makes the prevention of malicious packets more efficient, thanks to the global view available in SDN. The IDS needs to communicate with switches to access all end-to-end traffic in the network. However, high traffic on the links between switches and the IDS results in congestion, which delays the detection and prevention of malicious traffic. To address this problem, we propose a historical database (Hdb), a scheme that reduces the traffic between switches and the IDS based on the historical information of a sender. Our simulations show that, on average, the traffic mirrored to the IDS is reduced by 54.1% compared to conventional schemes.
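An illustrative sketch of the historical-database idea, not the paper's exact scheme: the controller keeps per-sender history and skips mirroring flows from senders that have repeatedly been judged benign. The class name, trust threshold, and fields are assumptions for illustration.

```python
from collections import defaultdict

class Hdb:
    """Per-sender history used to decide whether to mirror a flow to the IDS."""

    def __init__(self, trust_threshold=5):
        self.benign_count = defaultdict(int)  # flows from this sender judged benign
        self.flagged = set()                  # senders the IDS has flagged as malicious
        self.trust_threshold = trust_threshold

    def should_mirror(self, sender_ip):
        """Mirror to the IDS only if the sender is flagged or not yet trusted."""
        if sender_ip in self.flagged:
            return True
        return self.benign_count[sender_ip] < self.trust_threshold

    def record_verdict(self, sender_ip, malicious):
        """Update the sender's history with the IDS verdict for one flow."""
        if malicious:
            self.flagged.add(sender_ip)
        else:
            self.benign_count[sender_ip] += 1
```

In this simplified view, the fraction of flows for which `should_mirror` returns False is exactly the mirrored traffic saved relative to a scheme that mirrors everything.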
Cyber-Physical Systems (CPS) will form the basis of the world's critical infrastructure and thus have the potential to significantly impact human lives in the near future. In recent years, there has been an increasing demand for connectivity in CPS, which has brought the issue of cyber security to attention. Aside from traditional information systems threats, CPS face new challenges due to the heterogeneity of devices and protocols. In this paper, we investigate how feature selection may improve intrusion detection accuracy. In particular, we propose an adapted Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic to improve classification performance in the CPS perception layer. Our numerical results reveal that the GRASP metaheuristic outperforms traditional filter-based feature selection methods for detecting four attack classes in CPS.
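A minimal sketch of a GRASP-style wrapper for feature selection, shown here only to illustrate the two GRASP phases (greedy randomized construction over a restricted candidate list, then local search); the classifier, scoring function, and parameter defaults are assumptions and not the paper's adapted procedure.

```python
import random
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def score(X, y, subset):
    """Cross-validated accuracy of a simple classifier on the selected features."""
    if not subset:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, sorted(subset)], y, cv=3).mean()

def grasp_select(X, y, k=8, iters=10, alpha=0.3, seed=0):
    rng = random.Random(seed)
    n_features = X.shape[1]
    best_subset, best_score = set(), 0.0
    for _ in range(iters):
        # Phase 1: greedy randomized construction from a restricted candidate list.
        subset = set()
        while len(subset) < min(k, n_features):
            candidates = [f for f in range(n_features) if f not in subset]
            gains = sorted(((score(X, y, subset | {f}), f) for f in candidates),
                           reverse=True)
            rcl = gains[:max(1, int(alpha * len(gains)))]
            subset.add(rng.choice(rcl)[1])
        # Phase 2: local search, swapping one selected feature for one excluded
        # feature while the cross-validated score keeps improving.
        current = score(X, y, subset)
        improved = True
        while improved:
            improved = False
            for f_out in sorted(subset):
                for f_in in range(n_features):
                    if f_in in subset:
                        continue
                    trial = (subset - {f_out}) | {f_in}
                    trial_score = score(X, y, trial)
                    if trial_score > current:
                        subset, current, improved = trial, trial_score, True
                        break
                if improved:
                    break
        if current > best_score:
            best_subset, best_score = set(subset), current
    return sorted(best_subset), best_score
```

The restart-and-randomize structure is what distinguishes GRASP from a plain greedy wrapper: each iteration builds a different candidate subset, and only the best one found across all iterations is returned.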
The detection of bugs in software systems has been divided into two research areas: static code analysis and statistical modeling of historical data. Static analysis points to precise problems at specific line numbers but has the disadvantage of producing many warnings, which are often false positives. In contrast, statistical models use the history of the system to suggest which files or commits are likely to contain bugs. These coarse-grained predictions do not indicate to the developer the precise reasons for the bug prediction. We combine static analysis with statistical bug models to limit the number of warnings and to provide specific warning information at the line level. Previous research was able to process only a limited number of releases; our tool, WarningsGuru, can analyze all commits in a source code repository, and we have currently processed thousands of commits and warnings. Since we process every commit, we present developers with more precise information about when a warning was introduced, allowing us to highlight recent warnings introduced in statistically risky commits. Results from two OSS projects show that CommitGuru's statistical model flags 25% and 29% of all commits as risky. When we combine this with static analysis in WarningsGuru, the number of risky commits with warnings drops to 20% for both projects, and the number of commits with new warnings is only 3% and 6%, respectively. We can therefore drastically reduce the number of commits and warnings developers have to examine. The tool, source code, and demo are available at https://github.com/louisq/warningsguru.
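A small illustrative sketch of the combination step described above, assuming each commit record carries a statistical risk flag (e.g. from CommitGuru) and the sets of static analysis warnings for the commit and its parent; the field names and example data are hypothetical.

```python
def new_warnings(commit_warnings, parent_warnings):
    """Warnings present in this commit but absent from its parent."""
    return commit_warnings - parent_warnings

def commits_to_review(commits):
    """Keep only statistically risky commits that also introduce new warnings."""
    flagged = []
    for c in commits:
        introduced = new_warnings(c["warnings"], c["parent_warnings"])
        if c["risky"] and introduced:
            flagged.append((c["sha"], sorted(introduced)))
    return flagged

# Example: only the second commit is both risky and introduces a new warning.
example = [
    {"sha": "a1b2c3", "risky": True,
     "warnings": {"NP_NULL_DEREF"}, "parent_warnings": {"NP_NULL_DEREF"}},
    {"sha": "d4e5f6", "risky": True,
     "warnings": {"NP_NULL_DEREF", "SQL_INJECTION"}, "parent_warnings": {"NP_NULL_DEREF"}},
    {"sha": "0a1b2c", "risky": False,
     "warnings": {"SQL_INJECTION"}, "parent_warnings": set()},
]
print(commits_to_review(example))
```

Intersecting the two signals in this way is what shrinks the review set from roughly a quarter of all commits (risky only) to the few percent that are both risky and carry new warnings.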
Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used for predicting defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective: our best scenario achieves an accuracy of 73% in predicting features as defective or clean using a Naive Bayes classifier. Based on the results, we discuss directions for future work.
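A minimal sketch of feature-level defect prediction with a Naive Bayes classifier, assuming a small table of per-feature process metrics; the metric names, toy values, and cross-validation setup are illustrative, not the study's dataset or protocol.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical process metrics per product-line feature: number of changes,
# number of authors, churn (lines added + deleted), and scattering degree.
data = pd.DataFrame({
    "n_changes":  [12, 3, 45, 7, 30, 2, 19, 8],
    "n_authors":  [4, 1, 6, 2, 5, 1, 3, 2],
    "churn":      [800, 40, 2200, 150, 1300, 20, 600, 90],
    "scattering": [5, 1, 9, 2, 7, 1, 4, 2],
    "defective":  [1, 0, 1, 0, 1, 0, 1, 0],   # 1 = defective, 0 = clean
})

X = data.drop(columns="defective").values
y = data["defective"].values

clf = GaussianNB()
print("Mean CV accuracy:", cross_val_score(clf, X, y, cv=4).mean())
```

The same pipeline accommodates other classifiers (e.g. decision trees or logistic regression) by swapping the estimator, which is how several models can be compared on identical metrics.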
Flexibility and speed in the development of new industrial machines are essential factors for the success of capital goods industries. When assembling a printed circuit board (PCB), if all the components are surface-mount devices (SMDs), the whole process is automatic. However, many PCBs require components that are not SMDs, called pin-through-hole (PTH) components, which have to be inserted manually, leading to delays in the production line. This work proposes and validates a prototype work cell based on a collaborative robot and vision systems whose objective is to insert these components in a completely autonomous or semi-autonomous way. Different tests were performed to validate this work cell, demonstrating the correct implementation and the possibility of replacing the human worker in this PCB assembly task.
The Ubiquitous Healthcare System (U-Healthcare) is a well-known application of wireless sensor networking (WSN). In such a system, the sensors consume little power to operate, and because the data transferred between sensors and other stations is sensitive, a security scheme must be provided. Due to the limited lifetime of sensor nodes in WSNs, asymmetric key based security (AKS) architectures have generally been considered unsuitable for these types of networks. Several papers have been published in recent years on how to incorporate AKS in WSNs; Haque et al.'s Asymmetric key based Architecture (AKA) is one of them. However, it was later found that this system has an authentication problem and is therefore prone to man-in-the-middle (MITM) attacks; furthermore, it is not a truly asymmetric scheme. We address these issues in this paper and propose a fully asymmetric approach using PEKS-PM (proposed by Pham in [8]) to remove the impersonation attack. We also found some other vulnerabilities in the original AKA system and propose solutions, thereby making it a better and enhanced asymmetric key based architecture.
Multi-agent cyber-physical systems (CPSs) are ubiquitous in modern infrastructure, including the future smart grid, transportation networks, and public health systems. The security of these systems is critical for the normal operation of our society. In this paper, we focus on physical-layer resilient control of these systems subject to cyber attacks and malicious behaviors of physical agents. We establish a cross-layer system model for the investigation of cross-layer coupling and performance interdependencies for CPSs. In addition, we study a two-system synchronization problem in which one system is a malicious agent who intends to mislead the entire system behavior through physical-layer interactions. Feedback Nash equilibrium is used as the solution concept for distributed control in the multi-agent system environment. We corroborate our results with numerical examples, which show the performance interdependencies between two CPSs through cyber and physical interactions.
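As an illustration only (the dynamics, cost structure, and notation below are assumptions, not the paper's exact formulation), a two-system synchronization game of this kind can be posed with each agent $i$ controlling linear dynamics and minimizing its own quadratic cost over the synchronization error:

$$
\dot{x}_i = A x_i + B u_i, \qquad
J_i = \int_0^{\infty} \Big( (x_1 - x_2)^{\top} Q_i \,(x_1 - x_2) + u_i^{\top} R_i \, u_i \Big)\, dt, \quad i = 1, 2,
$$

where a malicious agent's weight $Q_i$ may be chosen to reward de-synchronization, and a feedback Nash equilibrium is a pair of state-feedback strategies $u_i^{*} = K_i(x_1, x_2)$ from which neither agent can improve its own cost by deviating unilaterally.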
The smart grid is an ever-growing complex dynamic system with multiple interleaved layers and a large number of interacting components. In this talk, we discuss how game-theoretic tools can be used to understand strategic interactions at different layers of the system and between different decision-making entities for distributed management of energy resources. We first investigate the integration of renewable energy resources into the power grid. We establish a game-theoretic framework for modeling the strategic behavior of buses that are connected to renewable energy resources, and study the Nash equilibrium solution of distributed power generation at each bus. Our framework uses a cross-layer approach, taking into account economic factors as well as system stability issues at the physical layer. In the second part of the talk, we discuss the integration of plug-in hybrid electric vehicles (PHEVs) for vehicle-to-grid (V2G) transactions on the smart grid. Electric vehicles will be capable of buying and selling energy from smart parking lots in the future. We propose a multi-resolution, multi-layer stochastic differential game framework to study the dynamic decision-making process among PHEVs. We analyze the stochastic game in a large-population regime and account for the multiple types of interactions in the grid. Through these two settings, we demonstrate that game theory is a versatile tool for addressing many fundamental and emerging issues in the smart grid.
Presented at the Eighth Annual Carnegie Mellon Conference on the Electricity Industry, Data-Driven Sustainable Energy Systems, Pittsburgh, PA, March 12-14, 2012.
Traditional intrusion detection systems (IDSs) work in isolation and can be easily compromised by unknown threats. An intrusion detection network (IDN) is a collaborative IDS network intended to overcome this weakness by allowing IDS peers to share detection knowledge and experience, and hence improve the overall accuracy of intrusion assessment. In this work, we design an IDN system, called GUIDEX, using game-theoretic modeling and trust management for peers to collaborate truthfully and actively. We first describe the system architecture and its individual components, and then establish a game-theoretic framework for the resource management component of GUIDEX. We establish the existence and uniqueness of a Nash equilibrium under which peers can communicate in a reciprocal, incentive-compatible manner. Based on the duality of the problem, we develop an iterative algorithm that converges geometrically to the equilibrium. Our numerical experiments and discrete event simulations demonstrate the convergence to the Nash equilibrium and the security features of GUIDEX against free riders, dishonest insiders, and DoS attacks.
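A rough, illustrative sketch of an iterative resource-allocation update between collaborating IDS peers; this is not GUIDEX's dual-based algorithm, only a toy reciprocity rule in which each peer splits a fixed helping capacity among the other peers in proportion to trust and to the help it currently receives, iterating until the change falls below a tolerance or the iteration budget runs out.

```python
import numpy as np

def iterate_allocation(trust, capacity=1.0, tol=1e-8, max_iter=1000):
    """trust[i, j] >= 0: peer i's trust in peer j (diagonal ignored)."""
    n = trust.shape[0]
    alloc = np.full((n, n), capacity / max(n - 1, 1))
    np.fill_diagonal(alloc, 0.0)
    for _ in range(max_iter):
        # Reciprocity-weighted update: give more of your capacity to peers
        # that you trust and that are currently allocating resources to you.
        weights = trust * alloc.T
        np.fill_diagonal(weights, 0.0)
        row_sums = weights.sum(axis=1, keepdims=True) + 1e-12
        new_alloc = capacity * weights / row_sums
        if np.max(np.abs(new_alloc - alloc)) < tol:
            return new_alloc
        alloc = new_alloc
    return alloc

# Example: three peers with asymmetric trust values.
trust = np.array([[0.0, 0.8, 0.2],
                  [0.6, 0.0, 0.4],
                  [0.9, 0.1, 0.0]])
print(iterate_allocation(trust).round(3))
```

The reciprocity term is what discourages free riders in this toy model: a peer that stops helping sees its incoming allocations shrink on the next iteration.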
Wireless sensor networks are subject to attacks such as node capture and cloning, where an attacker physically captures sensor nodes, replicates them, deploys the replicas into the network, and proceeds to take over the network. In this paper, we develop models for such an attack when there are multiple attackers in a network, and formulate multi-player games to model the noncooperative strategic behavior between the attackers and the network. We consider two cases: a static case where the attackers' node capture rates are time-invariant and the network's clone detection/revocation rate is a linear function of the state, and a dynamic case where the rates are general functions of time. We characterize Nash equilibrium solutions for both cases and derive equilibrium strategies for the players. In the static case, we study both the single-attacker and the multi-attacker games within an optimization framework, provide conditions for the existence of Nash equilibria, and characterize them in closed form. In the dynamic case, we study the underlying multi-person differential game under an open-loop information structure and provide a set of conditions to characterize the open-loop Nash equilibrium. We show the equivalence of the Nash equilibrium for the multi-person game to the saddle-point equilibrium between the network and the attackers as a team. We illustrate our results with numerical examples.
The use of a shared medium leaves wireless networks, including mobile ad hoc and sensor networks, vulnerable to jamming attacks. In this paper, we introduce a jamming defense mechanism for multiple-path routing networks based on maintaining deceptive flows, consisting of fake packets, between a source and a destination. An adversary observing a deceptive flow will expend energy on disrupting the fake packets, allowing the real data packets to arrive at the destination unharmed. We model this deceptive flow-based defense within a multi-stage stochastic game framework between the network nodes, which choose a routing path and flow rates for the real and fake data, and an adversary, which chooses which fraction of each flow to target at each hop. We develop an efficient, distributed procedure for computing the optimal routing at each hop and the optimal flow allocation at the destination. Furthermore, by studying the equilibria of the game, we quantify the benefit arising from deception, as reflected in an increase in the valid throughput. Our results are demonstrated via a simulation study.
The migration of many current critical infrastructures, such as power grids and transportation systems, into open public networks has posed many challenges for control systems. Modern control systems face uncertainties not only from the physical world but also from cyberspace. In this paper, we propose a hybrid game-theoretic approach to investigate the coupling between cyber security policy and robust control design. We study in detail the case of cascading failures in industrial control systems and provide a set of coupled optimality criteria in the linear-quadratic case. This approach can be further extended to more general cases of parallel cascading failures.