Biblio
Cyberspace is the fifth major domain of activity after land, sea, air, and space. Safeguarding cyberspace security is a major issue bearing on national security, national sovereignty, and the legitimate rights and interests of the people. With the rapid development of artificial intelligence technology and its application across many fields, cyberspace security faces new challenges: how to help network security personnel grasp the security situation at any time, how to help monitoring personnel respond quickly to alarm information, and how to make it easier for them to track and handle incidents. This paper introduces a method that uses a situational-awareness micro-application, a practical attack-and-defense robot, to quickly feed network attack information back to monitoring personnel, promptly report the attack information to the information reporting platform, and automatically block malicious IP addresses.
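As a concrete illustration of the response loop described above (feedback to monitors, reporting to the platform, automatic blocking of malicious IPs), the following minimal Python sketch shows one way such an action could be wired together. The reporting endpoint, alert schema, and use of a host firewall (iptables) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation) of the automated response loop:
# report a detected attack to a hypothetical reporting platform and block the
# offending source IP at the host firewall.
import subprocess
import requests

REPORT_URL = "https://reporting.example.org/api/attacks"  # hypothetical platform endpoint

def handle_attack_alert(alert: dict) -> None:
    """Report a detected attack and block its source IP."""
    # 1. Report the attack to the information reporting platform.
    requests.post(REPORT_URL, json=alert, timeout=5)
    # 2. Block the malicious source IP (Linux iptables; requires root privileges).
    subprocess.run(
        ["iptables", "-I", "INPUT", "-s", alert["src_ip"], "-j", "DROP"],
        check=True,
    )

handle_attack_alert({"src_ip": "203.0.113.7", "type": "brute-force", "severity": "high"})
```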
The 2018 Biometric Technology Rally was an evaluation, sponsored by the U.S. Department of Homeland Security, Science and Technology Directorate (DHS S&T), that challenged industry to provide face or face/iris systems capable of unmanned traveler identification in a high-throughput security environment. Selected systems were installed at the Maryland Test Facility (MdTF), a DHS S&T-affiliated biometrics testing laboratory, and evaluated using a population of 363 naive human subjects recruited from the general public. The performance of each system was examined based on measured throughput, capture capability, matching capability, and user satisfaction metrics. This research documents the performance of unmanned face and face/iris systems required to maintain an average total subject interaction time of less than 10 seconds. The results highlight discrepancies between the performance of biometric systems as anticipated by the system designers and the measured performance, indicating an incomplete understanding of the main determinants of system performance. Our research shows that failure-to-acquire errors, unpredicted by system designers, were the main driver of non-identification rates, rather than failure-to-match errors, which were better predicted. This outcome indicates the need for a renewed focus on reducing the failure-to-acquire rate in high-throughput, unmanned biometric systems.
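The finding that failure-to-acquire (FTA) errors, rather than matching errors, drove non-identification rates can be made concrete with a simple decomposition; the sketch below is illustrative only and is not the rally's official scoring methodology.

```python
# Illustrative decomposition (not the rally's scoring) of how failure-to-acquire
# and failure-to-match errors combine into an overall non-identification rate
# for an unmanned biometric station.
def non_identification_rate(fta: float, match_miss_rate: float) -> float:
    """A subject is not identified if no usable image is acquired, or an image
    is acquired but the matcher misses the enrolled identity."""
    return fta + (1.0 - fta) * match_miss_rate

# Even a modest acquisition failure rate dominates a well-tuned matcher:
print(non_identification_rate(fta=0.05, match_miss_rate=0.01))  # ~0.059
print(non_identification_rate(fta=0.00, match_miss_rate=0.01))  # 0.010
```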
Modern cyber-physical systems are increasingly complex and vulnerable to attacks, such as false data injection, aimed at destabilizing and confusing the systems. We develop and evaluate an attack-detection framework that learns a dynamic invariant network, i.e., the data-driven temporal causal relationships between components of a cyber-physical system. We evaluate the attack-detection performance of the proposed model against traditional anomaly detection approaches. In this paper, we introduce the Granger Causality based Kalman Filter with Adaptive Robust Thresholding (G-KART) as a framework for anomaly detection based on data-driven functional relationships between components in cyber-physical systems. In particular, we select power systems as a critical infrastructure with complex cyber-physical systems whose protection is an essential facet of national security. The proposed system can learn to detect false data injection attacks in power systems with or without knowledge of the network topology. Kalman filters are used to learn and update the dynamic state of each component in the power system and, in turn, monitor the component for malicious activity. The ego network of each node in the invariant graph is treated as an ensemble model of Kalman filters, each of which captures a subset of the node's interactions with other parts of the network. Finally, we introduce an alerting mechanism to surface alerts about compromised nodes.
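The following minimal, single-sensor Python sketch illustrates the residual-thresholding idea underlying this kind of framework: a Kalman filter tracks a measurement stream, and an adaptive robust (median/MAD) threshold on its innovations flags suspected false data injection. The full G-KART framework additionally builds the invariant graph via Granger causality and maintains an ensemble of filters per node; the constants and the random-walk state model here are illustrative assumptions.

```python
# Single-sensor sketch of Kalman-filter residuals with an adaptive robust threshold.
import numpy as np

def detect_anomalies(z, q=1e-2, r=1e-2, window=50, k=5.0, warmup=20):
    x, p = z[0], 1.0               # state estimate and its variance
    residuals, alerts = [], []
    for t, meas in enumerate(z):
        p = p + q                  # predict step (random-walk state model)
        innov = meas - x           # innovation (residual)
        gain = p / (p + r)
        x = x + gain * innov       # update step
        p = (1.0 - gain) * p
        residuals.append(innov)
        recent = np.asarray(residuals[-window:])
        med = np.median(recent)
        mad = np.median(np.abs(recent - med)) + 1e-9
        # Adaptive robust threshold: flag innovations far outside the recent spread.
        if t >= warmup and abs(innov - med) > k * 1.4826 * mad:
            alerts.append(t)
    return alerts

rng = np.random.default_rng(0)
readings = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
readings[250] += 2.0               # simulated false data injection
print(detect_anomalies(readings))  # expected to flag index 250 (and its immediate aftermath)
```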
Image retrieval systems have been an active area of research for more than thirty years, progressively producing algorithms that improve performance metrics, operate in different domains, exploit different features extracted from the images to be retrieved, and offer different desirable invariance properties. With the ever-growing visual databases of images and videos produced by a myriad of devices comes the challenge of selecting effective features and performing fast retrieval over such databases. In this paper, we incorporate Fourier descriptors (FD) along with a metric-based balanced indexing tree as a viable solution to the Department of Homeland Security's (DHS) need for quick identification and retrieval of weapon images. The FDs allow a simple but effective outline feature representation of an object, while the M-tree provides dynamic, fast, and balanced search over such features. Motivated by applications of interest to DHS, we have created a basic database of gun and rifle images that can be used to identify weapons in images and videos extracted from media sources. Our simulations show excellent performance in both representation and retrieval speed.
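A minimal sketch of the Fourier-descriptor outline feature is shown below: the closed contour of a segmented object is treated as a complex signal, and its low-order Fourier coefficients, normalized for translation, scale, rotation, and starting point, form a compact signature that a metric tree such as an M-tree can index under a Euclidean distance. The normalization choices here are one common convention, not necessarily the exact variant used in the paper.

```python
# Fourier descriptors of a closed object contour (illustrative normalization).
import numpy as np

def fourier_descriptor(contour_xy: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """contour_xy: (N, 2) array of boundary points ordered around the shape."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # drop DC term -> translation invariance
    coeffs = coeffs / (np.abs(coeffs[1]) + 1e-12)  # scale invariance
    # Magnitudes discard phase -> rotation and start-point invariance.
    return np.abs(coeffs[1:n_coeffs + 1])

# Toy usage: two circles of different size and position yield nearly identical descriptors.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
small = np.c_[np.cos(t), np.sin(t)]
large = 3.0 * small + 10.0
print(np.allclose(fourier_descriptor(small), fourier_descriptor(large), atol=1e-6))  # True
```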
In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators' situational awareness and improve their ability to make efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model, and a reasoning component for the detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.
Guidelines, directives, and policy statements are usually presented in ``linear'' text form - word after word, page after page. However necessary, this practice impedes full understanding, obscures feedback dynamics, and hides mutual dependencies, cascading effects, and the like, even when augmented with tables and diagrams. The net result is often a checklist response as an end in itself. All this creates barriers to the intended realization of guidelines and undermines their potential effectiveness. We present a solution strategy that uses text as ``data'', transforming the text into a structured model and generating network views of the text(s) that we can then use for vulnerability mapping, risk assessment, and control-point analysis. We apply this approach to two NIST reports on the cybersecurity of the smart grid, more than 600 pages of text. Here we provide a synopsis of the approach, methods, and tools. (Elsewhere we consider (a) the system-wide level, (b) the aviation e-landscape, (c) electric vehicles, and (d) SCADA for the smart grid.)
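As an illustration of the text-as-``data'' idea, the sketch below treats each numbered section of a guideline as a node and each cross-reference as a directed edge, so that simple network measures highlight candidate control points. The toy text, parsing rule, and choice of measure are illustrative assumptions, not the authors' toolchain.

```python
# Build a cross-reference network from guideline text (illustrative example).
import re
import networkx as nx

sections = {
    "4.1": "Access control requirements ... see Section 4.3 and Section 5.2.",
    "4.3": "Key management ... depends on Section 5.2.",
    "5.2": "Cryptographic module requirements ...",
}

g = nx.DiGraph()
for sec_id, text in sections.items():
    g.add_node(sec_id)
    for ref in re.findall(r"Section (\d+\.\d+)", text):
        g.add_edge(sec_id, ref)

# Sections that many other requirements depend on are candidate control points.
print(sorted(g.in_degree(), key=lambda kv: -kv[1]))  # [('5.2', 2), ('4.3', 1), ('4.1', 0)]
```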
In this paper, we study security and system congestion in a risk-based checkpoint screening system with two kinds of inspection queues, referred to as Selectee Lanes and Normal Lanes. Based on its assessed threat value, each arrival at the security checkpoint is classified as either a selectee or a non-selectee. The Selectee Lanes, with enhanced scrutiny, are used to check selectees, while the Normal Lanes are used to check non-selectees. The goal of the proposed modelling framework is to minimize system congestion under constraints on total security and a limited budget. The congestion of the checkpoint screening system is determined through a steady-state analysis of multi-server queueing models. By solving an optimization model, we determine the optimal threshold for differentiating arrivals and the optimal number of security devices for each type of inspection queue. The analysis conducted in this study provides managerial insights into the operation and system performance of such risk-based checkpoint screening systems.
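The steady-state congestion of each lane type can be illustrated with a standard M/M/c approximation, as in the sketch below, where the expected waiting time follows from the Erlang-C formula. The arrival split, service rates, and server counts are illustrative values, not the paper's calibrated parameters.

```python
# Expected waiting time in an M/M/c queue via the Erlang-C formula (illustrative values).
from math import factorial

def mmc_expected_wait(lam: float, mu: float, c: int) -> float:
    """Mean queueing delay Wq for an M/M/c queue (requires lam < c * mu)."""
    a = lam / mu                                   # offered load in Erlangs
    rho = a / c
    tail = (a ** c / factorial(c)) / (1.0 - rho)
    erlang_c = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    return erlang_c / (c * mu - lam)

total_rate = 10.0          # passengers per minute arriving at the checkpoint
p_selectee = 0.08          # fraction routed to Selectee Lanes by the threat threshold
wq_selectee = mmc_expected_wait(total_rate * p_selectee, mu=0.5, c=3)   # slower, enhanced scrutiny
wq_normal = mmc_expected_wait(total_rate * (1 - p_selectee), mu=2.0, c=6)
print(f"Selectee wait ~{wq_selectee:.2f} min, Normal wait ~{wq_normal:.2f} min")
```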
In order to enhance supply chain security at airports, the German Federal Ministry of Education and Research has initiated the project ESECLOG (enhanced security in the air cargo chain), which aims to improve threat detection accuracy using one-sided access methods. In this paper, we present a new X-ray backscatter technology for non-intrusive imaging of suspicious objects (mainly low-Z explosives) in luggage and parcels with only single-sided access. A key element of this technology is an X-ray backscatter camera embedded with a special twisted-slit collimator. The developed technology resolves the problem of imaging the complex interior of an object by fixing the source and object positions and changing only the scanning direction of the X-ray backscatter camera. Experiments were carried out on luggage and parcels packed with mock-up dangerous materials, including liquid and solid explosive simulants. In addition, the quality of the X-ray backscatter image was enhanced by employing high-resolution digital detector arrays. Experimental results are discussed, and the efficiency of the present technique in detecting suspicious objects in luggage and parcels is demonstrated. Finally, important applications of the proposed backscatter imaging technology to aviation security are presented.
The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.
The research question of this study is: how can Integration Readiness Level (IRL) metrics be understood and realized in the domain of border control information systems? The study addresses the IRL metrics and their definitions, criteria, references, and questionnaires for the validation of border control information systems, in the case of the shared maritime situational awareness system. The study aims to improve approaches to acceptance, operational validation, risk assessment, and the development of sharing mechanisms and the integration of information systems, as well as border control information interactions and collaboration concepts, in the Finnish national and European border control domains.
This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined in a tabletop exercise to explore the effectiveness of using OSINT together with a context-aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment, trends, and monitored keywords in streaming social media, coupled with the ability to share that information with edge operators, can increase their ability to respond effectively to contingency operations as they unfold.
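A minimal, illustrative sketch of the keyword-and-sentiment monitoring capability described above is given below; the keyword list, sentiment lexicon, and alerting rule are simple stand-ins for the prototype's actual OSINT pipeline.

```python
# Toy keyword and lexicon-based sentiment monitor over a stream of short messages.
KEYWORDS = {"benghazi", "consulate", "attack", "protest"}
LEXICON = {"attack": -2, "fire": -2, "safe": +2, "help": -1, "calm": +1}

def score(message: str):
    words = message.lower().split()
    relevant = any(w in KEYWORDS for w in words)
    sentiment = sum(LEXICON.get(w, 0) for w in words)
    return relevant, sentiment

stream = [
    "reports of an attack near the consulate",
    "streets are calm tonight",
]
for msg in stream:
    relevant, sentiment = score(msg)
    if relevant and sentiment < 0:
        print("ALERT for edge operators:", msg)
```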
In recent years, Attribute Based Access Control (ABAC) has evolved as the preferred logical access control methodology in the Department of Defense and Intelligence Community, as well as many other agencies across the federal government. Gartner recently predicted that “by 2020, 70% of enterprises will use attribute-based access control (ABAC) as the dominant mechanism to protect critical assets, up from less than 5% today.” A definition of and introduction to ABAC can be found in NIST Special Publication 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations, and Intelligence Community Policy Guidance (ICPG) 500.2, Attribute-Based Authorization and Access Management. Within ABAC, attributes are used to make critical access control decisions, yet standards for attribute assurance have only just started to be researched and documented. This presentation outlines factors influencing attributes that an authoritative body must address when standardizing attribute assurance and proposes some notional implementation suggestions for consideration.

Attribute assurance brings a level of confidence to attributes that is similar to levels of assurance for authentication (e.g., the guidelines specified in NIST SP 800-63 and OMB M-04-04). There are three principal areas of interest when considering factors related to attribute assurance. Accuracy establishes the policy and technical underpinnings for semantically and syntactically correct descriptions of Subjects, Objects, or Environmental conditions. Interoperability considers the different standards and protocols used for secure sharing of attributes between systems, in order to avoid compromising the integrity and confidentiality of the attributes or exposing vulnerabilities in provider or relying systems or entities. Availability ensures that the update and retrieval of attributes satisfy the application to which the ABAC system is applied. In addition, the security and backup capability of attribute repositories need to be considered.

Similar to a Level of Assurance (LOA), a Level of Attribute Assurance (LOAA) assures a relying party that the attribute value received from an Attribute Provider (AP) is accurately associated with the subject, resource, or environmental condition to which it applies. An Attribute Provider (AP) is any person or system that provides subject, object (or resource), or environmental attributes to relying parties, regardless of transmission method. The AP may be the original, authoritative source (e.g., an Applicant). The AP may also receive information from an authoritative source for repackaging or store-and-forward (e.g., an employee database) to relying parties, or it may derive the attributes from formulas (e.g., a credit score). Regardless of the source of the AP's attributes, the same standards should apply when determining the LOAA.

As ABAC is implemented throughout government, attribute assurance will be a critical, limiting factor in its acceptance. With this presentation, we hope to encourage dialog between attribute relying parties, attribute providers, and the federal agencies that will be defining standards for ABAC in the immediate future.
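A minimal sketch of an attribute-based access decision of the kind discussed here is shown below: the policy is evaluated over subject, object, and environment attributes rather than the subject's identity, and a minimum Level of Attribute Assurance (LOAA) is treated as one more attribute constraint. The attribute names, the LOAA scale, and the rule itself are illustrative assumptions, not a policy prescribed by NIST SP 800-162 or ICPG 500.2.

```python
# Illustrative ABAC decision function over subject, object, and environment attributes.
from datetime import time

def permit(subject: dict, obj: dict, env: dict) -> bool:
    return (
        subject.get("clearance") in ("SECRET", "TOP_SECRET")
        and subject.get("clearance_loaa", 0) >= 3          # minimum attribute assurance (hypothetical scale)
        and obj.get("classification") == "SECRET"
        and subject.get("org") == obj.get("owning_org")
        and time(6, 0) <= env.get("local_time") <= time(20, 0)   # environmental condition
    )

print(permit(
    {"clearance": "SECRET", "clearance_loaa": 3, "org": "DHS"},
    {"classification": "SECRET", "owning_org": "DHS"},
    {"local_time": time(14, 30)},
))  # True
```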