Bibliography
The United States has US CYBERCOM to protect the US military infrastructure and DHS to protect the nation's critical cyber infrastructure. These organizations deal with wide-ranging issues at the national level, leaving local and state governments largely to fend for themselves on the cyber frontier. This paper focuses on how to determine the threat to a community and what indications and warnings can lead us to suspect an attack is underway. To help answer these questions, we took the concepts of honeypots and honeynets and extended them to a multi-organization concept within a geographic boundary, forming a Honey Community. The initial phase of the research done in support of this paper was to create a fictitious community with various components to entice would-be attackers and to determine whether the use of multiple sectors in a community would aid in detecting an attack.
Radio-frequency identification (RFID) tags are becoming part of our everyday life, with a wide range of applications such as product labeling and supply chain management. These smart and tiny devices have extremely constrained resources in terms of area, computational ability, memory, and power. At the same time, security and privacy remain important problems, so the large-scale deployment of low-resource devices has created a growing need to provide security and privacy for them. Resource-efficient cryptographic primitives are essential for realizing both security and efficiency in constrained environments and embedded systems such as RFID tags and sensor nodes. Among those primitives, lightweight block ciphers play a significant role as building blocks for security systems. In 2014, Manoj Kumar et al. proposed a new lightweight block cipher named FeW, which is suitable for extremely constrained environments and embedded systems. In this paper, we simulate and synthesize the FeW block cipher and present implementation results of the FeW algorithm on an FPGA. The design targets area and cost efficiency.
Networked systems have adopted Radio Frequency Identification (RFID) technology to automate their business processes. Networked RFID Systems (NRS) have unique characteristics that raise new privacy and security concerns for organizations and their NRS deployments. Businesses continue to realize new needs with NRS; one of the most recent, in large-scale distributed systems such as the Internet of Things (IoT) and supply chains, is to ensure visibility and traceability of an object throughout the chain. However, this requires assurance of security and privacy to support lawful business operation. In this paper, we propose a secure tracker protocol that ensures not only visibility and traceability of the object but also the genuineness of the object and its travel path on-site. The proposed protocol uses a Physically Unclonable Function (PUF), the Diffie-Hellman algorithm, and simple cryptographic primitives to protect the privacy of the partners, prevent the injection of fake objects, and provide non-repudiation and unclonability. The tag only performs simple mathematical computations (such as combination, PUF evaluation, and division), which makes the proposed protocol suitable for passive tags. To verify our security claims, we experimented on a Security Protocol Description Language (SPDL) model of the proposed protocol using the automated claim verification tool Scyther. Our experiments not only verified our claims but also helped us eliminate possible attacks identified by Scyther.
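As a point of reference for the key-agreement building block named above, the following is a minimal sketch of a classic Diffie-Hellman exchange in Python. The parameters p and g are deliberately toy-sized and illustrative only; the paper's PUF step and tag-side arithmetic are not reproduced here.

    import secrets

    p = 0xFFFFFFFB  # toy prime modulus (illustrative only, far too small for real use)
    g = 5           # toy generator

    def dh_keypair():
        priv = secrets.randbelow(p - 2) + 1
        pub = pow(g, priv, p)
        return priv, pub

    # Two parties (e.g., reader and back-end server) derive the same shared secret.
    a_priv, a_pub = dh_keypair()
    b_priv, b_pub = dh_keypair()
    assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)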
At the RELENG 2014 Q&A, the question was asked, "What is your greatest concern?" and the response was "someone subverting our deployment pipeline". That is the motivation for this paper. We explore what it means to subvert a pipeline and provide several different scenarios of subversion. We then focus on the issue of securing a pipeline. As a result, we provide an engineering process that is based on having trusted components mediate access to sensitive portions of the pipeline from other components, which can remain untrusted. Applying our process to a pipeline we constructed involving Chef, Jenkins, Docker, GitHub, and AWS, we find that some aspects of our process result in changes to the pipeline that are easy to make, whereas others are more difficult. Consequently, we have developed a design that hardens the pipeline, although it does not yet completely secure it.
This paper introduces a research agenda focusing on cybersecurity in the context of product lifecycle management. The paper discusses research directions on critical protection techniques, including protection against insider threats, access control systems, secure supply chains and remote 3D printing, compliance techniques, and secure collaboration techniques. The paper then presents an overview of DBSAFE, a system for protecting data from insider threats.
Cloud computing is a service-based computing resources sourcing model that is changing the way in which companies deploy and operate information and communication technologies (ICT). This model introduces several advantages over traditional environments, along with typical outsourcing benefits, and reshapes the ICT services supply chain by creating a more dynamic ICT environment and a broader variety of service offerings. This leads to a higher risk of disruption and brings additional challenges for organisational resilience, defined herein as the ability of organisations to survive and also to thrive when exposed to disruptive incidents. This paper draws on supply chain theory and supply chain resilience concepts in order to identify a set of coordination mechanisms that positively impact ICT operational resilience processes within cloud supply chains, and packages them into a conceptual model.
In recent years, the vulnerability of the agricultural products supply chain has been exposed by a steady stream of security incidents of varying scope and severity, from natural disasters to failures at individual nodes of the chain. As a key area of Hunan Province owing to its abundant agricultural products, the Eastern Area's agricultural products supply chain security is closely tied to the safety and stability of economic development in the entire region. To make the empirical analysis of risk management more objective, scientific, and practical, this work applies the AHP-FCS method to move from qualitative to quantitative analysis of agricultural product supply chain risk management, identifying and evaluating the probability and severity of each possible risk.
QR codes have gained wide popularity in mobile marketing and advertising campaigns. However, hidden security threats to the information systems involved might endanger QR codes' success, and this issue has not been adequately addressed. In this paper we examine the life cycle of a redesigned QR code ecosystem to identify possible security risks. On top of this examination, we further propose changes to the standard to enhance security through a digital signature mechanism.
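To make the signature mechanism concrete, the following is a hedged sketch of signing and verifying a QR payload with Ed25519 using the cryptography library; the payload URL is made up for illustration, and how the signature bytes are actually encoded into the code is a design choice the standard change would have to specify.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    payload = b"https://example.com/promo?id=1234"   # data to embed in the QR code
    signature = private_key.sign(payload)

    # A scanner holding the issuer's public key verifies before following the link;
    # verify() raises InvalidSignature if the payload was tampered with.
    private_key.public_key().verify(signature, payload)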
With the increasing popularity of wearable devices, information becomes much more easily available. However, personal information sharing still poses great challenges because of privacy issues. We propose the idea of a Visual Human Signature (VHS), which can represent each person uniquely even when captured in different views and poses by wearable cameras. We evaluate the performance of multiple effective modalities for recognizing an identity, including facial appearance, visual patches, facial attributes, and clothing attributes. We propose to emphasize significant dimensions and use weighted voting fusion to incorporate the modalities and improve VHS recognition. By jointly considering multiple modalities, the VHS recognition rate can reach 51% on frontal images and 48% in the more challenging environment, and our approach surpasses the average-fusion baseline by 25% and 16%, respectively. We also introduce the Multiview Celebrity Identity Dataset (MCID), a new dataset containing hundreds of identities with different views and clothing for comprehensive evaluation.
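A minimal sketch of weighted voting fusion as described above, with assumed modality names and weights; each modality scores every candidate identity, scores are normalized, and higher-weighted modalities contribute more to the fused decision.

    import numpy as np

    def weighted_voting_fusion(scores_by_modality, weights):
        """scores_by_modality: dict modality -> per-identity score array."""
        fused = None
        for name, scores in scores_by_modality.items():
            s = np.asarray(scores, dtype=float)
            s = (s - s.min()) / (s.max() - s.min() + 1e-9)   # normalize to [0, 1]
            contribution = weights[name] * s
            fused = contribution if fused is None else fused + contribution
        return int(np.argmax(fused))                          # predicted identity index

    scores = {"face": [0.2, 0.9, 0.4], "clothing": [0.6, 0.5, 0.7]}
    print(weighted_voting_fusion(scores, {"face": 0.7, "clothing": 0.3}))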
Automation systems are gaining popularity around the world. The use of these powerful technologies for home security has been proposed, and some systems have been developed. Other implementations see the user taking a central role in providing and receiving updates to the system. We propose a system that uses an Android-based smartphone as the user control point. Our Android application allows for dual-factor (facial and secret PIN) based authentication in order to protect the privacy of the user. The system successfully implements facial recognition on the limited resources of a smartphone by making use of the Eigenfaces algorithm. The system we created was designed for home automation but makes use of technologies that allow it to be applied within any environment. This opens the possibility for more research into dual-factor authentication, and the architecture of our system provides a blueprint for the implementation of home-based automation systems. With minimal modifications, this system can be applied in an industrial setting.
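A hedged sketch of the Eigenfaces idea referenced above (not the authors' Android implementation): PCA over a gallery of flattened grayscale faces, projection into the eigenface subspace, and nearest-neighbour matching of a probe face.

    import numpy as np

    def train_eigenfaces(gallery, n_components=20):
        """gallery: (n_images, n_pixels) array of flattened grayscale faces."""
        mean = gallery.mean(axis=0)
        centered = gallery - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        eigenfaces = vt[:n_components]                 # principal components
        projections = centered @ eigenfaces.T          # gallery coordinates
        return mean, eigenfaces, projections

    def match(probe, mean, eigenfaces, projections):
        coords = (probe - mean) @ eigenfaces.T
        dists = np.linalg.norm(projections - coords, axis=1)
        return int(np.argmin(dists))                   # index of the closest gallery face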
Automated human facial image de-identification is a much-needed technology for privacy-preserving social media and intelligent surveillance applications. Rather than the usual face-blurring techniques, in this work we propose to achieve facial anonymity by slightly modifying existing facial images into "averaged faces" so that the corresponding identities are difficult to uncover. This approach preserves the aesthetics of the facial images while achieving the goal of privacy protection. In particular, we explore deep learning-based facial identity-preserving (FIP) features. Unlike conventional face descriptors, FIP features can significantly reduce intra-identity variance while maintaining inter-identity distinctions. By suppressing and tinkering with FIP features, we achieve k-anonymity facial image de-identification while preserving the desired utilities. Using a face database, we successfully demonstrate that the resulting "averaged faces" still preserve the aesthetics of the original images while defying facial image identity recognition.
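A minimal sketch of the k-anonymity averaging step, assuming FIP feature vectors have already been extracted: a face representation is replaced by the mean of the k most similar identities, so it cannot be linked to any single one of them. The feature-extraction and image-reconstruction stages of the actual pipeline are not shown.

    import numpy as np

    def k_anonymous_feature(query_feat, gallery_feats, k=5):
        """Replace a feature vector with the mean of its k nearest identities."""
        dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
        nearest = np.argsort(dists)[:k]                # k most similar identities
        return gallery_feats[nearest].mean(axis=0)     # "averaged" representation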
Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and Block Matching 3-D (BM3D) as 2D denoising filters, and the Wavelet Multiframe algorithm considering adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet-based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need for extra acquisitions of the same frame.
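For reference, a sketch of the two enhancement-quality ratios mentioned above under common textbook definitions; the region masks are assumed to be given, and the paper's exact formulations may differ.

    import numpy as np

    def snr(image, signal_mask, noise_mask):
        """Mean signal over noise standard deviation (one common SNR definition)."""
        return image[signal_mask].mean() / (image[noise_mask].std() + 1e-9)

    def cnr(image, signal_mask, background_mask):
        """Contrast-to-noise ratio between a signal region and a background region."""
        sig, bg = image[signal_mask], image[background_mask]
        return abs(sig.mean() - bg.mean()) / np.sqrt(sig.var() + bg.var() + 1e-9)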
This paper describes Smartpig, an algorithm for the iterative mosaicking of images of a planar surface using a unique parameterization which decomposes inter-image projective warps into camera intrinsics, fronto-parallel projections, and inter-image similarities. The constraints resulting from the inter-image alignments within an image set are stored in an undirected graph structure allowing efficient optimization of image projections on the plane. Camera pose is also directly recoverable from the graph, making Smartpig a feasible solution to the problem of simultaneous localization and mapping (SLAM). Smartpig is demonstrated on a set of 144 high resolution aerial images and evaluated with a number of metrics against ground control.
A number of blind Image Quality Evaluation Metrics (IQEMs) for Unmanned Aerial Vehicle (UAV) photography applications are presented. Nowadays, the visible-light camera is widely used for UAV photography because of its vivid imaging effect; however, outdoor lighting conditions can severely degrade its imaging output. In this paper, to address this problem, we design and reuse a series of blind IQEMs to analyze the imaging quality of UAV applications. Human Visual System (HVS) based IQEMs, including the image brightness level, the image contrast level, the image noise level, the image edge blur level, the image texture intensity level, the image jitter level, and the image flicker level, are all considered in our application. Once these IQEMs are calculated, they can provide a computational reference for subsequent image processing, such as image understanding and recognition. Preliminary experiments on image enhancement confirm the correctness and validity of the proposed technique.
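Two of the simplest HVS-inspired metrics listed above, sketched with common textbook definitions (the paper's exact formulations may differ): brightness as the mean gray level and contrast as the RMS contrast of the normalized image.

    import numpy as np

    def brightness_level(gray):
        """Mean gray level of an 8-bit grayscale image."""
        return float(gray.mean())

    def rms_contrast(gray):
        """Standard deviation of intensities normalized to [0, 1] (RMS contrast)."""
        g = gray.astype(float) / 255.0
        return float(g.std())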
Performance characterization of stereo methods is mandatory to decide which algorithm is useful for which application. Prevalent benchmarks mainly use the root mean squared error (RMS) with respect to ground truth disparity maps to quantify algorithm performance. We show that the RMS is of limited expressiveness for algorithm selection and introduce the HCI Stereo Metrics. These metrics assess stereo results by harnessing three semantic cues: depth discontinuities, planar surfaces, and fine geometric structures. For each cue, we extract the relevant set of pixels from existing ground truth. We then apply our evaluation functions to quantify characteristics such as edge fattening and surface smoothness. We demonstrate that our approach supports practitioners in selecting the most suitable algorithm for their application. Using the new Middlebury dataset, we show that rankings based on our metrics reveal specific algorithm strengths and weaknesses which are not quantified by existing metrics. We finally show how stacked bar charts and radar charts visually support multidimensional performance evaluation. An interactive stereo benchmark based on the proposed metrics and visualizations is available at: http://hci.iwr.uni-heidelberg.de/stereometrics.
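A sketch of the baseline measure the paper argues is insufficient, extended with the idea of restricting it to a semantic pixel mask (e.g., depth discontinuities); the masks are assumed to come from the ground truth, as described above.

    import numpy as np

    def rms_error(disparity, ground_truth, mask=None):
        """RMS disparity error over valid (and optionally masked) pixels."""
        valid = np.isfinite(ground_truth)
        if mask is not None:
            valid &= mask                       # e.g., pixels near depth discontinuities
        diff = disparity[valid] - ground_truth[valid]
        return float(np.sqrt(np.mean(diff ** 2)))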
In practical reflector antenna structures, components of the back-up structure (BUS) are selected from a standard steel library of commonly manufactured sections. In this case, the design of the antenna structure is a discrete optimization problem. In most cases, discrete design is solved by heuristic algorithms, which become computationally expensive as the number of design variables increases. In this paper, a continuous method is used to transform the discrete optimization problem into a continuous one, and a gradient-based technique is employed to solve it. The proposed method achieves good antenna surface accuracy with all components selected from a standard cross-section list, as demonstrated on a 9 m diameter antenna optimization problem.
Data cleaning is frequently an iterative process tailored to the requirements of a specific analysis task. The design and implementation of iterative data cleaning tools presents novel challenges, both technical and organizational, to the community. In this paper, we present results from a user survey (N = 29) of data analysts and infrastructure engineers from industry and academia. We highlight three important themes: (1) the iterative nature of data cleaning, (2) the lack of rigor in evaluating the correctness of data cleaning, and (3) the disconnect between the analysts who query the data and the infrastructure engineers who design the cleaning pipelines. We conclude by presenting a number of recommendations for future work in which we envision an interactive data cleaning system that accounts for the observed challenges.
Poor data quality has become a persistent challenge for organizations as data continues to grow in complexity and size. Existing data cleaning solutions focus on identifying repairs to the data that minimize either a cost function or the number of updates. These techniques, however, fail to consider underlying data privacy requirements that exist in many real data sets containing sensitive and personal information. In this demonstration, we present PARC, a Privacy-AwaRe data Cleaning system that corrects data inconsistencies with respect to a set of functional dependencies (FDs) and limits the disclosure of sensitive values during the cleaning process. The system core contains modules that evaluate three key metrics during the repair search and solves a multi-objective optimization problem to identify repairs that balance the privacy vs. utility tradeoff. This demonstration will enable users to understand: (1) the characteristics of a privacy-preserving data repair; (2) how to customize data cleaning and data privacy requirements using two real datasets; and (3) the distinctions among the repair recommendations via visualization summaries.
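An illustrative sketch of the privacy vs. utility trade-off only; the metric names, weights, and scalarized scoring below are assumptions for demonstration, not PARC's actual modules, which evaluate three metrics during the repair search.

    def best_repair(candidates, weight_privacy=0.5):
        """candidates: dicts with 'repair_cost' and 'disclosure' scores in [0, 1]."""
        def score(c):
            # Weighted combination of repair cost (utility loss) and sensitive-value disclosure.
            return (1 - weight_privacy) * c["repair_cost"] + weight_privacy * c["disclosure"]
        return min(candidates, key=score)

    repairs = [{"id": 1, "repair_cost": 0.2, "disclosure": 0.8},
               {"id": 2, "repair_cost": 0.5, "disclosure": 0.1}]
    print(best_repair(repairs)["id"])   # -> 2: slightly costlier repair, far less disclosure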
We present Falcon, an interactive, deterministic, and declarative data cleaning system, which uses SQL update queries as the language to repair data. Falcon does not rely on the existence of a set of pre-defined data quality rules. On the contrary, it encourages users to explore the data, identify possible problems, and make updates to fix them. Bootstrapped by one user update, Falcon guesses a set of possible SQL update queries that can be used to repair the data. The main technical challenge addressed in this paper consists in finding a set of SQL update queries that is minimal in size and at the same time fixes the largest number of errors in the data. We formalize this problem as a search in a lattice-shaped space. To guarantee that the chosen updates are semantically correct, Falcon navigates the lattice by interacting with users to gradually validate the set of SQL update queries. Besides traditional one-hop traversal algorithms (e.g., BFS or DFS), we describe novel multi-hop search algorithms such that Falcon can leap across the lattice and conduct the search efficiently. Our novel search strategy is coupled with a number of optimization techniques to further prune the search space and efficiently maintain the lattice. We have conducted extensive experiments using both real-world and synthetic datasets to show that Falcon can effectively communicate with users in data repairing.
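For orientation, a generic sketch of the one-hop BFS traversal mentioned above over an abstract lattice of candidate nodes; Falcon's actual nodes are SQL update queries and its multi-hop strategy is not reproduced here.

    from collections import deque

    def bfs_lattice(start, children, is_good):
        """Traverse the lattice one hop at a time until a satisfactory node is found."""
        queue, seen = deque([start]), {start}
        while queue:
            node = queue.popleft()
            if is_good(node):
                return node
            for child in children(node):     # one-hop neighbours in the lattice
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
        return None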
We analyze the workload from a multi-year deployment of a database-as-a-service platform targeting scientists and data scientists with minimal database experience. Our hypothesis was that relatively minor changes to the way databases are delivered can increase their use in ad hoc analysis environments. The web-based SQLShare system emphasizes easy dataset-at-a-time ingest, relaxed schemas and schema inference, easy view creation and sharing, and full SQL support. We find that these features have helped attract workloads typically associated with scripts and files rather than relational databases: complex analytics, routine processing pipelines, data publishing, and collaborative analysis. Quantitatively, these workloads are characterized by shorter dataset "lifetimes", higher query complexity, and higher data complexity. We report on usage scenarios that suggest SQL is being used in place of scripts for one-off data analysis and ad hoc data sharing. The workload suggests that a new class of relational systems emphasizing short-term, ad hoc analytics over engineered schemas may improve uptake of database technology in data science contexts. Our contributions include a system design for delivering databases into these contexts, a description of a public research query workload dataset released to advance research in analytic data systems, and an initial analysis of the workload that provides evidence of new use cases under-supported in existing systems.
Volume anomalies such as distributed denial-of-service (DDoS) attacks have been around for ages, but with advances in technology they have become stronger, shorter, and the weapon of choice for attackers. Digital forensic analysis of intrusions using alerts generated by existing intrusion detection systems (IDS) faces major challenges, especially for IDS deployed in large networks. In this paper, we develop the concept of automatically sifting through a huge volume of alerts to distinguish the different stages of a DDoS attack. The proposed framework is purpose-built to analyze multiple logs from the network for proactive forecasting and timely detection of DDoS attacks, through a combined approach of the Shannon-entropy concept and a clustering algorithm over relevant feature variables. Experimental studies on a cyber-range simulation dataset from the project's industrial partners show that the technique is able to distinguish precursor alerts for DDoS attacks, as well as the attack itself, with a very low false positive rate (FPR) of 22.5%. Application of this technique greatly assists security experts in network analysis to combat DDoS attacks.
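A sketch of the entropy component only, with an illustrative window and feature choice: Shannon entropy of a feature distribution such as the source IPs observed in a window of alerts, where a sharp change across windows can serve as a precursor signal before clustering.

    import math
    from collections import Counter

    def shannon_entropy(values):
        """Shannon entropy (bits) of the empirical distribution of a feature."""
        counts = Counter(values)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    window = ["10.0.0.1", "10.0.0.2", "10.0.0.2", "10.0.0.3"]   # source IPs in one window
    print(round(shannon_entropy(window), 3))                     # -> 1.5 bits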
Cybersecurity is a problem of growing relevance that impacts all facets of society. As a result, many researchers have become interested in studying cybercriminals and online hacker communities in order to develop more effective cyber defenses. In particular, analysis of hacker community contents may reveal existing and emerging threats that pose great risk to individuals, businesses, and government. Thus, we are interested in developing an automated methodology for identifying tangible and verifiable evidence of potential threats within hacker forums, IRC channels, and carding shops. To identify threats, we couple machine learning methodology with information retrieval techniques. Our approach allows us to distill potential threats from the entirety of collected hacker contents. We present several examples of identified threats found through our analysis techniques. Results suggest that hacker communities can be analyzed to aid in cyber threat detection, thus providing promising direction for future work.
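A hedged sketch of the general machine learning plus information retrieval approach described above, using toy posts and labels rather than the authors' data or model: TF-IDF features from forum text fed to a simple classifier that flags potentially threat-related content.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = ["selling fresh dumps and fullz", "how do I install this driver",
             "new exploit kit for sale", "looking for a gaming laptop"]
    labels = [1, 0, 1, 0]                      # 1 = potential threat, 0 = benign (toy labels)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(posts, labels)
    print(model.predict(["zero day exploit for sale"]))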
In this paper, the principle of the kernel extreme learning machine (ELM) is analyzed. Based on that, we introduce a multi-scale wavelet kernel extreme learning machine classifier and apply it to electroencephalographic (EEG) signal feature classification. Experiments show that our classifier achieves excellent performance.
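A sketch under stated assumptions: a single-scale Morlet-style wavelet kernel and the standard kernel-ELM closed-form solution beta = (I/C + K)^(-1) T. The paper's multi-scale construction and EEG feature pipeline are not reproduced.

    import numpy as np

    def wavelet_kernel(X, Y, a=1.0):
        """Morlet-style wavelet kernel between rows of X (n, d) and Y (m, d)."""
        diff = X[:, None, :] - Y[None, :, :]
        return np.prod(np.cos(1.75 * diff / a) * np.exp(-(diff ** 2) / (2 * a ** 2)), axis=2)

    def kernel_elm_fit(X, T, C=1.0, a=1.0):
        """Solve for output weights: beta = (I/C + K)^(-1) T."""
        K = wavelet_kernel(X, X, a)
        return np.linalg.solve(np.eye(len(X)) / C + K, T)

    def kernel_elm_predict(X_train, beta, X_test, a=1.0):
        return wavelet_kernel(X_test, X_train, a) @ beta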
The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in complexity and number. Overcoming the cybersecurity challenges is even more complicated due to the lack of training and of widely available cybersecurity environments to experiment with and evaluate new cybersecurity methods. The goal of our research is to address these challenges by exploiting cloud services. In this paper, we present the design, analysis, and evaluation of a cloud service that we refer to as Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. In CLaaS, we exploit cloud computing systems and virtualization technologies to provide virtual cybersecurity experiments and hands-on experience of how vulnerabilities are exploited to launch cyberattacks, how they can be removed, and how cyber resources and services can be hardened or better protected. We also present our experimental results and evaluation of CLaaS virtual cybersecurity experiments that have been used by graduate students taking our cybersecurity class as well as by high school students participating in GenCyber camps.
Intrusion Detection Systems (IDSs) are an important defense tool against sophisticated and ever-growing network attacks. These systems need to be evaluated against high-quality datasets to correctly assess their usefulness and compare their performance. We present an Intrusion Detection Dataset Toolkit (ID2T) for the creation of labeled datasets containing user-defined synthetic attacks. The architecture of the toolkit is provided for examination, and an example of an injected attack in real network traffic is visualized and analyzed. We further discuss the ability of the toolkit to create realistic synthetic attacks of high quality and low bias.