Bibliography
We introduce Active Dependency Mapping (ADM), a method for establishing dependency relations among a set of interdependent services. The approach is to artificially degrade network performance to infer which assets on the network support a particular process. Artificial degradation of the network environment could be transparent to users; run continuously, it could identify dependencies that are rare or occur only at certain timescales. A useful byproduct of this dependency analysis is a quantitative assessment of the resilience and robustness of the system. This technique is intriguing for hardening both enterprise networks and cyber-physical systems. We present a proof-of-concept experiment executed on a real-world set of interrelated software services. We assess the efficacy of the approach, discuss current limitations, and suggest options for future development of ADM.
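As a concrete illustration of the idea, the sketch below simulates ADM end to end in Python: one candidate link at a time is artificially delayed, and any asset whose degradation measurably slows the target process is inferred to be a dependency. The helpers and the `TRUE_DEPS` environment are toy stand-ins for real instrumentation (e.g., Linux tc/netem and an application probe), not the authors' implementation.

```python
import random

# Toy in-memory stand-in for the network: the monitored business process
# is assumed (for this sketch only) to depend on "db" and "auth".
TRUE_DEPS = {"db", "auth"}
_injected = {}          # host -> extra delay (s) currently injected

def degrade_link(host, delay):
    """Stand-in for real degradation (e.g. tc/netem on a router)."""
    _injected[host] = delay

def restore_link(host):
    _injected.pop(host, None)

def probe_process_latency():
    """Simulated end-to-end latency of the target process: a noisy
    baseline plus any injected delay on hosts it actually depends on."""
    base = 0.05 + random.uniform(0, 0.005)
    return base + sum(d for h, d in _injected.items() if h in TRUE_DEPS)

def map_dependencies(candidates, delay=0.2, threshold=1.5):
    """Degrade one candidate at a time and watch the target process."""
    baseline = probe_process_latency()
    deps = []
    for host in candidates:
        degrade_link(host, delay)
        try:
            slowed = probe_process_latency() > threshold * baseline
        finally:
            restore_link(host)
        if slowed:
            deps.append(host)
    return deps

print(map_dependencies(["db", "auth", "dns", "cache"]))  # -> ['db', 'auth']
```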
Network systems, such as transportation and water supply systems, play important roles in daily life and industrial production. However, a variety of disruptive events occur during their lifetime, causing serious losses. Because disruptions are inevitable, we should not only focus on improving the reliability or resistance of a system, but also pay attention to its ability to respond promptly and recover rapidly from disruptive events; that is, we need to pay more attention to resilience. In this paper, we describe two resilience models, quotient resilience and integral resilience, which measure the final recovered performance and the cumulative performance during the recovery process, respectively. Based on these two models, we optimize system recovery strategies after disruption, focusing on the repair sequence of the damaged components and the resource-allocation scheme. The proposed research can serve as guidance for prioritizing repair tasks and allocating resources reasonably.
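The abstract does not spell out the two models; a common formalization from the resilience literature, assumed here for concreteness, measures recovery relative to the pre-disruption performance level:

```latex
% A common formalization, assumed here (the abstract gives no formulas).
% P(t): system performance; t_0: pre-disruption reference time;
% t_d: disruption time; t_r: end of the recovery horizon.
\[
  R_{\mathrm{quotient}} \;=\; \frac{P(t_r) - P(t_d)}{P(t_0) - P(t_d)},
  \qquad
  R_{\mathrm{integral}} \;=\;
    \frac{\int_{t_d}^{t_r} P(t)\,\mathrm{d}t}{P(t_0)\,(t_r - t_d)}.
\]
```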
Nowadays, image mining plays a vital role in many areas of life, and numerous image-mining frameworks have been proposed for object recognition, object tracking, remote sensing, and medical image diagnosis. Nevertheless, research on image authentication based on image mining remains limited. This paper therefore presents an efficient combination of frequent pattern mining and digital watermarking to support the authentication of images transmitted over public networks. The proposed framework exploits robust image features to extract frequent patterns in the image data. The maximal relevant patterns are used to discriminate between textured and smooth blocks within the image, since textured blocks are more appropriate than smooth blocks for embedding secret data. Experimental results demonstrate the efficiency of the proposed framework in terms of stability and robustness against different kinds of attacks, and confirm its suitability for preserving image authenticity.
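To illustrate the embedding-site selection step, the sketch below classifies image blocks as textured or smooth; plain local variance stands in for the paper's maximal-frequent-pattern measure, so this is only an approximation of the described framework.

```python
import numpy as np

def classify_blocks(image: np.ndarray, block: int = 8, thresh: float = 100.0):
    """Split a grayscale image into block x block tiles and mark each as
    'textured' (good embedding site) or 'smooth'. Local variance is used
    here as a simple stand-in for the paper's frequent-pattern measure."""
    h, w = image.shape
    textured = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block].astype(float)
            if tile.var() > thresh:
                textured.append((r, c))
    return textured

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(len(classify_blocks(img)))   # number of candidate embedding blocks
```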
Decreasing hardware reliability makes robust firmware imperative for safety-critical applications. Hence, ensuring correct handling of errors in peripherals is a key objective during firmware design. To adequately support the robustness considerations of firmware designers during implementation, an efficient qualitative fault injection method is required. This paper presents a high-speed fault injection technique based on host-compiled firmware simulation that is suitable for analyzing the impact of transient faults on firmware behavior. Additionally, fault-set reduction by static code analysis avoids unnecessary injection of masked and equivalent faults. Applying the proposed fault injection technique to an industrial safety-relevant automotive system-on-chip (SoC) firmware demonstrates a speedup of at least three orders of magnitude compared to instruction-set-level simulation. In addition, a fault-set reduction of 78% is achieved. While significantly reducing the required fault injection time, the presented techniques provide feedback to the designer that is as accurate as that of existing state-of-the-art approaches.
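The core injection loop can be illustrated with a toy host-compiled model: a simulated firmware step is run once as a golden reference and once per single-bit transient fault, and faults whose outcome matches the golden run are the masked ones that static analysis would prune. All names below are illustrative, not the paper's tooling.

```python
# Toy sketch of simulation-based transient-fault injection: run a small
# "firmware" function against a golden reference while flipping one bit
# of a peripheral register per experiment.

def firmware_step(status_reg: int) -> str:
    """Simplified peripheral error-handling logic under test."""
    if status_reg & 0x01:        # error flag set?
        return "enter_safe_state"
    return "normal_operation"

def inject_and_compare(status_reg: int):
    golden = firmware_step(status_reg)
    outcomes = {}
    for bit in range(8):                      # one transient flip per run
        faulty = status_reg ^ (1 << bit)
        outcomes[bit] = (firmware_step(faulty) != golden)
    return outcomes  # bit -> True if the fault changed behavior

# Bits whose flips are masked (False) need no further injection runs,
# which is the intuition behind the paper's fault-set reduction.
print(inject_and_compare(0x00))
```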
With the increasing popularity of wearable devices, information has become much more easily available. However, personal information sharing still poses great challenges because of privacy issues. We propose the idea of a Visual Human Signature (VHS), which can represent each person uniquely even when captured in different views/poses by wearable cameras. We evaluate the performance of multiple effective modalities for recognizing an identity, including facial appearance, visual patches, facial attributes, and clothing attributes. We propose emphasizing significant dimensions and applying weighted voting fusion to incorporate the modalities and improve VHS recognition. By jointly considering multiple modalities, the VHS recognition rate reaches 51% on frontal images and 48% in a more challenging environment, and our approach surpasses the average-fusion baseline by 25% and 16%, respectively. We also introduce the Multiview Celebrity Identity Dataset (MCID), a new dataset containing hundreds of identities with different views and clothing for comprehensive evaluation.
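A minimal sketch of the weighted voting fusion step, assuming each modality produces a similarity score per gallery identity; the modality weights here are illustrative, not the learned emphases from the paper.

```python
import numpy as np

def weighted_voting_fusion(scores: dict, weights: dict):
    """scores[m] is a gallery-sized vector of similarity scores from
    modality m (face, patches, attributes, ...); each modality casts one
    vote for its best match, weighted by its assumed reliability."""
    gallery_size = len(next(iter(scores.values())))
    votes = np.zeros(gallery_size)
    for m, s in scores.items():
        votes[np.argmax(s)] += weights[m]
    return int(np.argmax(votes))

scores = {
    "face":     np.array([0.9, 0.2, 0.1]),
    "patches":  np.array([0.3, 0.8, 0.2]),
    "clothing": np.array([0.7, 0.1, 0.3]),
}
weights = {"face": 0.5, "patches": 0.3, "clothing": 0.2}  # illustrative
print(weighted_voting_fusion(scores, weights))            # -> identity 0
```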
We consider a class of robust optimization problems that we call “robust-to-dynamics optimization” (RDO). The input to an RDO problem is twofold: (i) a mathematical program (e.g., an LP, SDP, or IP), and (ii) a dynamical system (e.g., linear, nonlinear, discrete, or continuous dynamics). The objective is to maximize over the set of initial conditions that remain feasible forever under the dynamics. The focus of this paper is on the case where the optimization problem is a linear program and the dynamics are linear. We establish some structural properties of the feasible set and prove that if the linear system is asymptotically stable, then the RDO problem can be solved in polynomial time. We also outline a semidefinite programming based algorithm for providing upper bounds on robust-to-dynamics linear programs.
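For the LP-with-linear-dynamics case studied in the paper, the problem can be written as follows (notation ours): optimize only over initial conditions whose entire trajectory stays in the LP feasible set.

```latex
% RDO with a linear program and linear dynamics x_{k+1} = D x_k.
\[
  \max_{x_0 \in \mathbb{R}^n}\; c^{\top} x_0
  \quad \text{s.t.} \quad
  D^{k} x_0 \in \mathcal{P} \;\;\text{for all } k = 0, 1, 2, \ldots,
  \qquad
  \mathcal{P} = \{\, x : A x \le b \,\}.
\]
```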
We consider the problem of robust on-line optimization of a class of continuous-time nonlinear systems by using a discrete-time controller/optimizer, interconnected with the plant in a sampled-data structure. In contrast to classic approaches where the controller is updated after a fixed sufficiently long waiting time has passed, we design an event-based mechanism that triggers the control action only when the rate of change of the output of the plant is sufficiently small. By using this event-based update rule, a significant improvement in the convergence rate of the closed-loop dynamics is achieved. Since the closed-loop system combines discrete-time and continuous-time dynamics, and in order to guarantee robustness and semi-continuous dependence of solutions on parameters and initial conditions, we use the framework of hybrid set-valued dynamical systems to analyze the stability properties of the system. Numerical simulations illustrate the results.
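A toy sketch of the event-based mechanism, with an illustrative scalar plant and gains (not the paper's system): the discrete optimizer takes a step only when the output's rate of change falls below a threshold, i.e., when the plant has effectively settled, instead of waiting a fixed worst-case time.

```python
# Toy event-triggered sampled-data optimization: a first-order plant
# settles toward an unknown steady-state map h(u); the discrete optimizer
# takes a finite-difference gradient step only once the output's rate of
# change drops below a threshold (the triggering event).

def h(u):                      # unknown steady-state response (toy)
    return 4.0 - (u - 2.0) ** 2

def settle(u, y, dt=1e-3, eps=1e-2):
    """Run the continuous plant until |dy/dt| < eps (the event fires)."""
    while True:
        dy = 10.0 * (h(u) - y)          # fast first-order dynamics
        y += dt * dy
        if abs(dy) < eps:
            return y

u, y, delta, gain = 0.0, 0.0, 0.05, 0.2
for _ in range(200):
    y0 = settle(u, y)                   # wait for the event, not a fixed time
    y1 = settle(u + delta, y0)          # probe for a gradient estimate
    u += gain * (y1 - y0) / delta       # gradient-ascent update
    y = y1
print(round(u, 2))                      # approaches the maximizer u* = 2
```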
Risk-control optimization is of great significance for power system security. Probabilistic descriptions of parameter uncertainty are usually adopted in research on power system risk optimization. However, probabilistic uncertainty descriptions are insufficient when sample data are lacking. Non-probabilistic parameter uncertainties should therefore be considered, as they significantly influence the optimization results. To address this problem, this paper presents a robust optimization method for power system risk control that models non-probabilistic parameter uncertainty using information gap decision theory (IGDT). In the method, loads are modeled as non-probabilistic uncertain parameters, and a robust risk-control operation model is formulated. Solving the model yields the maximum parameter fluctuation under which a pre-specified target can still be met, together with the corresponding operating strategy. The proposed model is applied in simulation to risk control of the IEEE 30-bus system. The results provide valuable information for operators in risk management.
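The abstract does not give the model; a standard IGDT robustness formulation, assumed here, makes the idea concrete: find the largest uncertainty horizon α such that the risk target is met for every load realization within it.

```latex
% A standard IGDT robustness formulation (assumed; the paper's exact
% model is not given in the abstract). \tilde{d}: forecast load;
% q: operating strategy; C(q,d): risk/cost; C_c: pre-specified target.
\[
  U(\alpha, \tilde{d}) \;=\;
    \bigl\{\, d : \lvert d - \tilde{d} \rvert \le \alpha \lvert \tilde{d} \rvert \,\bigr\},
  \qquad
  \hat{\alpha}(q) \;=\;
    \max \Bigl\{\, \alpha \ge 0 :
      \max_{d \in U(\alpha, \tilde{d})} C(q, d) \le C_c \,\Bigr\}.
\]
```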
Selective encryption designates a technique that aims to scramble a message's content while preserving its syntax. Such an approach allows encryption to be transparent to middle-boxes and/or end-user devices, and to fit easily within existing pipelines. In this paper, we propose to apply this property to a real-time diffusion scenario, or broadcast, over an RTP session. The main challenge in this setting is preserving synchronization between encryption and decryption. Our solution is based on the Advanced Encryption Standard in counter mode, modified to meet our auto-synchronization requirement. Setting up the proposed synchronization scheme induces no latency and requires no additional bandwidth in the RTP session (no additional information is sent). Moreover, its parallel structure allows decryption to start on any given frame of the video while leaving considerable room for further optimization.
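A sketch of the auto-synchronization idea in Python, using the pyca/cryptography library: deriving the CTR counter block from RTP header fields lets the receiver decrypt any packet independently, with no extra signaling. The exact derivation below is an assumption for illustration; the paper's modified scheme is not specified in the abstract.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)

def counter_block(ssrc: int, seq: int, timestamp: int) -> bytes:
    """Derive the initial CTR counter block from RTP header fields so the
    receiver can resynchronize on any packet (illustrative layout)."""
    return (ssrc.to_bytes(4, "big") + timestamp.to_bytes(4, "big")
            + seq.to_bytes(2, "big") + bytes(6))  # low bytes: block counter

def crypt_payload(payload: bytes, ssrc: int, seq: int, ts: int) -> bytes:
    """CTR mode is symmetric: the same call encrypts and decrypts."""
    cipher = Cipher(algorithms.AES(KEY), modes.CTR(counter_block(ssrc, seq, ts)))
    ctx = cipher.encryptor()
    return ctx.update(payload) + ctx.finalize()

pkt = b"rtp payload bytes"
enc = crypt_payload(pkt, ssrc=0xDEADBEEF, seq=42, ts=90000)
assert crypt_payload(enc, 0xDEADBEEF, 42, 90000) == pkt  # lossless round trip
```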
Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
Cyber-physical systems (CPS) can potentially benefit a wide array of applications and areas. Here, the authors look at some of the challenges surrounding CPS, and consider a feasible solution for creating a robust, secure, and cost-effective architecture.
The EIIM model for entity resolution (ER) allows for the creation and maintenance of persistent entity identity structures. It accomplishes this through a collection of batch configurations that allow updates and asserted fixes to be made to the identity knowledge base (IKB). The model also provides a batch IR configuration that performs no maintenance activity but instead allows access to the identity information. This batch IR configuration is limited in a few ways: it is driven by the same rules used for maintaining the IKB, has no inherent method to identify "close" matches, and can only identify and return positive matches. By decoupling this configuration and moving it into an interactive role under the umbrella of an Identity Management Service, a more robust access method can be provided for the use of identity information. This more robust access improves the quality of the information along multiple Information Quality dimensions.
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These hashes find extensive applications in content authentication, image indexing for database search, and watermarking. Modern robust hashing algorithms consist of feature extraction, a randomization stage to introduce non-invertibility, followed by quantization and binary encoding to produce a binary hash. This paper describes a novel algorithm for generating an image hash based on Log-Polar transform features. The Log-Polar transform is part of the Fourier-Mellin transformation, often used in image recognition and registration due to its invariance to geometric operations. First, we show that the proposed perceptual hash is resistant to content-preserving operations such as compression, noise addition, and moderate geometric and filtering operations. Second, we illustrate the discriminative capability of our hash in rapidly distinguishing between two perceptually different images. Third, we study the security of our method for image authentication purposes. Finally, we show that the proposed hashing method can provide both excellent security and robustness.
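A simplified numpy sketch of such a hash: the centered FFT magnitude (the Fourier-Mellin connection) is sampled on a log-polar grid and binarized against its median. The key-dependent randomization and quantization stages of the full algorithm are omitted here for brevity.

```python
import numpy as np

def logpolar_hash(img: np.ndarray, n_r: int = 8, n_theta: int = 16) -> np.ndarray:
    """Sample the centered FFT magnitude on a log-polar grid and binarize
    against the median; a simplified, key-free sketch of a robust hash."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    cy, cx = np.array(mag.shape) // 2
    r_max = min(cy, cx) - 1
    radii = np.logspace(0, np.log10(r_max), n_r)            # log-spaced rings
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    feats = np.array([
        mag[int(cy + r * np.sin(t)), int(cx + r * np.cos(t))]
        for r in radii for t in thetas
    ])
    return (feats > np.median(feats)).astype(np.uint8)      # binary hash

img = np.random.rand(128, 128)
h1 = logpolar_hash(img)
h2 = logpolar_hash(img + 0.01 * np.random.rand(128, 128))   # mild distortion
print((h1 != h2).mean())    # small Hamming distance for a perturbed image
```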
Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
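For reference, the ℓ2,1 norm sums the Euclidean norms of a matrix's rows, so outlying training points contribute linearly rather than quadratically; a generic robust fitting objective of the kind described (not necessarily the paper's exact formulation) is shown below.

```latex
% l_{2,1} norm of a matrix M with rows m_i. Below, X holds training
% points, Y the globally aligned binary codes, W the hash projections;
% the objective is an illustration of l_{2,1}-robust fitting.
\[
  \lVert M \rVert_{2,1} \;=\; \sum_{i} \lVert m_{i\cdot} \rVert_{2},
  \qquad
  \min_{W}\; \lVert X W - Y \rVert_{2,1} \;+\; \lambda \lVert W \rVert_{F}^{2}.
\]
```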
Watermarking is a recently developed technique that currently dominates the world of security and digital processing for protecting digital commerce. The purpose of this work is twofold: first, to establish a state of the art that reviews existing watermarking methods and their performance; and second, to design, implement, and evaluate a new watermarking solution that aims to optimize the robustness-invisibility-capacity compromise. The proposed approach consists of applying frequency-domain watermarking based on singular value decomposition (SVD), exploiting a mosaic made from all video frames, and inserting a double signature in order to increase the capacity of the watermarking algorithm.
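The core singular-value embedding step can be sketched in a few lines of numpy. This toy, non-blind version operates on a synthetic single image with a well-separated spectrum, whereas the paper applies the idea to the mosaic of all video frames and adds a double signature.

```python
import numpy as np

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((64, 64)))
Q2, _ = np.linalg.qr(rng.standard_normal((64, 64)))
cover = Q1 @ np.diag(np.linspace(20.0, 1.0, 64)) @ Q2.T  # synthetic cover

def embed(cover, mark, alpha=0.05):
    """Classic additive SVD embedding: perturb the singular values."""
    U, S, Vt = np.linalg.svd(cover)
    return U @ np.diag(S + alpha * mark) @ Vt, S          # keep S for detection

def detect(marked, S_orig, alpha=0.05):
    """Non-blind detection from the marked image's singular values."""
    S_m = np.linalg.svd(marked, compute_uv=False)
    return (S_m - S_orig) / alpha

mark = rng.random(64)                       # watermark signal in [0, 1)
marked, S0 = embed(cover, mark)
print(np.corrcoef(detect(marked, S0), mark)[0, 1])  # ~1.0: mark recovered
```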
A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed. Compared with conventional adaptive filtering algorithms based on the minimum mean square error (MSE) criterion, the MCC-based algorithm shows better robustness against impulsive interference. However, its major drawback is the conflicting requirement between convergence speed and steady-state mean square error. In this letter, we use the convex combination method to overcome this tradeoff. Instead of minimizing the squared error to update the mixing parameter as in the conventional convex combination scheme, we introduce the method of maximizing the correntropy to make the proposed algorithm more robust against impulsive interference. Additionally, we report a novel weight-transfer method to further improve tracking performance. Good performance in terms of convergence rate and steady-state mean square error is demonstrated in plant identification scenarios that include impulsive interference and abrupt changes.
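A minimal numpy sketch of the two ingredients: MCC-LMS updates that down-weight impulsive errors through a Gaussian kernel, and a mixing parameter adapted by maximizing the correntropy of the combined error rather than minimizing its square. Step sizes, kernel width, and the plant are illustrative assumptions, and the letter's weight-transfer refinement is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, sigma = 5000, 8, 2.0
w_true = 0.5 * rng.standard_normal(L)        # toy plant to identify

w1, w2, a = np.zeros(L), np.zeros(L), 0.0    # fast filter, slow filter, mixer
mu1, mu2, mu_a = 0.05, 0.005, 1.0
for _ in range(N):
    x = rng.standard_normal(L)
    d = w_true @ x + 0.01 * rng.standard_normal()
    if rng.random() < 0.01:                  # occasional impulsive noise
        d += 50.0 * rng.standard_normal()
    e1, e2 = d - w1 @ x, d - w2 @ x
    k1 = np.exp(-e1**2 / (2 * sigma**2))     # correntropy kernel: near-zero
    k2 = np.exp(-e2**2 / (2 * sigma**2))     # weight for impulsive errors
    w1 += mu1 * k1 * e1 * x                  # MCC-LMS updates
    w2 += mu2 * k2 * e2 * x
    lam = 1.0 / (1.0 + np.exp(-a))           # convex mixing in (0, 1)
    e = lam * e1 + (1 - lam) * e2            # combined a-priori error
    # ascend the correntropy of the combined error w.r.t. a:
    a += mu_a * np.exp(-e**2 / (2 * sigma**2)) * e * (e2 - e1) * lam * (1 - lam)
    a = np.clip(a, -4.0, 4.0)

w = lam * w1 + (1 - lam) * w2
print(np.linalg.norm(w - w_true))            # small steady-state misalignment
```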
Very high resolution satellite imagery used to be a rare commodity, with infrequent satellite pass-over times over a specific area of interest precluding many useful applications. Today, more and more such satellite systems are available, and visual analysis and interpretation of imagery remain important for deriving relevant features and changes from satellite data. To allow efficient, robust, and routine image analysis for humanitarian purposes, semi-automated feature extraction is of increasing importance for operational emergency mapping tasks. In the framework of the European Earth observation program COPERNICUS and related research activities under the European Union's Seventh Framework Programme, substantial scientific developments and mapping services are dedicated to satellite-based humanitarian mapping and monitoring. In this paper, recent results in methodological research and the development of routine services in satellite mapping for humanitarian situational awareness are reviewed and discussed. Ethical aspects of the sensitivity and security of humanitarian mapping are deliberated. Furthermore, methods for monitoring and analyzing refugee/internally displaced persons camps in humanitarian settings are assessed. Advantages and limitations of object-based image analysis, sample-supervised segmentation, and feature extraction are presented and discussed.
Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verifying the correctness of the type system. Despite their benefits, these protections often end at the process boundary, that is, type safety holds within a program but usually not to the file system or communication with other programs. Existing operating system approaches to bridge this gap require the use of a single programming language or common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system which requires that all program input and output satisfy a recognizer before applications are permitted to further process it. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.
This paper presents a unified approach to the detection of network anomalies. Current state-of-the-art methods are often able to detect one class of anomalies at the cost of others. Our approach is based on using a Linear Dynamical System (LDS) to model network traffic. An LDS is the continuous-state analogue of a Hidden Markov Model (HMM) and can be computed using incremental methods to manage the high volume and velocity that characterize Big Data. Detailed experiments on synthetic and real network traces show a significant improvement in detection capability over competing approaches. In the process, we also address the robustness of network anomaly detection systems in a principled fashion.
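The core idea can be sketched with the simplest LDS, a scalar random-walk Kalman filter: samples whose normalized one-step prediction error (innovation) is improbably large are flagged. The paper's incrementally learned models are richer; the noise variances below are assumptions for the toy trace.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
traffic = np.cumsum(rng.normal(0, 1, T)) + 100   # baseline traffic (toy)
traffic[300:305] += 40                           # injected anomaly

q, r = 1.0, 4.0          # assumed process / observation noise variances
x, p = traffic[0], 1.0   # state estimate and its variance
alarms = []
for t, y in enumerate(traffic):
    p += q                                # predict (random-walk model)
    s = p + r                             # innovation variance
    nu = y - x                            # innovation (prediction error)
    if nu**2 / s > 9.0:                   # beyond 3 sigma: flag anomaly
        alarms.append(t)
    k = p / s                             # Kalman gain
    x += k * nu                           # measurement update
    p *= (1 - k)
print(alarms)   # detections cluster around the injected anomaly at t = 300
```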
The Internet of Things is a vision that broadens the scope of the Internet by enabling physical objects to identify themselves to participating entities. This innovative concept enables a physical device to represent itself in the digital world. There are many speculations and forecasts about Internet of Things devices. However, most of them are vendor-specific and lack a unified standard, which hinders their seamless integration and interoperable operation. Another major concern is the lack of security features in these devices and their corresponding products. Most of them are resource-starved and unable to support computationally complex, resource-consuming security algorithms. In this paper, we propose a lightweight mutual authentication scheme that validates the identities of the participating devices before engaging them in communication for resource observation. Our scheme incurs low connection overhead and provides a robust defence against various types of attacks.
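As an illustration of the general pattern (not the paper's exact protocol), a lightweight challenge-response mutual authentication over a pre-shared key can be sketched with Python's standard library: each side proves knowledge of the key by answering the other's fresh nonce.

```python
import hashlib
import hmac
import secrets

PSK = secrets.token_bytes(16)   # pre-shared key provisioned on both sides

def tag(*parts: bytes) -> bytes:
    """HMAC over the concatenated protocol fields."""
    return hmac.new(PSK, b"|".join(parts), hashlib.sha256).digest()

# 1. device -> server: device nonce (fresh, counters replay)
n_dev = secrets.token_bytes(8)
# 2. server -> device: server nonce + proof over both nonces
n_srv = secrets.token_bytes(8)
proof_srv = tag(b"srv", n_dev, n_srv)
assert hmac.compare_digest(proof_srv, tag(b"srv", n_dev, n_srv))  # device verifies
# 3. device -> server: proof in the opposite direction
proof_dev = tag(b"dev", n_srv, n_dev)
assert hmac.compare_digest(proof_dev, tag(b"dev", n_srv, n_dev))  # server verifies
print("mutual authentication complete")
```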
Wireless networks, whether ad hoc or enterprise-level, are vulnerable due to the open medium and, usually, weak authentication, authorization, encryption, monitoring, and accounting mechanisms. We examine various wireless vulnerability situations as well as the minimal features required to protect, monitor, account, authenticate, and authorize nodes, users, and computers on the network. We also describe aspects of several IEEE security standards, both those already ratified and those still in draft.
The technology of vehicle video detection and tracking has been playing an important role in the ITS (Intelligent Transportation Systems) field in recent years. The occlusion phenomenon among vehicles is one of the most difficult problems in vehicle tracking. To handle occlusion, this paper proposes an effective solution that applies a Markov Random Field (MRF) model to traffic images. The contour of each vehicle is first detected using background subtraction; then a number of blocks carrying the vehicle's texture and motion information are filled in inside each vehicle. Several kinds of information are extracted from each block for the subsequent tracking. For each occluded block, two groups of clique functions are defined in the MRF model, representing spatial correlation and motion coherence, respectively. By calculating each occluded block's total energy, we resolve the attribution of occluded blocks. Experimental results show that our method can handle occlusion effectively and track each vehicle continuously.
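A toy sketch of the attribution step: an occluded block is assigned to the vehicle that minimizes a total energy combining a spatial-correlation term and a motion-coherence term. The potentials below are illustrative stand-ins for the paper's clique functions.

```python
import numpy as np

def block_energy(block_feat, block_motion, vehicle_feats, vehicle_motion,
                 w_spatial=1.0, w_motion=1.0):
    """Total energy of assigning a block to a vehicle: appearance distance
    to the vehicle's nearest block plus disagreement with its motion."""
    spatial = min(np.linalg.norm(block_feat - f) for f in vehicle_feats)
    motion = np.linalg.norm(block_motion - vehicle_motion)
    return w_spatial * spatial + w_motion * motion

block_f, block_m = np.array([0.8, 0.1]), np.array([2.0, 0.0])
cars = {
    "A": ([np.array([0.7, 0.2])], np.array([2.1, 0.1])),
    "B": ([np.array([0.1, 0.9])], np.array([-1.0, 0.5])),
}
# attribute the occluded block to the minimum-energy vehicle
print(min(cars, key=lambda c: block_energy(block_f, block_m, *cars[c])))  # 'A'
```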
A small battery-driven bio-patch, attached to the human body to monitor vital signals such as temperature, humidity, heart activity, and muscle and brain activity, is an example of a highly resource-constrained system with the demanding task of correctly assessing both the state of the monitored subject (healthy, normal, weak, ill, improving, worsening, etc.) and its own capabilities (attached to the subject, working sensors, sufficient energy supply, etc.). These and many other systems would benefit from a sense of themselves and their environment to improve the robustness and sensibility of their behavior. Although we can draw inspiration from fields such as neuroscience, robotics, AI, and control theory, the tight resource and energy constraints mean we must understand precisely which technique leads to a particular feature of awareness, how it contributes to improved behavior, and how it can be implemented cost-efficiently in hardware or software. We review the concepts of environment and self-models, semantic interpretation, semantic attribution, history, goals and expectations, prediction, and self-inspection, and discuss how they contribute to awareness and self-awareness and to improved robustness and sensibility of behavior.
The Philips audio fingerprint [1] has been used for years, but its robustness against external noise has not been studied rigorously. This paper shows that the Philips fingerprint is noise-resistant and capable of recognizing music corrupted by noise at a -4 to -7 dB signal-to-noise ratio. In addition, the drawbacks of the Philips fingerprint are addressed by utilizing a “Power Mask” in conjunction with the Philips fingerprint during the matching process. This Power Mask is a weight matrix applied to the fingerprint bits, which allows mismatched bits to be penalized according to their relevance in the fingerprint. The effectiveness of the proposed fingerprint was evaluated in experiments using a database of 1030 songs and 1184 query files that were heavily corrupted by two types of noise at varying levels. Our experiments show that the proposed method significantly improves the noise resistance of the standard Philips fingerprint.
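The matching idea can be sketched as a weighted Hamming distance: mismatches on low-weight (unreliable) bits are discounted. In the toy below, random weights stand in for the power-derived mask, and noise is made to flip exactly the least-reliable bits to show the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, 1024)          # stored fingerprint bits
mask = rng.random(1024)                 # per-bit reliability weights (toy;
                                        # derived from signal power in the paper)
query = ref.copy()
weak = np.argsort(mask)[:200]           # noise tends to corrupt the least
query[weak] ^= 1                        # reliable bits (toy assumption)

plain_ber = np.mean(query != ref)                       # unweighted bit errors
weighted = np.sum(mask * (query != ref)) / mask.sum()   # Power-Mask-style score
print(plain_ber, weighted)   # the weighted score discounts unreliable bits
```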
This paper proposes a high-performance audio fingerprint extraction method for identifying TV commercial advertisements. In the proposed method, salient audio peak-pair fingerprints based on the constant-Q transform (CQT) are hashed and stored so that they can be compared to one another efficiently. Experimental results confirm that the proposed method is quite robust under different noise conditions and improves the accuracy of the audio fingerprinting system in real noisy environments.
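A toy sketch of peak-pair hashing on a time-frequency matrix (a random array stands in for the CQT magnitude spectrogram): each salient peak is paired with a few nearby later peaks and stored as an (f1, f2, Δt) hash, a representation that tends to survive noise because only the strongest peaks are used.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((48, 200))               # freq bins x time frames (stand-in)

def peaks(S, quantile=0.995):
    """Keep only the strongest cells as 'salient peaks' (toy picker)."""
    fs, ts = np.where(S > np.quantile(S, quantile))
    return sorted(zip(ts, fs))          # (time, freq), time-ordered

def peak_pair_hashes(S, fan_out=3, max_dt=20):
    """Pair each peak with up to fan_out later peaks within max_dt frames."""
    hashes = set()
    pts = peaks(S)
    for i, (t1, f1) in enumerate(pts):
        for t2, f2 in pts[i + 1:i + 1 + fan_out]:
            if 0 < t2 - t1 <= max_dt:
                hashes.add((f1, f2, t2 - t1))
    return hashes

db = peak_pair_hashes(S)
query = peak_pair_hashes(S + 0.05 * rng.random((48, 200)))  # noisy copy
print(len(db & query) / len(db))        # substantial overlap despite noise
```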