Bibliography
The serializability of transactions is the key property ensuring that transactions are processed correctly, particularly when several transactions access the same data concurrently or when dependency relationships exist between running sub-transactions. A transaction that has been marked as malicious can compromise the serializability of the running system. To address this, we propose an intrusion-tolerant scheme that ensures the continuity of the running transactions. A transaction dependency graph is used by the CDC to decide which data items and transactions are threatened by a malicious activity. We explain how to use the proposed scheme and illustrate its behavior and efficiency against a compromised transaction in a cloud-of-databases environment. Several issues must be considered when processing a set of interleaved transactions in a transaction-based environment. In most cases, these issues stem from concurrent access to the same data by several transactions or from dependency relationships between running transactions. Serializability may be affected if a transaction belonging to the processing node is compromised.
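To make the idea concrete, the following minimal Python sketch (the function and data names are ours, purely illustrative, not the paper's API) shows how a transaction dependency graph can be traversed to flag every transaction threatened by a compromised one through its read-from dependencies:

    from collections import defaultdict, deque

    def threatened_set(dependencies, compromised):
        """Given edges (t1, t2) meaning t2 read data written by t1, return
        every transaction reachable from the compromised set."""
        graph = defaultdict(list)
        for src, dst in dependencies:
            graph[src].append(dst)
        threatened, queue = set(compromised), deque(compromised)
        while queue:
            txn = queue.popleft()
            for dep in graph[txn]:
                if dep not in threatened:
                    threatened.add(dep)
                    queue.append(dep)
        return threatened

    # T2 read data written by T1, and T3 read from both T2 and T4.
    print(threatened_set([("T1", "T2"), ("T2", "T3"), ("T4", "T3")], {"T1"}))
    # prints {'T1', 'T2', 'T3'} (in some order)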
We consider the problem of enabling robust range estimation of the eigenvalue decomposition (EVD) algorithm for a reliable fixed-point design. The simplicity of fixed-point circuitry makes it tempting to implement EVD algorithms in fixed-point arithmetic. Working towards an effective fixed-point design, integer bit-width allocation is a significant step that has a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of the existing range estimation methods when deriving bounds for the variables of the EVD algorithm. In light of these circumstances, we introduce a range estimation approach based on vector and matrix norm properties, together with a scaling procedure that maintains all the assets of an analytical method. The method derives robust and tight bounds for the variables of the EVD algorithm. The bounds derived using the proposed approach remain the same for any input matrix and are also independent of the number of iterations and the size of the problem. Several benchmark hyperspectral data sets have been used to evaluate the efficiency of the proposed technique. It was found that, with the proposed range estimation approach, all the variables generated during the computation of the Jacobi EVD are bounded within ±1.
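A minimal numpy sketch of the norm-based idea, assuming Frobenius-norm scaling of the input as the pre-processing step (the paper's exact scaling procedure may differ): after scaling, the spectral norm is at most 1; every entry of a symmetric matrix is bounded by its spectral norm, Givens rotations preserve that norm, and the eigenvector matrix is orthogonal, so all intermediates stay within ±1:

    import numpy as np

    def scaled_jacobi_evd(A, sweeps=10):
        """Cyclic Jacobi EVD on a symmetric matrix scaled by its Frobenius
        norm; returns eigenvalues, eigenvectors, and the largest magnitude
        seen among intermediate entries (stays <= 1)."""
        A = np.array(A, dtype=float)
        A /= np.linalg.norm(A, 'fro')          # scaling step
        n = A.shape[0]
        V = np.eye(n)
        peak = np.abs(A).max()
        for _ in range(sweeps):
            for p in range(n - 1):
                for q in range(p + 1, n):
                    theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q], J[q, p] = s, -s
                    A, V = J.T @ A @ J, V @ J
                    peak = max(peak, np.abs(A).max(), np.abs(V).max())
        return np.diag(A), V, peak

    rng = np.random.default_rng(0)
    M = rng.standard_normal((6, 6)); M = (M + M.T) / 2
    print(scaled_jacobi_evd(M)[2])   # largest intermediate magnitude <= 1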
This work investigates the anonymous tag cardinality estimation problem in radio frequency identification systems with a frame slotted aloha-based protocol. For privacy and security, each tag, instead of sending its identity upon receiving the reader's request, responds with only one bit in a randomly chosen time slot of the frame. As a result, each slot with no response is observed as being in an empty state, while the others are non-empty. This information can be used for tag cardinality estimation. Nevertheless, under the effects of fading and noise, time slots with tag responses might be observed as empty, while those with no response might be detected as non-empty, a phenomenon known as false detection. The performance of conventional estimation methods is thus degraded by the inaccurate observations. To cope with this issue, we propose a new estimation algorithm based on the expectation-maximization method. Both the tag cardinality and the probability of false detection are iteratively estimated to maximize a likelihood function. Computer simulations are provided to show the merit of the proposed method.
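The mechanics of such an iteration can be sketched as follows, assuming a frame of L slots, a single observed count of empty slots, and a symmetric false-detection probability (a simplified single-frame illustration of the E and M steps, not the authors' exact algorithm, which works with richer observations):

    import numpy as np

    def em_tag_estimate(k_empty, L, iters=50):
        """Jointly estimate the tag count n and false-detection probability
        pf from the number of slots observed as empty in a frame of L."""
        n, pf = float(L), 0.05                    # initial guesses
        for _ in range(iters):
            p0 = (1 - 1 / L) ** n                 # P(slot truly empty | n)
            q = p0 * (1 - pf) + (1 - p0) * pf     # P(slot observed empty)
            # E-step: posterior probability that a slot was truly empty.
            g_empty = p0 * (1 - pf) / q           # slots observed empty
            g_busy = p0 * pf / (1 - q)            # slots observed non-empty
            true_empty = k_empty * g_empty + (L - k_empty) * g_busy
            # M-step: pf is the expected fraction of flipped observations;
            # n solves (1 - 1/L)^n = expected fraction of truly empty slots.
            flips = k_empty * (1 - g_empty) + (L - k_empty) * g_busy
            pf = flips / L
            n = np.log(true_empty / L) / np.log(1 - 1 / L)
        return n, pf

    print(em_tag_estimate(k_empty=30, L=128))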
Link quality protocols employ link quality estimators to collect statistics on the wireless link, either independently or cooperatively among the sensor nodes. Furthermore, link quality routing protocols for wireless sensor networks may modify an estimator to meet their needs. Link quality estimators are, however, vulnerable to malicious attacks that exploit them. A malicious node may share false information with its neighboring sensor nodes to affect the computations of their estimates; it can thus behave maliciously so that its neighbors gather incorrect statistics about their wireless links. This paper aims to detect malicious nodes that manipulate the link quality estimator of the routing protocol. To accomplish this task, the MINTROUTE and CTP routing protocols are selected and extended with intrusion detection schemes (IDSs) for further investigation. It is shown that these two routing protocols possess inherent vulnerabilities capable of disrupting the link quality calculations, and that malicious nodes abusing such vulnerabilities can be detected through operational detection mechanisms. The overall performance of the new LQR protocol with IDS features is evaluated and validated in terms of detection rates and false alarm rates.
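A toy version of such a detection check, in Python (thresholds and names are illustrative, not the actual MINTROUTE/CTP IDS logic): compare each neighbor's advertised link quality against the locally measured packet reception ratio and flag large discrepancies:

    def detect_lqe_manipulation(advertised_lq, local_prr, threshold=0.3):
        """Flag neighbors whose advertised link quality deviates from the
        locally measured packet reception ratio by more than `threshold`."""
        suspects = []
        for node, advertised in advertised_lq.items():
            if abs(advertised - local_prr.get(node, advertised)) > threshold:
                suspects.append(node)
        return suspects

    # Node 7 claims a far better link than the one we actually observe.
    print(detect_lqe_manipulation({5: 0.90, 7: 0.95}, {5: 0.85, 7: 0.40}))  # [7]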
This paper presents a framework for a privacy-preserving video delivery system that fulfills users' privacy demands. The proposed framework leverages the inference channels in sensitive behavior prediction and object tracking in a video surveillance system for sequence privacy protection. For such a goal, we need to capture the different pieces of evidence that are used to infer identity. Temporal, spatial, and context features are extracted from the surveillance video as the observations used to perceive the privacy demands and their correlations. Taking advantage of quantifying the various pieces of evidence and utility, we let users subscribe to videos with a viewer-dependent pattern. We implement a prototype system for off-line and on-line requirements in two typical monitoring scenarios and conduct extensive experiments. The evaluation results show that our system can efficiently satisfy users' privacy demands while saving over 25% more video information compared to traditional video privacy protection schemes.
We investigate the problem of constructing exponentially converging estimates of the state of a continuous-time system from state measurements transmitted via a limited-data-rate communication channel, so that only quantized and sampled measurements of continuous signals are available to the estimator. Following prior work on topological entropy of dynamical systems, we introduce a notion of estimation entropy which captures this data rate in terms of the number of system trajectories that approximate all other trajectories with desired accuracy. We also propose a novel alternative definition of estimation entropy which uses approximating functions that are not necessarily trajectories of the system. We show that the two entropy notions are actually equivalent. We establish an upper bound for the estimation entropy in terms of the sum of the system's Lipschitz constant and the desired convergence rate, multiplied by the system dimension. We propose an iterative procedure that uses quantized and sampled state measurements to generate state estimates that converge to the true state at the desired exponential rate. The average bit rate utilized by this procedure matches the derived upper bound on the estimation entropy. We also show that no other estimator (based on iterative quantized measurements) can perform the same estimation task with bit rates lower than the estimation entropy. Finally, we develop an application of the estimation procedure in determining, from the quantized state measurements, which of two competing models of a dynamical system is the true model. We show that under a mild assumption of exponential separation of the candidate models, detection is always possible in finite time. Our numerical experiments with randomly generated affine dynamical systems suggest that in practice the algorithm always works.
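For reference, the dimensional form of the stated upper bound can be written as follows (with n the state dimension, L the Lipschitz constant of the dynamics, and α the desired exponential convergence rate; the log₂ e factor merely converts from nats to bits and follows our reading of the abstract rather than the paper's exact normalization):

    \[
      h_{\mathrm{est}}(\alpha) \;\le\; n \, (L + \alpha) \, \log_2 e .
    \]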
Keeping a driver focused on the road is one of the most critical steps in ensuring the safe operation of a vehicle. The Strategic Highway Research Program 2 (SHRP2) has over 3,100 recorded videos of volunteer drivers collected during a period of 2 years. This extensive naturalistic driving study (NDS) contains over one million hours of video and associated data that could aid safety researchers in understanding where the driver's attention is focused. Manual analysis of this data is infeasible; therefore, efforts are underway to develop automated feature extraction algorithms to process and characterize the data. The real-world nature, volume, and acquisition conditions are unmatched in the transportation community, but there are also challenges because the data has relatively low resolution, high compression rates, and differing illumination conditions. A smaller dataset, the head pose validation study, is available, which used the same recording equipment as SHRP2 but is more easily accessible and has fewer privacy constraints. In this work we report initial head pose accuracy using commercial and open-source face pose estimation algorithms on the head pose validation data set.
The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance, including medicine and public policy. The results of these analyses can be considered a public good that benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks for the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether or not to contribute at all. From the analyst's perspective, we investigate to what degree the analyst has flexibility to set requirements for data precision, so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this trade-off scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.
This study presents a spatial analysis of a Dengue Fever (DF) outbreak using a Geographic Information System (GIS) in the state of Selangor, Malaysia. DF is an Aedes mosquito-borne disease. The aim of the study is to map the spread of the DF outbreak in Selangor and to identify high-risk areas by producing a risk map using GIS tools. The data used were DF cases recorded in 2012, obtained from the Ministry of Health, Malaysia. The analysis was carried out using Moran's I, Average Nearest Neighbor (ANN), Kernel Density Estimation (KDE), and buffer analysis in GIS. The Moran's I analysis indicates that the distribution pattern of DF in Selangor is clustered. The ANN analysis, by contrast, shows a dispersed pattern, with a ratio greater than 1. The third analysis used KDE to locate hot spots; the results show that several districts are classified as high-risk areas, namely Ampang, Damansara, Kapar, Kajang, Klang, Semenyih, Sungai Buloh, and Petaling. The buffer analysis of areas ranging from 200 m to 500 m above sea level shows a clustered pattern in which the most frequent cases of the year occur at the same locations. This demonstrates that analysis based on spatial statistics, spatial interpolation, and buffer analysis can serve as a method for controlling and locating DF infection with the aid of GIS.
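For reference, the clustering test behind the first analysis is the global Moran's I statistic, sketched here in Python with hypothetical district counts and a chain adjacency (the standard formula, not the study's data):

    import numpy as np

    def morans_i(values, weights):
        """Global Moran's I:
        I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
        where W is the sum of all spatial weights w_ij."""
        x = np.asarray(values, dtype=float)
        w = np.asarray(weights, dtype=float)
        z = x - x.mean()
        return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

    cases = [30, 28, 25, 5]                               # hypothetical counts
    w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
    print(morans_i(cases, w))   # positive values indicate spatial clustering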
Cloud Computing has emerged as a paradigm for delivering on-demand resources, giving customers access to infrastructure and applications as per their requirements on a subscription basis. An exponential increase in the number of cloud services in the past few years provides more options for customers to choose from. To assist customers in selecting the most trustworthy cloud provider, a unified trust evaluation framework is needed. Trust helps in estimating the competency of a resource provider in completing a task, thus enabling users to select the best resources in the heterogeneous cloud infrastructure. Trust estimates obtained using the AHP process exhibit a deviation for parameters that are not in direct proportion to the contributing attributes. Such deviation can be removed using the Fuzzy AHP model. In this paper, a Fuzzy AHP-based hierarchical trust model is proposed to rate service providers and their various plans for Infrastructure as a Service.
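As background for the trust model, classical (crisp) AHP derives criterion weights as the normalized principal eigenvector of a reciprocal pairwise-comparison matrix; the fuzzy variant used in the paper replaces the crisp entries with fuzzy numbers. A minimal sketch with a hypothetical comparison of three trust attributes on Saaty's 1-9 scale:

    import numpy as np

    def ahp_weights(pairwise):
        """Normalized principal eigenvector of a pairwise-comparison matrix."""
        A = np.asarray(pairwise, dtype=float)
        vals, vecs = np.linalg.eig(A)
        principal = vecs[:, np.argmax(vals.real)].real
        return principal / principal.sum()

    # Hypothetical comparisons: availability vs. reliability vs. security.
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    print(ahp_weights(A))   # criterion weights summing to 1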
Nowadays, Online Social Networks (OSNs) are very popular and have become an integral part of our lives. People depend on Online Social Networks for various purposes. The activities of most users are normal, but a few users exhibit unusual and suspicious behavior. We term this suspicious and unusual behavior malicious behavior. Malicious behavior in Online Social Networks includes a wide range of unethical activities and actions performed by individuals or communities to manipulate the thought processes of OSN users in order to serve their vested interests. Such malicious behavior needs to be checked and its effects minimized, which requires a proper detection and containment strategy. Such a strategy would protect millions of users across OSNs from misinformation and security threats. In this paper, we discuss the different studies performed in the area of malicious behavior analysis and propose a framework for the detection of malicious behavior in OSNs.
A fundamental drawback of current anomaly detection systems (ADSs) is the ability of a skilled attacker to evade detection. This is due to the flawed assumption that an attacker does not have any information about an ADS. Advanced persistent threats that are capable of monitoring network behavior can always estimate some information about ADSs, which makes these ADSs susceptible to evasion attacks. Hence, in this paper, we first assume the role of an attacker to launch evasion attacks on anomaly detection systems. We show that ADSs can be completely paralyzed by parameter estimation attacks. We then present a mathematical model to measure the evasion margin, with the aim of understanding the science of evasion due to ADS design. Finally, to minimize the evasion margin, we propose a key-based randomization scheme for existing ADSs and discuss its robustness against evasion attacks. Case studies are presented to illustrate the design methodology, and extensive experimentation is performed to corroborate the results.
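The flavor of such a scheme can be illustrated with a short Python sketch (entirely our simplification, not the paper's construction): a secret key drives a per-window perturbation of the detection threshold, so an attacker probing the ADS cannot estimate a fixed operating point:

    import hmac, hashlib

    def randomized_threshold(key, window_id, base=0.8, spread=0.1):
        """Derive a keyed, per-window detection threshold in
        [base - spread, base + spread]."""
        digest = hmac.new(key, str(window_id).encode(), hashlib.sha256).digest()
        offset = (digest[0] / 255 - 0.5) * 2 * spread
        return base + offset

    key = b"secret-ads-key"
    print([round(randomized_threshold(key, w), 3) for w in range(5)])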
This paper proposes a modified empirical-mode decomposition (EMD) filtering-based adaptive dynamic phasor estimation algorithm for the removal of the exponentially decaying dc offset. The discrete Fourier transform cannot obtain an accurate phasor of the fundamental frequency component in digital protective relays under dynamic system fault conditions because the characteristics of the exponentially decaying dc offset are not consistent. EMD is a fully data-driven, not model-based, adaptive filtering procedure for extracting signal components. However, the original EMD technique has high computational complexity and requires a long data series. In this paper, a short-data-series-based EMD filtering procedure is proposed, and an optimum Hermite polynomial fitting (OHPF) method is used in this modified procedure. The proposed filtering technique has high accuracy and convergence speed, and is well suited to relay applications. This paper illustrates the characteristics of the proposed technique and evaluates its performance using computer-simulated signals, PSCAD/EMTDC-generated signals, and real power system fault signals.
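The problem the EMD prefilter addresses is easy to reproduce: a one-cycle DFT phasor estimate is exact for a pure fundamental but biased once a decaying dc offset is superimposed. A minimal numpy sketch (sampling parameters are assumed for illustration):

    import numpy as np

    def dft_phasor(samples):
        """One-cycle DFT phasor of the fundamental (N samples per cycle)."""
        N = len(samples)
        n = np.arange(N)
        return (2 / N) * np.sum(samples * np.exp(-2j * np.pi * n / N))

    N, f, fs = 32, 50.0, 1600.0            # 32 samples per 50 Hz cycle
    t = np.arange(N) / fs
    clean = np.cos(2 * np.pi * f * t)
    dc = 0.8 * np.exp(-t / 0.05)           # decaying dc offset, tau = 50 ms
    print(abs(dft_phasor(clean)))          # ~1.0, exact
    print(abs(dft_phasor(clean + dc)))     # biased by the dc component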
A novel physical layer authentication scheme is proposed in this paper by exploiting the time-varying carrier frequency offset (CFO) associated with each pair of wireless communication devices. In realistic scenarios, the radio frequency oscillators in each transmitter-and-receiver pair exhibit device-dependent biases relative to the nominal oscillating frequency. The combination of these biases and mobility-induced Doppler shift, characterized as a time-varying CFO, can be used as a radiometric signature for wireless device authentication. In the proposed authentication scheme, the variable CFO values at different communication times are first estimated. Kalman filtering is then employed to predict the current value by tracking the past CFO variation, which is modeled as an autoregressive random process. To achieve the proposed authentication, the current CFO estimate is compared with the Kalman-predicted CFO using hypothesis testing to determine whether the signal has followed a consistent CFO pattern. An adaptive CFO variation threshold is derived for device discrimination according to the signal-to-noise ratio and the Kalman prediction error. In addition, a software-defined radio (SDR) based prototype platform has been developed to validate the feasibility of using the CFO for authentication. Simulation results further confirm the effectiveness of the proposed scheme in multipath fading channels.
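The tracking-and-testing step can be sketched with a scalar Kalman filter (model and gate parameters below are illustrative assumptions, not the paper's tuned values): the CFO is modeled as an AR(1) process, and a frame is flagged when the measured CFO falls outside a gate around the one-step prediction:

    import numpy as np

    def cfo_authenticate(cfo_estimates, a=0.95, q=1e-4, r=1e-3, gate=3.0):
        """Track x_k = a*x_{k-1} + w_k with a scalar Kalman filter and flag
        frames whose CFO deviates from the prediction by > gate std devs."""
        x, p = cfo_estimates[0], r
        flags = [False]
        for z in cfo_estimates[1:]:
            x_pred, p_pred = a * x, a * a * p + q        # predict
            innovation = z - x_pred
            s = p_pred + r                               # innovation variance
            flags.append(abs(innovation) > gate * np.sqrt(s))
            k = p_pred / s                               # Kalman gain
            x, p = x_pred + k * innovation, (1 - k) * p_pred
        return flags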
Detecting stationary crowd groups and analyzing their behaviors have important applications in crowd video surveillance, but have rarely been studied. The contributions of this paper are twofold. First, a stationary crowd detection algorithm is proposed to estimate the stationary time of foreground pixels. It employs spatial-temporal filtering and motion filtering in order to be robust to noise caused by occlusions and crowd clutter. Second, in order to characterize the emergence and dispersal processes of stationary crowds and their behaviors during the stationary periods, three attributes are proposed for quantitative analysis. These attributes are recognized with a set of proposed crowd descriptors which extract visual features from the results of stationary crowd detection. The effectiveness of the proposed algorithms is shown through experiments on a benchmark dataset.
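The core bookkeeping of the first contribution can be sketched in a few lines (a simplification of the idea, omitting the paper's spatial-temporal and motion filtering): accumulate per-pixel stationary time where the foreground is static and reset it where motion occurs:

    import numpy as np

    def update_stationary_time(stationary, foreground, moving):
        """Per-frame update of a per-pixel stationary-time map; `foreground`
        and `moving` are boolean masks of the current frame."""
        stationary = np.where(foreground & ~moving, stationary + 1, stationary)
        stationary[moving] = 0
        return stationary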
This paper proposes a fast human detection algorithm for video surveillance in emergencies. First, through background subtraction based on a single Gaussian model combined with frame differencing, we obtain the target mask, which is then refined by Gaussian filtering and dilation. Next, the interest points of the head are obtained from the target mask and edge detection. Finally, by detecting these points we can track the head and count the number of people based on the frequency of the moving target at the same place. Simulation results show that the algorithm can detect moving objects quickly and accurately.
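The single-Gaussian background model in the first step can be sketched as follows (learning rate and gate values are assumed): a pixel is foreground when it deviates from the running mean by more than k standard deviations, and the background statistics are updated only where the scene is background:

    import numpy as np

    def single_gaussian_foreground(frame, mean, var, alpha=0.02, k=2.5):
        """Per-pixel single-Gaussian background subtraction with a running
        update of the mean and variance (arrays of the frame's shape)."""
        frame = frame.astype(float)
        fg = np.abs(frame - mean) > k * np.sqrt(var)
        mean = np.where(fg, mean, (1 - alpha) * mean + alpha * frame)
        var = np.where(fg, var, (1 - alpha) * var + alpha * (frame - mean) ** 2)
        return fg, mean, var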
To reduce the human effort of browsing long surveillance videos, synopsis videos have been proposed. Traditional synopsis video generation, which applies optimization to video tubes, is very time consuming and infeasible for real-time online generation. This dilemma significantly reduces the feasibility of synopsis video generation in practical situations. To solve this problem, the synopsis video generation problem is formulated in this paper as a maximum a posteriori probability (MAP) estimation problem, where the positions and appearing frames of video objects are chronologically rearranged in real time without the need to know their complete trajectories. Moreover, a synopsis table is employed with MAP estimation to decide the temporal locations of incoming foreground objects in the synopsis video without requiring an optimization procedure. As a result, the computational complexity of the proposed video synopsis generation method can be significantly reduced. Furthermore, as it does not require prescreening the entire video, this approach can be applied to online streaming videos.
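The role of the synopsis table can be illustrated with a greedy sketch (our simplification of the mechanism, not the paper's MAP formulation): keep an occupancy count per synopsis frame and drop each incoming object tube at the start position with the least overlap:

    import numpy as np

    def place_tube(synopsis_table, tube_length):
        """Pick the start frame minimizing overlap with already-placed tubes,
        then register the new tube in the occupancy table."""
        L = len(synopsis_table)
        costs = [synopsis_table[s:s + tube_length].sum()
                 for s in range(L - tube_length + 1)]
        start = int(np.argmin(costs))
        synopsis_table[start:start + tube_length] += 1
        return start

    table = np.zeros(300)          # occupancy count per synopsis frame
    print(place_tube(table, 40))   # 0: first tube lands at the start
    print(place_tube(table, 40))   # 40: next tube avoids the occupied span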
The electric network frequency (ENF) criterion is a recently developed technique for audio timestamp identification, which involves matching an extracted ENF signal against reference data. For nearly a decade, the conventional matching criterion has been based on the minimum mean squared error (MMSE) or the maximum correlation coefficient. However, the corresponding performance is highly limited by low signal-to-noise ratio, short recording durations, frequency resolution problems, and so on. This paper presents a threshold-based dynamic matching algorithm (DMA) capable of autocorrecting noise-affected frequency estimates. The threshold is chosen according to the frequency resolution determined by the short-time Fourier transform (STFT) window size. A penalty coefficient is introduced to monitor the autocorrection process and finally determine the estimated timestamp. It is then shown that the DMA generalizes the conventional MMSE method. By considering the mainlobe width in the STFT caused by limited frequency resolution, the DMA achieves improved identification accuracy and robustness against higher levels of noise and the offset problem. Synthetic performance analysis and practical experimental results are provided to illustrate the advantages of the DMA.
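For contrast, the conventional MMSE matching that the DMA generalizes amounts to a sliding mean-squared-error search over the reference database (a minimal sketch; inputs are the extracted and reference ENF sequences as numpy arrays):

    import numpy as np

    def mmse_timestamp(enf_extracted, enf_reference):
        """Return the reference offset (candidate timestamp) minimizing the
        mean squared error against the extracted ENF sequence."""
        x = np.asarray(enf_extracted)
        ref = np.asarray(enf_reference)
        n = len(x)
        errors = [np.mean((ref[s:s + n] - x) ** 2)
                  for s in range(len(ref) - n + 1)]
        return int(np.argmin(errors))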
To deliver sample estimates with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary, but not sufficient, conditions: 1) all inclusion probabilities must be greater than zero in the target population to be sampled (if some sampling units have an inclusion probability of zero, a map accuracy assessment does not represent the entire target region depicted in the map to be assessed); 2) the inclusion probabilities must be a) knowable for nonsampled units and b) known for the units selected in the sample, since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas; if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne very high resolution images, where: 1) an original Categorical Variable Pair Similarity Index (proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and 2) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge; the combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session, the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, the collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence, and the quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ in related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with the OQIs claimed for SIAM™ by related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities, in accordance with the visionary goal of the Global Earth Observation System of Systems initiative and the QA4EO international guidelines.
Threat evaluation is concerned with estimating the intent, capability, and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether a target is threatening, and to what degree, is far from a trivial task. Expert operators normally have various support systems at their disposal that analyze incoming data and provide recommendations for action. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, and that they have a good understanding of the systems' inner workings, strengths, and limitations. To limit the negative effects of inadequate cooperation between operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration, and preparation phases of the threat evaluation process, supporting the user in analyzing the behavior of the system with respect to the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.
The main focus of this work is the estimation of a complex-valued signal assumed to have a sparse representation in an uncountable dictionary of signals. The dictionary elements are parameterized by a real-valued vector, and the available observations are corrupted with additive noise. By applying a linearization technique, the original model is recast as a constrained sparse perturbed model. The computation of the multiple parameters involved is addressed from a nonconvex optimization viewpoint. A cost function is defined, including an arbitrary Lipschitz-differentiable data fidelity term accounting for the noise statistics and an ℓ0-like penalty. A proximal algorithm is then employed to solve the resulting nonconvex and nonsmooth minimization problem. Experimental results illustrate the good practical performance of the proposed approach when applied to 2D spectrum analysis.
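The proximal step for an ℓ0 penalty is hard thresholding, so for the quadratic data-fidelity special case the algorithm reduces to the following sketch (the paper treats a more general Lipschitz-differentiable fidelity term and a parameterized dictionary; this shows only the core proximal iteration):

    import numpy as np

    def l0_proximal_gradient(A, y, lam, iters=200):
        """Proximal gradient for min_x 0.5*||Ax - y||^2 + lam*||x||_0:
        a gradient step on the smooth term, then hard thresholding."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - step * (A.T @ (A @ x - y))      # gradient step
            x = np.where(z ** 2 > 2 * lam * step, z, 0.0)  # prox of l0
        return x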
We consider the estimation of a scalar state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have full knowledge about the true value of the state to be estimated and about the values of all the measurements. However, the attacker has limited resources and can only manipulate up to l of the m measurements. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the "worst-case" expected cost against all possible manipulations by the attacker. We show that if the attacker can manipulate at least half the measurements (l ≥ m/2), then the optimal worst-case estimator should ignore all measurements and be based solely on the a priori information. We provide the explicit form of the optimal estimator when the attacker can manipulate less than half the measurements (l < m/2), in which case it is based on (m choose 2l) local estimators. We further prove that such an estimator can be reduced to simpler forms for two special cases, i.e., when the estimator is symmetric and monotone or when m = 2l + 1. Finally, we apply the proposed methodology to the case of Gaussian measurements.
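The l ≥ m/2 dichotomy can be illustrated with a deliberately simple resilient rule (an illustration only, not the paper's optimal estimator): when fewer than half the measurements can be corrupted, the median cannot be dragged outside the range of the honest values, while at l ≥ m/2 the measurements are best ignored in favor of the prior:

    import numpy as np

    def robust_scalar_estimate(measurements, l, prior_mean):
        """Median-based estimate, falling back on the prior when the
        attacker may control at least half the m measurements."""
        if 2 * l >= len(measurements):
            return prior_mean
        return float(np.median(measurements))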
The vast majority of today's critical infrastructure is supported by numerous feedback control loops, and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new, simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A, C) of the system, and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm, inspired by techniques in compressed sensing, to estimate the state of the plant despite the attacks. We give a theoretical characterization of the performance of this algorithm, and we show through numerical simulations that the method is promising and allows the state to be reconstructed accurately despite the attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds, and that the design of resilient output-feedback controllers can be reduced to the design of resilient state estimators.
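The decoding principle behind the estimation algorithm can be shown with a brute-force variant (the paper's algorithm is an efficient relaxation inspired by compressed sensing; this exhaustive version is only meant to make the idea concrete): hypothesize every set of l attacked sensors, solve for the initial state from the remaining rows of the observability matrix, and keep the hypothesis with the smallest residual:

    import numpy as np
    from itertools import combinations

    def secure_state_estimate(A, C, Y, l):
        """Y has shape (T, m): row t holds the m sensor readings at time t;
        honest sensors satisfy y_i(t) = C[i] @ A^t @ x0."""
        T, m = Y.shape
        blocks = [np.vstack([C[i] @ np.linalg.matrix_power(A, t)
                             for t in range(T)]) for i in range(m)]
        best_res, best_x0 = np.inf, None
        for attacked in combinations(range(m), l):
            keep = [i for i in range(m) if i not in attacked]
            H = np.vstack([blocks[i] for i in keep])
            z = np.concatenate([Y[:, i] for i in keep])
            x0, *_ = np.linalg.lstsq(H, z, rcond=None)
            res = np.linalg.norm(H @ x0 - z)
            if res < best_res:
                best_res, best_x0 = res, x0
        return best_x0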
This paper deals with the robust H∞ cyber-attack estimation problem for control systems subject to stochastic cyber-attacks and disturbances. The focus is on designing an H∞ filter that maximizes sensitivity to attacks while minimizing the effect of disturbances. The design requires not only disturbance attenuation but also that the residual remain as sensitive as possible to attacks while the effect of disturbances is minimized. A stochastic model of a control system subject to stochastic cyber-attacks satisfying a Markovian stochastic process is constructed, and we also present the stochastic attack models to which a control system may be exposed. Furthermore, applying an H∞ filtering technique based on linear matrix inequalities (LMIs), we obtain sufficient conditions ensuring that the filtering error dynamics are asymptotically stable and satisfy a prescribed ratio between cyber-attack sensitivity and disturbance sensitivity. Finally, the results are applied to the control of a quadruple-tank process (QTP) under a stochastic cyber-attack and a stochastic disturbance. The simulation results show that the designed filter is effective and feasible in practical applications.