Biblio
Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing, and where possible eliminating, the root causes of faults in cellular networks. Currently, the detection and diagnosis of faults or potential faults is still a slow, manual process carried out by network experts who analyze and correlate various pieces of network data, such as alarms, call traces, configuration management (CM) data, and key performance indicator (KPI) data, to arrive at the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated, evidence-based RCA. Compared to current common practice, our solution is faster because the entire RCA process is automated; it is cheaper because it requires few or no personnel to operate; and it improves efficiency by reusing domain knowledge during adaptive learning. Because it uses a probabilistic Bayesian classifier, it can work with incomplete data and can handle large datasets with complex probability combinations. Experimental results on stratified synthesized data validate the feasibility of using such a solution as a key part of self-healing (SH), especially in emerging self-organizing network (SON) based solutions in LTE-Advanced (LTE-A) and 5G.
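As a rough illustration of evidence-based Bayesian RCA (a minimal sketch, not the ARCA model itself), the following fragment scores candidate root causes with a naive Bayes classifier over symptom evidence. All causes, symptoms, and probabilities here are invented for illustration; note how unobserved symptoms are simply skipped, which is how such a classifier tolerates incomplete data:

```python
# Minimal sketch of evidence-based RCA with a naive Bayes classifier.
# All causes, symptoms, and probabilities are illustrative assumptions,
# not values from the ARCA paper.
priors = {"sleeping_cell": 0.02, "interference": 0.05, "misconfig": 0.03}
# P(symptom observed | root cause); symptoms come from alarms/KPI/CM data.
likelihood = {
    "sleeping_cell": {"zero_traffic": 0.95, "kpi_drop": 0.70, "cm_change": 0.05},
    "interference":  {"zero_traffic": 0.05, "kpi_drop": 0.90, "cm_change": 0.10},
    "misconfig":     {"zero_traffic": 0.20, "kpi_drop": 0.60, "cm_change": 0.85},
}

def posterior(evidence):
    """Evidence maps symptom -> True/False; missing symptoms are skipped,
    so the classifier still works with incomplete data."""
    scores = {}
    for cause, prior in priors.items():
        p = prior
        for symptom, present in evidence.items():
            ps = likelihood[cause][symptom]
            p *= ps if present else (1.0 - ps)
        scores[cause] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

print(posterior({"zero_traffic": True, "kpi_drop": True}))  # cm_change unobserved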
Ransomware techniques have evolved over time, with the most resilient attacks making data recovery practically impossible. This has driven countermeasures to shift from prevention towards recovery. In this paper, we instead model ransomware attacks from an infection-vector point of view. We follow the basic infection chain of crypto ransomware and use Bayesian network statistics to infer some of the most common ransomware infection vectors. We also employ attack and sensor nodes to capture uncertainty in the Bayesian network.
When supporting commercial or defense systems, a perennial challenge is providing effective test and diagnosis strategies to minimize downtime, thereby maximizing system availability. Potentially one of the most effective ways to minimize downtime is to detect and isolate as many faults in a system at one time as possible. This is referred to as the "multiple-fault diagnosis" problem. While several tools have been developed over the years to assist in performing multiple-fault diagnosis, considerable work remains to provide the best diagnosis possible. Recently, a new model for evolutionary computation has been developed called the "Factored Evolutionary Algorithm" (FEA). In this paper, we combine our prior work on deriving diagnostic Bayesian networks from static fault isolation manuals (FIMs) and fault trees with the FEA strategy to perform abductive inference as a way of addressing the multiple-fault diagnosis problem. We demonstrate the effectiveness of this approach on several networks derived from existing, real-world FIMs.
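To make the abductive-inference formulation concrete, here is a toy sketch that scores candidate fault sets under a noisy-OR diagnostic model and finds the most probable set by exhaustive search. The model, priors, and edge probabilities are invented, and FEA would replace the exhaustive loop with factored subpopulations searching the same posterior landscape, which is not reproduced here:

```python
import itertools

# Toy diagnostic model: each test outcome is explained by a noisy-OR over
# faults. Structure and probabilities are invented for illustration; a real
# FIM-derived network would be much larger.
faults = ["F1", "F2", "F3", "F4"]
prior = {"F1": 0.05, "F2": 0.03, "F3": 0.08, "F4": 0.02}
# P(test fails | fault present), one entry per (test, fault) edge.
edge = {("T1", "F1"): 0.9, ("T1", "F2"): 0.7,
        ("T2", "F2"): 0.8, ("T2", "F3"): 0.9,
        ("T3", "F3"): 0.6, ("T3", "F4"): 0.95}

def likelihood(observed, fault_set):
    """P(observed test results | hypothesized set of present faults)."""
    p = 1.0
    for test, failed in observed.items():
        p_pass = 1.0
        for f in fault_set:
            p_pass *= 1.0 - edge.get((test, f), 0.0)   # noisy-OR
        p *= (1.0 - p_pass) if failed else p_pass
    return p

def score(observed, fault_set):
    p = likelihood(observed, fault_set)
    for f in faults:
        p *= prior[f] if f in fault_set else 1.0 - prior[f]
    return p

# Exhaustive abduction over all fault subsets is fine at toy scale.
observed = {"T1": True, "T2": True, "T3": False}
best = max((frozenset(s) for r in range(len(faults) + 1)
            for s in itertools.combinations(faults, r)),
           key=lambda s: score(observed, s))
print(sorted(best))
```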
Supervisory control and data acquisition (SCADA) systems are the key driver for critical infrastructures and industrial facilities. Cyber-attacks on SCADA networks may cause equipment damage or even fatalities. Identifying risks in SCADA networks is therefore critical to ensuring the normal operation of these industrial systems. In this paper, we propose a Bayesian network-based cyber-security risk assessment model to dynamically and quantitatively assess the security risk level in SCADA networks. The major distinction of our work is that the proposed method learns model parameters from historical data and then improves assessment accuracy by incrementally learning from online observations. Furthermore, our method is able to assess the risk caused by unknown attacks. Simulation results demonstrate that the proposed approach is effective for SCADA security risk assessment.
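The incremental-learning idea can be sketched in its simplest form: a conditional probability kept as Beta (two-state Dirichlet) counts and updated with each online observation. The variable and its states below are illustrative assumptions, not the paper's model:

```python
# Minimal sketch of incremental Bayesian parameter learning: a CPT entry
# kept as Beta counts, updated as each new observation arrives. The
# two-state risk variable is an illustrative assumption.
class BernoulliCPT:
    def __init__(self, alpha=1.0, beta=1.0):      # Laplace prior
        self.alpha, self.beta = alpha, beta
    def update(self, attack_observed: bool):       # one online observation
        if attack_observed:
            self.alpha += 1
        else:
            self.beta += 1
    @property
    def p(self):                                   # posterior mean estimate
        return self.alpha / (self.alpha + self.beta)

cpt = BernoulliCPT()
for obs in [True, False, True, True]:              # historical + online data
    cpt.update(obs)
print(f"P(intrusion on this link) ~ {cpt.p:.2f}")
```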
Ransomware has become a growing threat since 2012, and the situation continues to worsen. The lack of security mechanisms and of security awareness is pushing systems into the mire of ransomware attacks. In this paper, a new framework called 2entFOX is proposed to detect high-survivable ransomware (HSR). To our knowledge, this framework can be considered one of the first in ransomware detection, given how little publicly available research exists in this field. We analyzed the behaviour of Windows ransomware and sought features that are particularly useful for detecting this type of malware with high detection accuracy and a low false positive rate. After extensive experimental analysis, we extracted 20 effective features; thanks to two highly efficient ones among them, we could assemble an appropriate feature set for HSR detection. After proposing an architecture based on a Bayesian belief network, a final evaluation is performed on known and unknown ransomware samples across six different scenarios. The results of these evaluations show the high accuracy of 2entFOX in detecting HSRs.
Event discovery from single pictures is a challenging problem that has attracted significant interest in the last decade. During this time, a number of interesting solutions have been proposed to tackle event discovery in still images. However, a large-scale benchmark image dataset for the evaluation and comparison of event discovery algorithms from single images is still lacking. To this end, in this paper we provide a large-scale, properly annotated and balanced dataset of 490,000 images, covering every aspect of 14 different types of social events, selected from among the most shared on social networks. Such a large-scale collection of event-related images is intended to become a powerful support tool for the multimedia analysis research community by providing a common benchmark for training, testing, validation, and comparison of existing and novel algorithms. In this paper, we describe in detail how the dataset was collected and organized and how it can benefit researchers in the multimedia analysis domain. Moreover, a deep learning based approach to event discovery from single images is introduced as one possible application of this dataset, in the belief that deep learning can prove to be a breakthrough in this research area as well. By providing this dataset, we hope to rally the research community in the multimedia and signal processing domains to advance this application.
In most adaptive video streaming systems, adaptation decisions rely solely on the available network resources. As the content of a video has a large influence on the perception of quality, our belief is that this is not sufficient. Thus, we have proposed a support service for content-aware video adaptation on mobile devices: the Video Adaptation Service (VAS). Based on the content of a streamed video, the adaptation process is improved by setting a target quality level for a session according to an objective video quality metric. In this work, we demonstrate VAS and its advantage of reduced data traffic, achieved by streaming only the lowest video representation necessary to reach a desired quality. By leveraging the content properties of a video stream, the system is able to keep a stable video quality and at the same time reduce the network load.
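The selection logic this implies can be sketched in a few lines: pick the lowest-bitrate representation whose predicted quality meets the session target. The representation set and quality scores below stand in for an objective quality metric and are invented, not VAS internals:

```python
# Sketch of content-aware representation selection: choose the cheapest
# representation that still reaches the target quality. Bitrates and the
# 0..1 quality scores are illustrative assumptions.
representations = [                # (bitrate kbps, predicted quality 0..1)
    (500, 0.62), (1000, 0.78), (2500, 0.90), (5000, 0.93),
]

def select(target_quality, available_bandwidth_kbps):
    feasible = [(b, q) for b, q in representations
                if q >= target_quality and b <= available_bandwidth_kbps]
    # Lowest bitrate that reaches the target saves traffic at stable quality.
    return min(feasible)[0] if feasible else representations[0][0]

print(select(0.85, 4000))   # -> 2500, not 5000: the quality target is already met
```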
TCP congestion control has been known for its crucial role in stabilizing the Internet and preventing congestion collapse. However, with the rapid advancement of networking technologies and the resulting emergence of challenging network environments such as data center networks (DCNs), the traditional TCP algorithm suffers several impairments. The shortcomings of TCP when deployed in DCNs have motivated the development of multiple new variants, including DCTCP, ICTCP, IA-TCP, and D2TCP, but all of these algorithms achieve their advantages at the cost of a number of drawbacks in the global Internet. Motivated by the belief that new innovations need to be established on a solid foundation with a thorough understanding of existing, well-established algorithms, we have been working towards a comprehensive analysis of conventional TCP algorithms in DCNs and other modern networks. This paper presents the first milestone towards the completion of our comparative study, in which we report results obtained by simulating multiple TCP variants: NewReno, Vegas, HighSpeed, Scalable, Westwood+, BIC, CUBIC, and YeAH, using a fat-tree architecture. Each protocol is evaluated in terms of queue length, number of dropped packets, average packet delay, and aggregate bandwidth as a percentage of the channel bandwidth.
We address the problem of stabilizing control for complex queueing systems with known parameters but an unobservable Markovian random environment. In such systems, the controller needs to assign servers to queues without having full information about the servers' states. A control challenge is to devise a policy that matches servers to queues in a way that takes state estimates into account, since the maximally attainable stability regions are non-trivial. To handle these situations, we model the system under given decision rules, using a Quasi-Birth-and-Death (QBD) structure to find a matrix-analytic expression for the stability bound. We use this formulation to illustrate how the stability region grows as the number of controller belief states increases.
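For readers unfamiliar with the matrix-analytic machinery, the sketch below checks the standard QBD mean-drift stability condition (stable iff the mean upward drift is less than the mean downward drift under the stationary phase distribution). The generator blocks are illustrative only and are not taken from the paper's belief-state model:

```python
import numpy as np

# Mean-drift stability check for a QBD process. The blocks are illustrative:
# A0 = level-up rates, A2 = level-down rates, A1 = within-level transitions,
# chosen so each row of A0 + A1 + A2 sums to zero.
A0 = np.array([[0.3, 0.0], [0.0, 0.5]])          # arrivals (level up)
A2 = np.array([[0.4, 0.0], [0.0, 0.6]])          # services (level down)
A1 = np.array([[-0.9, 0.2], [0.3, -1.4]])        # phase changes within a level

A = A0 + A1 + A2                                  # generator of the phase process
# Stationary phase distribution: solve pi A = 0 with pi summing to 1.
M = np.vstack([A.T, np.ones(len(A))])
b = np.zeros(len(A) + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(M, b, rcond=None)

drift_up, drift_down = pi @ A0.sum(1), pi @ A2.sum(1)
print("stable" if drift_up < drift_down else "unstable", drift_up, drift_down)
```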
IP tracking and cloaking are practices for identifying users that websites use legitimately to provide services and content tailored to particular users. However, it is believed that these practices are also used by malicious websites to avoid detection by anti-virus companies crawling the web to find malware. In addition, malicious websites are also believed to use IP tracking to deliver targeted malware based upon a history of previous visits by users. In this paper, we empirically investigate these beliefs and collect a large dataset of suspicious URLs in order to identify at what level IP tracking takes place: at the level of an individual address, or at the level of the network provider or organisation (network tracking). Our results show that IP tracking is used in a small subset of domains within our dataset, while no strong indication of network tracking was observed.
Large scale sensor networks are ubiquitous nowadays. An important objective of deploying sensors is to detect anomalies in the monitored system or infrastructure, which allows remedial measures to be taken to prevent failures, inefficiencies, and security breaches. Most existing sensor anomaly detection methods are local, i.e., they do not capture the global dependency structure of the sensors, nor do they perform well in the presence of missing or erroneous data. In this paper, we propose an anomaly detection technique for large scale sensor data that leverages relationships between sensors to improve robustness even when data is missing or erroneous. We develop a probabilistic graphical model-based global outlier detection technique that represents a sensor network as a pairwise Markov Random Field and uses graphical model inference to detect anomalies. We show our model is more robust than local models, and detects anomalies with 90% accuracy even when 50% of sensors are erroneous. We also build a synthetic graphical model generator that preserves statistical properties of a real data set to test our outlier detection technique at scale.
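The pairwise Markov Random Field formulation lends itself to a compact illustration. Below is a minimal loopy belief propagation sketch over four sensors in a cycle; the graph, potentials, and readings are invented for illustration and are not the paper's learned model. A missing reading gets a flat potential, which is how the graphical model copes with missing data:

```python
import numpy as np

# Minimal loopy belief propagation on a pairwise MRF with binary sensor
# states {0: normal, 1: anomalous}. All numbers are illustrative assumptions.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # 4 sensors in a cycle
pairwise = np.array([[0.9, 0.1],                   # neighbors tend to agree
                     [0.1, 0.9]])

def unary(reading, expected=20.0, scale=5.0):
    """Soft evidence: far-from-expected readings look anomalous; a missing
    reading (None) yields a flat potential."""
    if reading is None:
        return np.array([0.5, 0.5])
    z = abs(reading - expected) / scale
    p_anom = 1 - np.exp(-z)
    return np.array([1 - p_anom, p_anom])

readings = [21.0, 19.5, 55.0, None]                # sensor 2 wildly off, 3 missing
phi = [unary(r) for r in readings]

msgs = {(i, j): np.ones(2) for i, j in edges} | {(j, i): np.ones(2) for i, j in edges}
for _ in range(20):                                # sum-product iterations
    for (i, j) in list(msgs):
        incoming = phi[i].copy()
        for (k, l) in msgs:                        # product of other messages into i
            if l == i and k != j:
                incoming *= msgs[(k, i)]
        m = pairwise.T @ incoming
        msgs[(i, j)] = m / m.sum()

for i in range(4):
    belief = phi[i].copy()
    for (k, l) in msgs:
        if l == i:
            belief *= msgs[(k, i)]
    belief /= belief.sum()
    print(f"sensor {i}: P(anomalous) = {belief[1]:.2f}")
```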
The speech emotion recognition accuracy of prosody features and voice quality features declines as the SNR (signal-to-noise ratio) of speech signals decreases. In this paper, we propose novel sub-band spectral centroid weighted wavelet packet cepstral coefficients (W-WPCC) for robust speech emotion recognition. The W-WPCC feature is computed by combining sub-band energies with sub-band spectral centroids via a weighting scheme to generate noise-robust acoustic features. Deep Belief Networks (DBNs) are artificial neural networks with more than one hidden layer, which are first pre-trained layer by layer and then fine-tuned using the backpropagation algorithm. Well-trained deep neural networks are capable of modeling the complex and non-linear features of input training data and can better predict the probability distribution over classification labels. We extracted prosody features, voice quality features, and wavelet packet cepstral coefficients (WPCC) from the speech signals, combined them with W-WPCC, and fused them using DBNs. Experimental results on the Berlin emotional speech database show that the proposed fused feature with W-WPCC is better suited to speech emotion recognition under noisy conditions than other acoustic features, and that the proposed DBN feature-learning structure combined with W-WPCC improves emotion recognition performance over the conventional emotion recognition method.
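The weighting idea can be sketched as follows. This illustrative fragment weights each sub-band's log-energy by its normalized spectral centroid, so that flat, noise-dominated bands contribute less; it uses FFT sub-bands and a simplified cepstral step in place of the paper's wavelet packet decomposition, so the details are assumptions rather than the published algorithm:

```python
import numpy as np

# Sketch of centroid-weighted sub-band features (simplified stand-in for
# W-WPCC). Band edges and the final decorrelation step are assumptions.
def centroid_weighted_features(frame, sr=16000, n_bands=8):
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    bands = np.array_split(np.arange(len(spec)), n_bands)
    feats = []
    for idx in bands:
        energy = spec[idx].sum() + 1e-12
        centroid = (freqs[idx] * spec[idx]).sum() / energy   # sub-band centroid
        weight = centroid / (freqs[idx].mean() + 1e-12)      # normalized weight
        feats.append(np.log(energy) * weight)                # weighted log-energy
    # Cepstral-style decorrelation of the weighted band energies.
    return np.real(np.fft.ifft(feats))[: n_bands // 2]

frame = np.random.randn(400)          # one 25 ms frame at 16 kHz
print(centroid_weighted_features(frame))
```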
Heart rate monitoring has become increasingly popular in industry through mobile phones and wearable devices. However, current determination of heart rate through mobile applications suffers from high signal corruption during intensive physical exercise. In this paper, we present a novel technique for accurately determining heart rate during intensive motion by classifying PPG signals obtained from smartphones or wearable devices, combined with motion data obtained from accelerometer sensors. Our approach utilizes the Internet of Things (IoT) cloud connectivity of smartphones for selecting PPG signals using deep learning. The technique is validated using the TROIKA dataset and accurately predicts heart rate with a 10-fold cross-validation error margin of 4.88%.
This paper proposes a method for segmenting the nuclei of single/isolated and overlapping/touching immature white blood cells in microscopic images of B-lineage acute lymphoblastic leukemia (ALL), prepared from peripheral blood and bone marrow aspirate. We propose a deep belief network approach for the segmentation of these nuclei. Simulation results and comparison with some existing methods demonstrate the efficacy of the proposed method.
Abnormality detection is useful in reducing the amount of data to be processed manually by directing attention to a specific portion of the data. However, the selection of suitable features is important for the success of an abnormality detection system. Designing and selecting appropriate features is time-consuming and requires expensive domain knowledge and human labor. Further, it is very challenging to represent high-level concepts of abnormality in terms of raw input. Most existing abnormality detection systems use handcrafted feature detectors and are based on shallow architectures. In this work, we explore the Deep Belief Network for abnormality detection and compare its performance with a classic neural network in terms of the features learned and the accuracy of detecting abnormality. We also explore the set of features learned by each layer of the deep architecture and provide a simple and fast mechanism to visualize the features at higher layers. In addition, the effect of different activation functions on abnormality detection is compared. We observe that a deep learning based approach can be used for detecting abnormality, and that it performs better than a classic neural network in separating both distinct and nearly similar data.
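The layer-wise pre-training that underlies a Deep Belief Network can be sketched with a single Restricted Boltzmann Machine trained by one-step contrastive divergence (CD-1); the sizes, random data, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Tiny RBM trained with CD-1, the layer-wise pre-training step behind a DBN.
rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 3, 0.1
W = rng.normal(0, 0.01, (n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

data = rng.integers(0, 2, (100, n_vis)).astype(float)   # stand-in binary data
for epoch in range(50):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b_h)                      # hidden given data
        h0 = (rng.random(n_hid) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + b_v)                    # reconstruction
        ph1 = sigmoid(pv1 @ W + b_h)
        W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        b_v += lr * (v0 - pv1)
        b_h += lr * (ph0 - ph1)

# Stacking such RBMs layer by layer, then fine-tuning with backpropagation,
# yields the deep architecture explored for abnormality detection.
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print("mean reconstruction error:", np.abs(data - recon).mean())
```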
Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.
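The variance argument is easy to reproduce numerically. The Monte Carlo sketch below simulates fee-only block values under exponentially distributed inter-block times; the fee and arrival rates are illustrative assumptions, not fitted to Bitcoin:

```python
import random

# With fee-only rewards, a block's value grows with the (exponentially
# distributed) time since the last block, so block values are far more
# variable than a fixed block reward of the same mean.
random.seed(1)
fee_rate = 1.0            # fees accrue at 1 unit per unit time (assumption)
block_interval = 1.0      # mean inter-block time, exponential (assumption)

values = [fee_rate * random.expovariate(1.0 / block_interval)
          for _ in range(100_000)]
mean = sum(values) / len(values)
var = sum((v - mean) ** 2 for v in values) / len(values)
print(f"fee-only block value: mean={mean:.3f}, variance={var:.3f}")
# A fixed block reward with the same mean has variance 0; this spread is
# what makes forking a "wealthy" block to steal its fees attractive.
```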
Recent years have seen the rise of sophisticated attacks, including advanced persistent threats (APTs), which pose severe risks to organizations and governments. Additionally, new malware strains appear at a higher rate than ever before. Since many of these strains evade existing security products, traditional defenses deployed by enterprises today often fail to detect infections at an early stage. We address the problem of detecting early-stage APT infection by proposing a new framework based on belief propagation, inspired by graph theory. We demonstrate that our techniques perform well on two large datasets. We achieve high accuracy on two months of DNS logs released by Los Alamos National Lab (LANL), which include APT infection attacks simulated by LANL domain experts. We also apply our algorithms to 38TB of web proxy logs collected at the border of a large enterprise and identify hundreds of malicious domains overlooked by state-of-the-art security products.
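The graph intuition can be shown with a toy sketch: suspicion spreads between hosts and the domains they query, starting from a known-bad seed. The hosts, domains, and damping constant are invented, and the paper's belief propagation is more principled than this simple damped averaging:

```python
# Toy score propagation on a host-domain bipartite graph (a stand-in for
# belief propagation). All names and constants are illustrative assumptions.
queries = {                             # host -> domains from DNS/proxy logs
    "hostA": {"evil.example", "cdn.example"},
    "hostB": {"evil.example", "rare-c2.example"},
    "hostC": {"cdn.example"},
}
domain_score = {"evil.example": 1.0, "cdn.example": 0.0, "rare-c2.example": 0.0}
host_score = {h: 0.0 for h in queries}
hosts_of = {}
for h, doms in queries.items():
    for d in doms:
        hosts_of.setdefault(d, []).append(h)

for _ in range(10):                     # alternate host and domain updates
    for h, doms in queries.items():
        host_score[h] = sum(domain_score[d] for d in doms) / len(doms)
    for d, hs in hosts_of.items():
        avg = sum(host_score[h] for h in hs) / len(hs)
        seed = 1.0 if d == "evil.example" else 0.0
        domain_score[d] = max(seed, 0.5 * avg)   # damped; the seed stays pinned

# rare-c2.example, shared only with an infected host, ends well above the
# widely shared cdn.example (~0.33 vs ~0.2 at the fixed point).
print(domain_score)
```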
Dynamic firewalls with stateful inspection have added many security features beyond stateless traditional static filters, but they also need to be adaptive. In this paper, we design a framework for dynamic firewalls based on a probabilistic ontology using Multi-Entity Bayesian Networks (MEBN) logic. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated substructures, and can express a probability distribution over models of any consistent first-order theory. The motivation of our work is to prevent novel attacks, i.e., those attacks for which no signatures have yet been generated. The proposed framework has two main parts: the first is the data flow architecture, which extracts important connection-based features with the primary goal of explicit rule inclusion into the firewall's rule base; the second is the knowledge flow architecture, which uses a semantic threat graph as well as reasoning under uncertainty to provide a futuristic threat prevention technique in dynamic firewalls.
This paper presents the application of fusion methods to a visual surveillance scenario. The range of relevant features for re-identifying vehicles is discussed, along with methods for fusing the probabilistic estimates derived from these features. In particular, two statistical parametric fusion methods are considered: Bayesian networks and the Dempster-Shafer approach. The main contribution of this paper is the development of a metric that allows direct comparison of the benefits of the two methods. This is achieved by generalising the Kelly betting strategy to accommodate a variable total stake for each sample, subject to a fixed expected (mean) stake. This metric provides a way to quantify the extra information provided by the Dempster-Shafer method in comparison to a Bayesian fusion approach.
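The betting-based comparison can be illustrated with plain Kelly betting at even odds; the paper's variable-total-stake generalization and the Dempster-Shafer specifics are not reproduced here, and the simulated trials and estimators are assumptions:

```python
import math, random

# Score two probability estimators by the log-wealth growth of a Kelly
# bettor who trusts them: better-calibrated estimates earn faster growth,
# giving a single number for comparison.
random.seed(7)

def kelly_log_growth(probs, outcomes):
    """Average log-wealth growth at even odds, staking |2p - 1| on the side
    the estimator favors."""
    g = 0.0
    for p, y in zip(probs, outcomes):
        f = 2 * p - 1                    # signed Kelly fraction, even odds
        ret = f if y == 1 else -f        # win if the favored side occurs
        g += math.log(1 + ret)
    return g / len(outcomes)

# Simulated re-identification trials: true match probability 0.7.
truth = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
sharp = [0.7] * len(truth)               # well-calibrated estimator
vague = [0.55] * len(truth)              # under-confident estimator
print(kelly_log_growth(sharp, truth), kelly_log_growth(vague, truth))
```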