Biblio

Found 203 results

Filters: Keyword is probability
2017-12-20
Fihri, W. F., Ghazi, H. E., Kaabouch, N., Majd, B. A. E..  2017.  Bayesian decision model with trilateration for primary user emulation attack localization in cognitive radio networks. 2017 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.

Primary user emulation (PUE) attack is one of the main threats affecting cognitive radio (CR) networks. The PUE can forge the same signal as the real primary user (PU) in order to use the licensed channel and cause denial of service (DoS). Therefore, it is important to locate the position of the PUE in order to stop and avoid any further attack. Several techniques have been proposed for localization, including received signal strength indication (RSSI), triangulation, and physical network layer coding. However, the area surrounding the real PU is always affected by uncertainty. This uncertainty can be described as a loss (cost) function and a conditional probability to be taken into consideration when deciding whether a PU/PUE is the real PU or not. In this paper, we propose a combination of a Bayesian model and a trilateration technique. In the first part, a trilateration technique is used to obtain a good approximation of the PUE position, making use of the RSSI between the anchor nodes and the PU/PUE. In the second part, Bayesian decision theory is used to assess the legitimacy of the PU based on the loss function and the conditional probability, helping to determine the existence of the PUE attacker in the uncertainty area.
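
A minimal sketch of the two stages described above, assuming a log-distance path-loss model for converting RSSI to distance, a least-squares trilateration step, and an illustrative loss matrix for the Bayes-risk decision; all constants and the posterior value are assumptions, not values from the paper.

```python
# Hedged sketch: RSSI -> distance via an assumed log-distance path-loss model,
# least-squares trilateration, then a Bayes-risk decision with an assumed
# loss matrix. Constants below are illustrative, not from the paper.
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-30.0, path_loss_exp=2.5):
    """Invert the (assumed) log-distance path-loss model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 anchors and estimated distances."""
    a, d = np.asarray(anchors, float), np.asarray(dists, float)
    A = 2 * (a[1:] - a[0])
    b = d[0] ** 2 - d[1:] ** 2 + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def bayes_decide(post_pu, loss_false_accept=10.0, loss_false_reject=1.0):
    """Declare PU iff the expected loss of accepting is the smaller one."""
    risk_accept = (1 - post_pu) * loss_false_accept  # accepting an attacker
    risk_reject = post_pu * loss_false_reject        # rejecting the real PU
    return "PU" if risk_accept < risk_reject else "PUE"

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
dists = [rssi_to_distance(r) for r in (-60.0, -72.0, -70.0, -78.0)]
print(trilaterate(anchors, dists), bayes_decide(post_pu=0.3))
```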

Xiang, Z., Cai, Y., Yang, W., Sun, X., Hu, Y..  2017.  Physical layer security of non-orthogonal multiple access in cognitive radio networks. 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). :1–6.

This paper investigates physical layer security of non-orthogonal multiple access (NOMA) in cognitive radio (CR) networks. The techniques of NOMA and CR have greatly improved spectrum efficiency in traditional networks. Because they improve the spectrum in different ways, NOMA and CR can be combined into a CR NOMA network, with great potential for improving spectrum efficiency. However, physical layer security in a CR NOMA network differs from that in a NOMA-only or CR-only network. We study physical layer security in an underlay CR NOMA network. First, the wiretap network model is constructed according to the technical characteristics of NOMA and CR. In addition, new exact and asymptotic expressions for the security outage probability are derived and confirmed by simulation. Finally, we study the effect of several critical factors on the security outage probability.
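
The paper derives closed-form expressions for the CR NOMA setting; as a generic point of reference, a security (secrecy) outage probability can also be estimated by Monte Carlo. The sketch below assumes simple Rayleigh fading and toy SNRs, not the paper's system model.

```python
# Hedged sketch: Monte Carlo estimate of a secrecy outage probability
# P(Cs < Rs) with Cs = [log2(1+SNR_user) - log2(1+SNR_eve)]^+ under assumed
# Rayleigh fading; this is generic, not the paper's CR NOMA model.
import numpy as np

rng = np.random.default_rng(0)
n, snr_user_db, snr_eve_db, rate_s = 1_000_000, 10.0, 0.0, 0.5

g_user = rng.exponential(scale=10 ** (snr_user_db / 10), size=n)  # |h|^2 SNR
g_eve = rng.exponential(scale=10 ** (snr_eve_db / 10), size=n)
c_s = np.maximum(0.0, np.log2(1 + g_user) - np.log2(1 + g_eve))
print("estimated outage probability:", np.mean(c_s < rate_s))
```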

2017-12-12
Kimmig, A., Memory, A., Miller, R. J., Getoor, L..  2017.  A Collective, Probabilistic Approach to Schema Mapping. 2017 IEEE 33rd International Conference on Data Engineering (ICDE). :921–932.

We propose a probabilistic approach to the problem of schema mapping. Our approach is declarative, scalable, and extensible. It builds upon recent results in both schema mapping and probabilistic reasoning and contributes novel techniques in both fields. We introduce the problem of mapping selection, that is, choosing the best mapping from a space of potential mappings, given both metadata constraints and a data example. As selection has to reason holistically about the inputs and the dependencies between the chosen mappings, we define a new schema mapping optimization problem which captures interactions between mappings. We then introduce Collective Mapping Discovery (CMD), our solution to this problem using state-of-the-art probabilistic reasoning techniques, which allows for inconsistencies and incompleteness. Using hundreds of realistic integration scenarios, we demonstrate that the accuracy of CMD exceeds that of metadata-only approaches by more than 33% even for small data examples, and that CMD routinely finds perfect mappings even if a quarter of the data is inconsistent.

2017-03-08
Jin, Y., Zhu, H., Shi, Z., Lu, X., Sun, L..  2015.  Cryptanalysis and improvement of two RFID-OT protocols based on quadratic residues. 2015 IEEE International Conference on Communications (ICC). :7234–7239.

Ownership transfer of an RFID tag means that control of a tagged product changes hands along the supply chain. Recently, Doss et al. proposed two secure RFID tag ownership transfer (RFID-OT) protocols based on quadratic residues. However, we find that they are vulnerable to the desynchronization attack. The attack is probabilistic: with the parameters adopted in the protocols, the success probability is 93.75%. We also show that the use of the pseudonym of the tag h(TID) and the new secret key KTID is not feasible. In order to solve these problems, we propose improved schemes. Security analysis shows that the new protocols can resist the desynchronization attack and other attacks. With optimized performance, the new protocols are more practical and feasible for large-scale deployment of RFID tags.

Boykov, Y., Isack, H., Olsson, C., Ayed, I. B..  2015.  Volumetric Bias in Segmentation and Reconstruction: Secrets and Solutions. 2015 IEEE International Conference on Computer Vision (ICCV). :1769–1777.

Many standard optimization methods for segmentation and reconstruction compute ML model estimates for appearance or geometry of segments, e.g. Zhu-Yuille [23], Torr [20], Chan-Vese [6], GrabCut [18], Delong et al. [8]. We observe that the standard likelihood term in these formulations corresponds to a generalized probabilistic K-means energy. In learning it is well known that this energy has a strong bias to clusters of equal size [11], which we express as a penalty for KL divergence from a uniform distribution of cardinalities. However, this volumetric bias has been mostly ignored in computer vision. We demonstrate significant artifacts in standard segmentation and reconstruction methods due to this bias. Moreover, we propose binary and multi-label optimization techniques that either (a) remove this bias or (b) replace it by a KL divergence term for any given target volume distribution. Our general ideas apply to continuous or discrete energy formulations in segmentation, stereo, and other reconstruction problems.
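
The KL-divergence penalty used to express this equal-size bias is straightforward to compute; a sketch, with toy labels standing in for a segmentation:

```python
# Hedged sketch: KL divergence of the empirical segment-cardinality
# distribution from a target (uniform by default); toy labels below.
import numpy as np

def volume_kl_penalty(labels, k, target=None):
    counts = np.bincount(labels, minlength=k).astype(float)
    p = counts / counts.sum()                  # empirical cardinalities
    q = np.full(k, 1.0 / k) if target is None else np.asarray(target, float)
    nz = p > 0                                 # 0 * log(0/q) = 0 by convention
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

labels = np.array([0] * 90 + [1] * 5 + [2] * 5)  # very unequal segments
print(volume_kl_penalty(labels, k=3))            # large penalty vs uniform
```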

2017-03-07
Gorton, D..  2015.  Modeling Fraud Prevention of Online Services Using Incident Response Trees and Value at Risk. 2015 10th International Conference on Availability, Reliability and Security. :149–158.

Authorities like the Federal Financial Institutions Examination Council in the US and the European Central Bank in Europe have stepped up their expected minimum security requirements for financial institutions, including the requirements for risk analysis. In a previous article, we introduced a visual tool and a systematic way to estimate the probability of a successful incident response process, which we called an incident response tree (IRT). In this article, we present several scenarios using the IRT which could be used in a risk analysis of online financial services concerning fraud prevention. By minimizing the problem of underreporting, we are able to calculate the conditional probabilities of prevention, detection, and response in the incident response process of a financial institution. We also introduce a quantitative model for estimating expected loss from fraud, and conditional fraud value at risk, which enables a direct comparison of risk among online banking channels in a multi-channel environment.
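
A sketch of the kind of quantity such a model produces: Monte Carlo expected fraud loss and a conditional value at risk derived from stage-wise prevention, detection, and response probabilities. All rates and loss amounts below are assumptions, not figures from the paper.

```python
# Hedged sketch: expected fraud loss and conditional value at risk (CVaR)
# from assumed stage-wise incident-response probabilities. All numbers are
# illustrative, not figures from the paper.
import numpy as np

rng = np.random.default_rng(1)
p_prevent, p_detect, p_respond = 0.90, 0.70, 0.80      # conditional per stage
p_loss = (1 - p_prevent) * (1 - p_detect * p_respond)  # fraud slips through

n_sims, attempts_per_year = 20_000, 500
losses = np.zeros(n_sims)
for i in range(n_sims):
    n_success = rng.binomial(attempts_per_year, p_loss)
    losses[i] = rng.lognormal(mean=7.0, sigma=1.0, size=n_success).sum()

var99 = np.quantile(losses, 0.99)
cvar99 = losses[losses >= var99].mean()
print(f"E[loss]={losses.mean():.0f}  VaR99={var99:.0f}  CVaR99={cvar99:.0f}")
```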

2017-02-27
Gonzalez-Longatt, F., Carmona-Delgado, C., Riquelme, J., Burgos, M., Rueda, J. L..  2015.  Risk-based DC security assessment for future DC-independent system operator. 2015 International Conference on Energy Economics and Environment (ICEEE). :1–8.

The use of multi-terminal HVDC to integrate wind power coming from the North Sea opens the door for a new transmission system model, the DC-Independent System Operator (DC-ISO). DC-ISO will face highly stressed and varying conditions that require new risk assessment tools to ensure security of supply. This paper proposes a novel risk-based static security assessment methodology named risk-based DC security assessment (RB-DCSA). It combines a probabilistic approach to include uncertainties and a fuzzy inference system to quantify the systemic and individual component risk associated with operational scenarios considering uncertainties. The proposed methodology is illustrated using a multi-terminal HVDC system where the variability of wind speed at the offshore wind farms is included.

Orojloo, H., Azgomi, M. A..  2015.  Evaluating the complexity and impacts of attacks on cyber-physical systems. 2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST). :1–8.

In this paper, a new method for quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks. It also takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers is modeled using an attack graph, and the consequence estimation process of an attack is investigated using a process model. The security attributes and the special parameters involved in the security analysis of CPSs have been identified and considered. The quantitative evaluation is carried out using the probability of attacks, the time-to-shutdown of the system, and security risks. The validation phase of the proposed model is performed as a case study by applying it to a boiling water power plant and estimating the suitable security measures.

2017-02-21
Zhao Yijiu, Long Ling, Zhuang Xiaoyan, Dai Zhijian.  2015.  "Model calibration for compressive sampling system with non-ideal lowpass filter". 2015 12th IEEE International Conference on Electronic Measurement & Instruments (ICEMI). 02:808-812.

This paper presents a model calibration algorithm for the modulated wideband converter (MWC) with a non-ideal analog lowpass filter (LPF). The presented technique uses a test signal to estimate the finite impulse response (FIR) of the practical non-ideal LPF, and then a digital compensation filter is designed to calibrate the approximated FIR filter in the digital domain. At the cost of a moderate oversampling rate, the calibrated filter performs as an ideal LPF. The calibrated model uses the MWC system with the non-ideal LPF to capture samples of the underlying signal, and then the samples are filtered by the digital compensation filter. Experimental results indicate that, without making any changes to the architecture of the MWC, the proposed algorithm can obtain samples equivalent to those of a standard MWC with an ideal LPF, and the signal can be reconstructed with overwhelming probability.
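
A sketch of the identify-then-compensate idea in the digital domain, assuming a toy non-ideal filter: the FIR is estimated from a known test signal by least squares, and the compensation filter is a delayed least-squares inverse. The MWC front end itself is omitted, and the filter lengths and delay are assumptions.

```python
# Hedged sketch: (1) estimate the non-ideal LPF's FIR from a known test
# signal by least squares; (2) design a delayed least-squares inverse as the
# digital compensation filter. Toy filter, lengths, and delay are assumed.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
h_true = np.array([0.05, 0.25, 0.4, 0.25, 0.05])  # toy non-ideal LPF
x = rng.standard_normal(4096)                      # known test signal
y = np.convolve(x, h_true)[: len(x)]               # measured response

# 1) y = X h, where X is the convolution matrix of the test signal
L = 9
X = toeplitz(x, np.r_[x[0], np.zeros(L - 1)])
h_est = np.linalg.lstsq(X, y, rcond=None)[0]

# 2) compensation filter g such that h_est * g ~ a delayed unit impulse
M, delay = 64, 32
H = toeplitz(np.r_[h_est, np.zeros(M - 1)], np.r_[h_est[0], np.zeros(M - 1)])
d = np.zeros(L + M - 1)
d[delay] = 1.0
g = np.linalg.lstsq(H, d, rcond=None)[0]
print("equalization residual:", np.linalg.norm(np.convolve(h_est, g) - d))
```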

2017-02-14
M. Grottke, A. Avritzer, D. S. Menasché, J. Alonso, L. Aguiar, S. G. Alvarez.  2015.  "WAP: Models and metrics for the assessment of critical-infrastructure-targeted malware campaigns". 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE). :330-335.

Ensuring system survivability in the wake of advanced persistent threats is a major challenge facing the security community in protecting critical infrastructure. In this paper, we define metrics and models for the assessment of coordinated massive malware campaigns targeting critical infrastructure sectors. First, we develop an analytical model that allows us to capture the effect of neighborhood on different metrics (infection probability and contagion probability). Then, we assess the impact of putting operational but possibly infected nodes into quarantine. Finally, we study the implications of scanning nodes for early detection of malware (e.g., worms), accounting for false positives and false negatives. Evaluating our methodology using a small four-node topology, we find that malware infections can be effectively contained by using quarantine and appropriate rates of scanning for soft impacts.
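
A Monte Carlo sketch of the quarantine-plus-scanning dynamic on a four-node topology, with scanning subject to false positives and false negatives; the topology, rates, and horizon are assumptions, and the paper's analytical model is not reproduced here.

```python
# Hedged sketch: discrete-time contagion on an assumed four-node topology
# with quarantine and imperfect scanning (false negatives p_fn, false
# positives p_fp). Rates and horizon are illustrative.
import numpy as np

rng = np.random.default_rng(3)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # assumed 4-node ring
beta, scan, p_fn, p_fp = 0.3, 0.5, 0.1, 0.02
steps, runs = 30, 10_000

final_infected = []
for _ in range(runs):
    infected, quarantined = {0}, set()
    for _ in range(steps):
        # contagion from infected neighbours to operational nodes
        new = {j for i in infected for j in adj[i]
               if j not in infected | quarantined and rng.random() < beta}
        infected |= new
        # scanning: detections (minus false negatives) and false positives
        for node in range(4):
            if node in quarantined or rng.random() >= scan:
                continue
            if node in infected and rng.random() >= p_fn:
                quarantined.add(node)
                infected.discard(node)
            elif node not in infected and rng.random() < p_fp:
                quarantined.add(node)
    final_infected.append(len(infected))
print("mean operational-but-infected nodes:", np.mean(final_infected))
```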

C. H. Hsieh, C. M. Lai, C. H. Mao, T. C. Kao, K. C. Lee.  2015.  "AD2: Anomaly detection on active directory log data for insider threat monitoring". 2015 International Carnahan Conference on Security Technology (ICCST). :287-292.

In cyber security monitoring, it is not rare that what you see cannot be taken at face value. Due to various camouflage tricks, such as packing or virtual private networks (VPNs), detecting advanced persistent threats (APTs) with only signature-based malware detection systems becomes more and more intractable. On the other hand, by carefully modeling the daily behavioral routines of users, the probability of one account generating certain operations can be estimated and used in anomaly detection. To the best of our knowledge, this project is the first to propose a behavioral analytic framework dedicated to analyzing Active Directory domain service logs and monitoring potential insider threats. Experiments on a real dataset not only show that the proposed idea explores a new feasible direction for cyber security monitoring, but also give a guideline on how to deploy this framework in various environments.
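
A sketch of the baseline idea the abstract describes, estimating the probability of an account generating certain operations: a Laplace-smoothed categorical model per account and a negative log-likelihood session score. The event names, smoothing, and data are assumptions.

```python
# Hedged sketch: per-account behavioural baseline over assumed AD event
# types; sessions are scored by average negative log-likelihood.
import math
from collections import Counter

def fit_baseline(history, alpha=1.0):
    """Laplace-smoothed categorical model of one account's operations."""
    vocab = set(history)
    counts = Counter(history)
    total = len(history) + alpha * len(vocab)
    return {op: (counts[op] + alpha) / total for op in vocab}

def session_score(session, model, floor=1e-6):
    """Average negative log-likelihood; higher = more anomalous."""
    return -sum(math.log(model.get(op, floor)) for op in session) / len(session)

history = ["logon", "file_read"] * 200 + ["logon", "pwd_change"] * 5
model = fit_baseline(history)
print(session_score(["logon", "file_read"], model))        # routine, low
print(session_score(["pwd_change", "acl_change"], model))  # unusual, high
```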

2015-05-06
Bi Hong, Wan Choi.  2014.  Asymptotic analysis of failed recovery probability in a distributed wireless storage system with limited sum storage capacity. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :6459-6463.

In distributed wireless storage systems, the failed recovery probability depends not only on wireless channel conditions but also on the storage size of each distributed storage node. For efficient utilization of limited storage capacity, we asymptotically analyze the failed recovery probability of a distributed wireless storage system with a sum storage capacity constraint as the signal-to-noise ratio goes to infinity, and find the optimal storage allocation strategy across distributed storage nodes in terms of the asymptotic failed recovery probability. It is also shown that when the number of storage nodes is sufficiently large, the storage size required at each node need not be large to achieve a high exponential order of the failed recovery probability.

Cook, A., Wunderlich, H.-J..  2014.  Diagnosis of multiple faults with highly compacted test responses. Test Symposium (ETS), 2014 19th IEEE European. :1-6.

Defects cluster, and the probability of a multiple fault is significantly higher than just the product of the single fault probabilities. While this observation is beneficial for high yield, it complicates fault diagnosis. Multiple faults will occur especially often during process learning, yield ramp-up and field return analysis. In this paper, a logic diagnosis algorithm is presented which is robust against multiple faults and which is able to diagnose multiple faults with high accuracy even on compressed test responses as they are produced in embedded test and built-in self-test. The developed solution takes advantage of the linear properties of a MISR compactor to identify a set of faults likely to produce the observed faulty signatures. Experimental results show an improvement in accuracy of up to 22% over traditional logic diagnosis solutions suitable for comparable compaction ratios.

Kishore, N., Kapoor, B..  2014.  An efficient parallel algorithm for hash computation in security and forensics applications. Advance Computing Conference (IACC), 2014 IEEE International. :873-877.

Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
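
The chain-breaking idea generalizes to any chunked scheme: hash fixed-size chunks independently, then hash the concatenated digests. The paper implements its variant in OpenMP; the sketch below is a generic Merkle-style analogue using hashlib and a process pool, with chunk size and pool size as assumptions, not the authors' exact construction.

```python
# Hedged sketch: hash fixed-size chunks in parallel, then hash the
# concatenated digests (a generic Merkle-style variant, not the paper's
# exact SHA-1 modification). Chunk and pool sizes are assumed.
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 1 << 20  # 1 MiB chunks (assumed)

def sha1(block: bytes) -> bytes:
    return hashlib.sha1(block).digest()

def parallel_hash(data: bytes, workers: int = 8) -> str:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        digests = list(pool.map(sha1, chunks))     # independent chunk hashes
    return hashlib.sha1(b"".join(digests)).hexdigest()  # hash of hashes

if __name__ == "__main__":
    print(parallel_hash(b"\x00" * (64 * CHUNK)))
```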

Carter, K.M., Idika, N., Streilein, W.W..  2014.  Probabilistic Threat Propagation for Network Security. Information Forensics and Security, IEEE Transactions on. 9:1394-1405.

Techniques for network security analysis have historically focused on the actions of the network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly based on their association with other known malicious nodes. This methodology is highly prevalent in the graph analytics world, however, and is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of probabilistic threat propagation on the tasks of detecting botnets and malicious web destinations.
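
A sketch of damped threat propagation from seed nodes over an adjacency matrix, with known-malicious nodes clamped at probability 1. The paper's specific constraint for cancelling cyclic propagation is simplified away here, and the graph and damping weight are toy assumptions.

```python
# Hedged sketch: iterative threat propagation from seed nodes; seeds stay at
# probability 1 and each node averages its neighbours' threat with damping.
# The paper's cycle-cancelling constraint is not reproduced.
import numpy as np

def propagate_threat(adj, seeds, weight=0.5, iters=50):
    n = adj.shape[0]
    p = np.zeros(n)
    p[list(seeds)] = 1.0
    deg = adj.sum(axis=1).clip(min=1)
    for _ in range(iters):
        p = weight * (adj @ p) / deg   # damped average of neighbour threat
        p[list(seeds)] = 1.0           # known-malicious nodes stay at 1
    return p

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], float)
print(propagate_threat(adj, seeds={0}))
```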

2015-05-05
Marttinen, A., Wyglinski, A.M., Jantti, R..  2014.  Moving-target defense mechanisms against source-selective jamming attacks in tactical cognitive radio MANETs. Communications and Network Security (CNS), 2014 IEEE Conference on. :14-20.

In this paper, we propose techniques for combating source-selective jamming attacks in tactical cognitive MANETs. Secure, reliable and seamless communications are important for facilitating tactical operations. Selective jamming attacks pose a serious security threat to the operations of wireless tactical MANETs, since selective strategies possess the potential to completely isolate a portion of the network from other nodes without giving a clear indication of a problem. Our proposed mitigation techniques use the concept of address manipulation; they differ from other techniques presented in the open literature in that they employ a decentralized architecture rather than a centralized framework and do not require any extra overhead. Experimental results show that the proposed techniques enable communications in the presence of source-selective jamming attacks. When a source-selective jammer blocks transmissions completely, the proposed flipped-address mechanism increases the expected number of required transmission attempts by only one. The probability that our second approach, random address assignment, fails to resolve the correct source MAC address can be as small as 10^-7 when using accurate parameter selection.

Wei Peng, Feng Li, Chin-Tser Huang, Xukai Zou.  2014.  A moving-target defense strategy for Cloud-based services with heterogeneous and dynamic attack surfaces. Communications (ICC), 2014 IEEE International Conference on. :804-809.

Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.
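
The S-shaped compromise probability can be written as a generalized logistic (Richards) function of the attacker's accumulated intelligence; a sketch with assumed parameters:

```python
# Hedged sketch: compromise probability as a generalized logistic (Richards)
# curve in the attacker's accumulated intelligence; parameters are assumed.
import numpy as np

def p_compromise(intel, lower=0.0, upper=1.0, growth=1.5, shift=4.0, nu=1.0):
    """lower + (upper-lower) / (1 + exp(-growth*(intel-shift)))**(1/nu)"""
    return lower + (upper - lower) / (
        1 + np.exp(-growth * (intel - shift))) ** (1 / nu)

intel = np.linspace(0, 10, 6)  # intelligence accumulated about one instance
print(np.round(p_compromise(intel), 3))  # a migration would reset this curve
```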

Carroll, T.E., Crouse, M., Fulp, E.W., Berenhaut, K.S..  2014.  Analysis of network address shuffling as a moving target defense. Communications (ICC), 2014 IEEE International Conference on. :701-706.

Address shuffling is a type of moving target defense that prevents an attacker from reliably contacting a system by periodically remapping network addresses. Although limited testing has demonstrated it to be effective, little research has been conducted to examine the theoretical limits of address shuffling. As a result, it is difficult to understand how effective shuffling is and under what circumstances it is a viable moving target defense. This paper introduces probabilistic models that can provide insight into the performance of address shuffling. These models quantify the probability of attacker success in terms of network size, quantity of addresses scanned, quantity of vulnerable systems, and the frequency of shuffling. Theoretical analysis shows that shuffling is an acceptable defense if there is a small population of vulnerable systems within a large network address space; however, shuffling has a cost for legitimate users. These results are also shown empirically using simulation and actual traffic traces.
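
A simplified model of this trade-off: if the attacker scans k of N addresses and v systems are vulnerable, static addressing makes the probes draws without replacement, while per-probe remapping makes them independent. This illustrates the parameters involved, not the paper's exact formulation.

```python
# Hedged sketch: attacker success probability with and without shuffling.
# Static scan = sampling without replacement; full reshuffling between
# probes = independent trials. A simplified model, not the paper's.
from math import comb

def p_hit_static(N, v, k):
    """P(at least one vulnerable host scanned), no shuffling."""
    return 1 - comb(N - v, k) / comb(N, k)

def p_hit_shuffled(N, v, k):
    """Same, if addresses are remapped between every probe."""
    return 1 - (1 - v / N) ** k

N, v, k = 65_536, 10, 5_000
print(p_hit_static(N, v, k), p_hit_shuffled(N, v, k))
```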

Juzi Zhao, Subramaniam, S., Brandt-Pearce, M..  2014.  Intradomain and interdomain QoT-aware RWA for translucent optical networks. Optical Communications and Networking, IEEE/OSA Journal of. 6:536-548.

Physical impairments in long-haul optical networks mandate that optical signals be regenerated within the (so-called translucent) network. Being expensive devices, regenerators are expected to be allocated sparsely and must be judiciously utilized. Next-generation optical-transport networks will include multiple domains with diverse technologies, protocols, granularities, and carriers. Because of confidentiality and scalability concerns, the scope of network-state information (e.g., topology, wavelength availability) may be limited to within a domain. In such networks, the problem of routing and wavelength assignment (RWA) aims to find an adequate route and wavelength(s) for lightpaths carrying end-to-end service demands. Some state information may have to be explicitly exchanged among the domains to facilitate the RWA process. The challenge is to determine which information is the most critical and make a wise choice for the path and wavelength(s) using the limited information. Recently, a framework for multidomain path computation called backward-recursive path-computation (BRPC) was standardized by the Internet Engineering Task Force. In this paper, we consider the RWA problem for connections within a single domain and interdomain connections so that the quality of transmission (QoT) requirement of each connection is satisfied, and the network-level performance metric of blocking probability is minimized. Cross-layer heuristics that are based on dynamic programming to effectively allocate the sparse regenerators are developed, and extensive simulation results are presented to demonstrate their effectiveness.

2015-05-04
Ya Zhang, Yi Wei, Jianbiao Ren.  2014.  Multi-touch Attribution in Online Advertising with Survival Theory. Data Mining (ICDM), 2014 IEEE International Conference on. :687-696.

Multi-touch attribution, which allows distributing the credit to all related advertisements based on their corresponding contributions, has recently become an important research topic in digital advertising. Traditionally, rule-based attribution models have been used in practice. The drawback of such rule-based models lies in the fact that the rules are not derived from the data but only based on simple intuition. With the ever-enhanced capability to track advertisements and users' interactions with them, data-driven multi-touch attribution models, which attempt to infer contributions from user interaction data, have become an important research direction. We here propose a new data-driven attribution model based on survival theory. By adopting a probabilistic framework, one key advantage of the proposed model is that it is able to remove the presentation biases inherent in most of the other attribution models. In addition to modeling attribution, the proposed model can also predict a user's conversion probability. We validate the proposed method with a real-world data set obtained from an operational commercial advertising monitoring company. Experimental results show that the proposed method is quite promising in both conversion prediction and attribution.
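
A sketch of the survival-theory view under an assumed additive hazard model: each ad touch adds its channel hazard, the conversion probability follows from the survivor function, and credit is shared by hazard contribution. The hazards and window below are invented for illustration; the paper fits its model from data.

```python
# Hedged sketch: additive-hazard attribution; channel hazards and the
# exposure window are invented for illustration, not the paper's fit.
import math
from collections import Counter

channel_hazard = {"search": 0.030, "display": 0.008, "social": 0.015}  # /day

def conversion_and_credit(touches, horizon_days=7.0):
    counts = Counter(touches)
    total = sum(n * channel_hazard[c] for c, n in counts.items())
    p_convert = 1.0 - math.exp(-total * horizon_days)  # 1 - S(T)
    credit = {c: n * channel_hazard[c] / total for c, n in counts.items()}
    return p_convert, credit

p, credit = conversion_and_credit(["display", "search", "social", "search"])
print(round(p, 3), credit)
```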

Jagdale, B.N., Bakal, J.W..  2014.  Synergetic cloaking technique in wireless network for location privacy. Industrial and Information Systems (ICIIS), 2014 9th International Conference on. :1-6.

Mobile users access location services from a location-based server, and while doing so, their privacy is at risk: the server has access to all details about the user, for example, recently visited places and the types of information accessed. We present a synergetic technique to safeguard the location privacy of users accessing location-based services via mobile devices. Mobile devices can form ad hoc networks to hide a user's identity and position. The user who requires the service is the query originator, and the user who requests the service on behalf of the query originator is the query sender. The query originator selects the query sender with equal probability, which leads to anonymity in the network. The location revealed to the location service provider is a rectangle instead of an exact coordinate. We simulate the mobile network and show results for cloaking area sizes and performance as the density of users varies.

2015-05-01
Wang, S., Orwell, J., Hunter, G..  2014.  Evaluation of Bayesian and Dempster-Shafer approaches to fusion of video surveillance information. Information Fusion (FUSION), 2014 17th International Conference on. :1-7.

This paper presents the application of fusion methods to a visual surveillance scenario. The range of relevant features for re-identifying vehicles is discussed, along with the methods for fusing probabilistic estimates derived from these features. In particular, two statistical parametric fusion methods are considered: Bayesian Networks and the Dempster-Shafer approach. The main contribution of this paper is the development of a metric to allow direct comparison of the benefits of the two methods. This is achieved by generalising the Kelly betting strategy to accommodate a variable total stake for each sample, subject to a fixed expected (mean) stake. This metric provides a method to quantify the extra information provided by the Dempster-Shafer method, in comparison to a Bayesian Fusion approach.
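
The fixed-stake special case of the Kelly metric is easy to state: betting the predicted probabilities at fair odds against a uniform bookmaker grows log-wealth by log(K p(outcome)) per sample, so mean log-growth ranks fusion methods by usable information. A sketch with toy probabilities; the paper's variable-stake generalization is not reproduced.

```python
# Hedged sketch: Kelly-style log-wealth scoring of two fusion methods.
# Proportional betting at uniform fair odds K multiplies wealth by
# K * p(true outcome) per sample; higher mean log-growth = more information.
import numpy as np

def kelly_log_growth(pred_probs, outcomes):
    """Mean log-wealth growth when staking pred_probs at uniform fair odds."""
    pred_probs = np.asarray(pred_probs, float)
    k = pred_probs.shape[1]                        # fair odds pay k-for-1
    picked = pred_probs[np.arange(len(outcomes)), outcomes]
    return float(np.mean(np.log(k * picked)))

bayes_p = [[0.7, 0.2, 0.1], [0.5, 0.4, 0.1], [0.2, 0.6, 0.2]]
ds_p = [[0.6, 0.3, 0.1], [0.4, 0.5, 0.1], [0.3, 0.5, 0.2]]
y = [0, 1, 1]  # true identities of the re-observed vehicles (toy)
print(kelly_log_growth(bayes_p, y), kelly_log_growth(ds_p, y))
```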

Baraldi, A., Boschetti, L., Humber, M.L..  2014.  Probability Sampling Protocol for Thematic and Spatial Quality Assessment of Classification Maps Generated From Spaceborne/Airborne Very High Resolution Images. Geoscience and Remote Sensing, IEEE Transactions on. 52:701-760.

To deliver sample estimates provided with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary, but not sufficient, conditions: 1) all inclusion probabilities must be greater than zero in the target population to be sampled; if some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed; 2) the inclusion probabilities must be a) knowable for nonsampled units and b) known for those units selected in the sample: since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne very high resolution images, where: 1) an original Categorical Variable Pair Similarity Index (proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and 2) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session, the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence, and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with OQIs claimed for SIAM™ by related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities, in accordance with the visionary goal of the Global Earth Observation System of Systems initiative and the QA4EO international guidelines.
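
Conditions 1) and 2) above are exactly what a Horvitz-Thompson style estimator needs: inclusion probabilities enter as weights, and a zero probability makes a unit unrepresentable. A sketch with toy values:

```python
# Hedged sketch: why inclusion probabilities must be known and positive.
# They are the weights of the Horvitz-Thompson estimator used to generalize
# from the sample to the whole map; all data below are toy values.
import numpy as np

def horvitz_thompson_accuracy(correct, incl_prob, population_size):
    """Estimate overall map accuracy from a probability sample."""
    correct = np.asarray(correct, float)  # 1 if sampled unit is correct
    pi = np.asarray(incl_prob, float)     # known inclusion probabilities
    if np.any(pi <= 0):
        raise ValueError("all inclusion probabilities must be positive")
    return float(np.sum(correct / pi) / population_size)

# stratified toy sample: one class sampled much more heavily (higher pi)
correct = [1, 1, 0, 1, 1, 0, 1, 1]
pi = [0.001, 0.001, 0.001, 0.001, 0.02, 0.02, 0.02, 0.02]
print(horvitz_thompson_accuracy(correct, pi, population_size=5_000))
```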

2015-04-30
Hemalatha, A., Venkatesh, R..  2014.  Redundancy management in heterogeneous wireless sensor networks. Communications and Signal Processing (ICCSP), 2014 International Conference on. :1849-1853.

A wireless sensor network is a special type of ad hoc network, composed of a large number of sensor nodes spread over a wide geographical area. Each sensor node has wireless communication capability and sufficient intelligence for signal processing and for data dissemination to the collecting center. This paper deals with redundancy management for improving network efficiency and query reliability in heterogeneous wireless sensor networks. The proposed scheme finds a reliable path using a redundancy management algorithm and detects unreliable nodes, discarding their paths. The redundancy management algorithm finds the reliable path based on the redundancy level and the average distance between a source node and a destination node, analyzing the redundancy level in terms of path and source redundancy. To find the path from the source cluster head (CH) to the processing center, we propose intrusion tolerance in the presence of unreliable nodes. Finally, we apply our analysis results to the redundancy management algorithm to find the reliable path, improving network efficiency and query success probability.