Biblio

Found 232 results

Filters: Keyword is Testing
2017-03-08
LeSaint, J., Reed, M., Popick, P..  2015.  System security engineering vulnerability assessments for mission-critical systems and functions. 2015 Annual IEEE Systems Conference (SysCon) Proceedings. :608–613.

This paper describes multiple system security engineering techniques for assessing system security vulnerabilities and discusses the application of these techniques at different system maturity points. The proposed vulnerability assessment approach allows a systems engineer to identify and assess vulnerabilities early in the life cycle and to continually increase the fidelity of the vulnerability identification and assessment as the system matures.

Sun, Z., Meng, L., Ariyaeeinia, A..  2015.  Distinguishable de-identified faces. 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). 04:1–6.

The k-anonymity approach adopted by k-Same face de-identification methods enables these methods to serve their purpose of privacy protection. However, it also forces every k original faces to share the same de-identified face, making it impossible to track individuals in a k-Same de-identified video. To address this issue, this paper presents an approach to the creation of distinguishable de-identified faces. This new approach can serve privacy protection perfectly whilst producing de-identified faces that are as distinguishable as their original faces.

2017-02-27
Li, Z., Oechtering, T. J..  2015.  Privacy on hypothesis testing in smart grids. 2015 IEEE Information Theory Workshop - Fall (ITW). :337–341.

In this paper, we study the problem of privacy information leakage in a smart grid. The privacy risk is assumed to be caused by an unauthorized binary hypothesis test of the consumer's behaviour based on the smart meter readings of energy supplies from the energy provider. Additional energy supplies are produced by an alternative energy source. A controller equipped with an energy storage device manages the energy inflows to satisfy the energy demand of the consumer. We study the optimal energy control strategy which minimizes the asymptotic exponential decay rate of the minimum Type II error probability in the unauthorized hypothesis test, thereby suppressing the privacy risk. Our study shows that the cardinality of the energy supplies from the energy provider for the optimal control strategy is no more than two. This result implies a simple objective for the optimal energy control strategy. When additional side information is available to the adversary, the optimal control strategy and privacy risk are compared with the case in which only the smart meter readings leak to the adversary.
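As a rough illustration of the asymptotic quantity at stake (not taken from the paper): by the Chernoff-Stein lemma, the exponential decay rate of the minimum Type II error probability is the Kullback-Leibler divergence between the meter-reading distributions induced under the two hypotheses, so a privacy-aware controller prefers supply policies that keep this divergence small. The policy names and distributions below are purely illustrative assumptions.

```python
# Hedged sketch (not the authors' algorithm): compare candidate control
# policies by the KL divergence between the meter-reading distributions
# they induce under the two behaviour hypotheses; by Chernoff-Stein this
# divergence is the Type II error exponent of the adversary's test.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) for discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical meter-reading distributions over a small supply alphabet
# {0 kW, 1 kW, 2 kW} induced by two candidate policies under H0 and H1.
policies = {
    "no storage":   {"H0": [0.2, 0.5, 0.3], "H1": [0.6, 0.3, 0.1]},
    "with storage": {"H0": [0.4, 0.4, 0.2], "H1": [0.5, 0.3, 0.2]},
}

for name, dists in policies.items():
    exponent = kl_divergence(dists["H0"], dists["H1"])
    print(f"{name}: Type II error exponent ~ {exponent:.3f} (smaller = more private)")
```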

2017-02-21
Cavalleri, K., Brinkman, B..  2015.  Water treatment in context: resources and African religion. 2015 Systems and Information Engineering Design Symposium. :19-23.

Drinking water availability is a crucial problem that must be addressed in order to improve the quality of life of individuals living in developing nations. Improving water supply availability is important for public health, as it is the third highest risk factor for poor health in developing nations with high mortality rates. This project researched drinking water filtration for areas of Sub-Saharan Africa near existing bodies of water, where the populations are completely reliant on collecting from surface water sources: the most contaminated water source type. Water filtration methods that can be created entirely by the consumer would alleviate dependence on aid organizations in developing nations, put the consumers in control, and improve public health. Filtration processes pass water through a medium that catches contaminants through physical entrapment or absorption and thus yields a cleaner effluent. When exploring different materials for filtration, removal of contaminants and hydraulic conductivity are the two most important considerations. Not only does the method have to treat the water, it also has to do so quickly enough to produce potable water at a rate that keeps up with everyday needs. Cement is easily accessible in Sub-Saharan regions. Most concrete mixtures are not meant to be pervious, since concrete is a construction material used for its compressive strength; however, reduced water content in a cement mixture gives it higher permeability. Several concrete samples of varying thicknesses and water concentrations were created, and bacterial count tests were performed on both pre-filtered and filtered water samples. Concrete filtration does remove bacteria from drinking water; however, the method can still be improved upon.

2015-05-06
Holm, H..  2014.  Signature Based Intrusion Detection for Zero-Day Attacks: (Not) A Closed Chapter? System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4895-4904.

A frequent claim that has not been validated is that signature based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks on the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean of 17% detection). The detection rate is, however, overall greater for theoretically known attacks (a mean of 54% detection). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analyses of these aspects suggest that a conservative estimate of zero-day detection by Snort is 8.2%.

Kannan, S., Karimi, N., Karri, R., Sinanoglu, O..  2014.  Detection, diagnosis, and repair of faults in memristor-based memories. VLSI Test Symposium (VTS), 2014 IEEE 32nd. :1-6.

Memristors are an attractive option for use in future memory architectures due to their non-volatility, high density and low power operation. Notwithstanding these advantages, memristors and memristor-based memories are prone to high defect densities due to the non-deterministic nature of nanoscale fabrication. The typical approach to fault detection and diagnosis in memories entails testing one memory cell at a time. This is time consuming and does not scale for the dense, memristor-based memories. In this paper, we integrate solutions for detecting and locating faults in memristors, and ensure post-silicon recovery from memristor failures. We propose a hybrid diagnosis scheme that exploits sneak-paths inherent in crossbar memories, and uses March testing to test and diagnose multiple memory cells simultaneously, thereby reducing test time. We also provide a repair mechanism that prevents faults in the memory from being activated. The proposed schemes enable and leverage sneak paths during fault detection and diagnosis modes, while still maintaining a sneak-path free crossbar during normal operation. The proposed hybrid scheme reduces fault detection and diagnosis time by ~44%, compared to traditional March tests, and repairs the faulty cell with minimal overhead.
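For readers unfamiliar with March testing, the following minimal sketch runs a conventional cell-by-cell March C- pass over a toy memory model with one injected stuck-at fault; the paper's sneak-path-based parallel testing of multiple cells is not modelled here, and the memory model is an illustrative assumption.

```python
# Hedged sketch: a baseline cell-by-cell March C- pass over a toy memory
# model with one injected stuck-at-0 fault. The paper improves on this
# baseline by exploiting sneak paths to test several cells at once.
class ToyMemory:
    def __init__(self, size, stuck_at_zero=()):
        self.cells = [0] * size
        self.stuck = set(stuck_at_zero)       # addresses stuck at 0

    def write(self, addr, value):
        self.cells[addr] = 0 if addr in self.stuck else value

    def read(self, addr):
        return self.cells[addr]

def march_c_minus(mem, size):
    """Return the set of addresses whose reads mismatch the expected value."""
    faulty = set()
    for a in range(size):                     # up(w0)
        mem.write(a, 0)
    for a in range(size):                     # up(r0, w1)
        if mem.read(a) != 0: faulty.add(a)
        mem.write(a, 1)
    for a in range(size):                     # up(r1, w0)
        if mem.read(a) != 1: faulty.add(a)
        mem.write(a, 0)
    for a in reversed(range(size)):           # down(r0, w1)
        if mem.read(a) != 0: faulty.add(a)
        mem.write(a, 1)
    for a in reversed(range(size)):           # down(r1, w0)
        if mem.read(a) != 1: faulty.add(a)
        mem.write(a, 0)
    for a in range(size):                     # up(r0)
        if mem.read(a) != 0: faulty.add(a)
    return faulty

mem = ToyMemory(16, stuck_at_zero={5})
print("Faulty cells detected:", march_c_minus(mem, 16))   # expect {5}
```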

Yier Jin, Sullivan, D..  2014.  Real-time trust evaluation in integrated circuits. Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014. :1-6.

The use of side-channel measurements and fingerprinting, in conjunction with statistical analysis, has proven to be the most effective method for accurately detecting hardware Trojans in fabricated integrated circuits. However, these post-fabrication trust evaluation methods overlook the capabilities of advanced design skills that attackers can use in designing sophisticated Trojans. To this end, we have designed a Trojan using power-gating techniques and demonstrate that it can be masked from advanced side-channel fingerprinting detection while dormant. We then propose a real-time trust evaluation framework that continuously monitors the on-board global power consumption to assess chip trustworthiness. The measurements obtained corroborate our framework's effectiveness for detecting Trojans. Finally, the results presented are experimentally verified by performing measurements on fabricated Trojan-free and Trojan-infected variants of a reconfigurable linear feedback shift register (LFSR) array.
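A minimal sketch of the general idea of power-based trust monitoring (not the authors' framework): compare live global power samples against a golden, Trojan-free fingerprint and raise an alarm on large deviations. The traces, units and threshold below are illustrative assumptions.

```python
# Hedged sketch: flag a chip as suspicious when its global power trace drifts
# from a golden (Trojan-free) fingerprint by more than k standard deviations.
import statistics

def trust_monitor(golden_trace, live_trace, k=3.0):
    """Indices of live samples deviating more than k sigma from the golden mean."""
    mu = statistics.mean(golden_trace)
    sigma = statistics.stdev(golden_trace)
    return [i for i, sample in enumerate(live_trace)
            if abs(sample - mu) > k * sigma]

golden = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3]    # mW, illustrative fingerprint
live   = [10.1, 10.0, 12.9, 10.2, 9.9, 13.1, 10.0]   # anomalies injected at indices 2 and 5
print("Suspicious samples at indices:", trust_monitor(golden, live))
```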

Ramdas, A., Saeed, S.M., Sinanoglu, O..  2014.  Slack removal for enhanced reliability and trust. Design Technology of Integrated Systems In Nanoscale Era (DTIS), 2014 9th IEEE International Conference On. :1-4.

Timing slacks possibly lead to reliability issues and/or security vulnerabilities, as they may hide small delay defects and malicious circuitries injected during fabrication, namely, hardware Trojans. While possibly harmless immediately after production, small delay defects may trigger reliability problems as the part is being used in the field, presenting a significant threat for mission-critical applications. Hardware Trojans remain dormant while the part is tested and validated, but then get activated to launch an attack when the chip is deployed in security-critical applications. In this paper, we take a deeper look into these problems and their underlying reasons, and propose a design technique to maximize the detection of small delay defects as well as the hardware Trojans. The proposed technique eliminates all slacks by judiciously inserting delay units in a small set of locations in the circuit, thereby rendering a simple set of transition fault patterns quite effective in catching parts with small delay defects or Trojans. Experimental results also justify the efficacy of the proposed technique in improving the quality of test while retaining the pattern count and care bit density intact.

Mukaddam, A., Elhajj, I., Kayssi, A., Chehab, A..  2014.  IP Spoofing Detection Using Modified Hop Count. Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on. :512-516.

With the global widespread usage of the Internet, more and more cyber-attacks are being performed. Many of these attacks utilize IP address spoofing. This paper describes IP spoofing attacks and the proposed methods currently available to detect or prevent them. In addition, it presents a statistical analysis of the Hop Count parameter used in our proposed IP spoofing detection algorithm. We propose an algorithm, inspired by the Hop Count Filtering (HCF) technique, that changes the learning phase of HCF to include all the possible available Hop Count values. Compared to the original HCF method and its variants, our proposed method increases the true positive rate by at least 9% and consequently increases the overall accuracy of an intrusion detection system by at least 9%. Our proposed method performs better in general than the HCF method and its variants.
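A minimal sketch of the hop-count check that HCF-style detectors rely on, assuming the usual set of common initial TTL values; the learned per-IP hop-count table and the widened learning phase proposed in the paper are approximated here by a hypothetical "learned" dictionary.

```python
# Hedged sketch of Hop-Count Filtering: infer the hop count from the received
# TTL and compare it against the set of hop counts learned for the source IP.
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def hop_count(received_ttl):
    """Infer hops as the distance to the smallest common initial TTL above it."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

def looks_spoofed(src_ip, received_ttl, learned_hops):
    """Flag the packet when the inferred hop count was never learned for src_ip."""
    return hop_count(received_ttl) not in learned_hops.get(src_ip, set())

# Illustrative learned table (the real system builds this during its learning phase).
learned = {"203.0.113.7": {12, 13, 14}}
print(looks_spoofed("203.0.113.7", 64 - 13, learned))   # False: 13 hops, known
print(looks_spoofed("203.0.113.7", 128 - 3, learned))   # True: 3 hops, never seen
```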

2015-05-05
Thompson, M., Evans, N., Kisekka, V..  2014.  Multiple OS rotational environment an implemented Moving Target Defense. Resilient Control Systems (ISRCS), 2014 7th International Symposium on. :1-6.

Cyber-attacks continue to pose a major threat to existing critical infrastructure. Although suggestions for defensive strategies abound, Moving Target Defense (MTD) has only recently gained attention as a possible solution for mitigating cyber-attacks. The current work proposes an MTD technique that provides enhanced security through a rotation of multiple operating systems. The MTD solution developed in this research utilizes existing technology to provide a feasible dynamic defense solution that can be deployed easily in a real networking environment. In addition, the system we developed was tested extensively for effectiveness using CORE Impact Pro (CORE), Nmap, and manual penetration tests. The test results showed that platform diversity and rotation offer improved security. In addition, the likelihood of a successful attack decreased proportionally with time between rotations.

Ferguson, B., Tall, A., Olsen, D..  2014.  National Cyber Range Overview. Military Communications Conference (MILCOM), 2014 IEEE. :123-128.

The National Cyber Range (NCR) is an innovative Department of Defense (DoD) resource originally established by the Defense Advanced Research Projects Agency (DARPA) and now under the purview of the Test Resource Management Center (TRMC). It provides a unique environment for cyber security testing throughout the program development life cycle using unique methods to assess resiliency to advanced cyberspace security threats. This paper describes what a cyber security range is, how it might be employed, and the advantages a program manager (PM) can gain in applying the results of range events. Creating realism in a test environment isolated from the operational environment is a special challenge in cyberspace. Representing the scale and diversity of the complex DoD communications networks at a fidelity detailed enough to realistically portray current and anticipated attack strategies (e.g., Malware, distributed denial of service attacks, cross-site scripting) is complex. The NCR addresses this challenge by representing an Internet-like environment by employing a multitude of virtual machines and physical hardware augmented with traffic emulation, port/protocol/service vulnerability scanning, and data capture tools. Coupled with a structured test methodology, the PM can efficiently and effectively engage with the Range to gain cyberspace resiliency insights. The NCR capability, when applied, allows the DoD to incorporate cyber security early to avoid high cost integration at the end of the development life cycle. This paper provides an overview of the resources of the NCR which may be especially helpful for DoD PMs to find the best approach for testing the cyberspace resiliency of their systems under development.

Aydin, A., Alkhalaf, M., Bultan, T..  2014.  Automated Test Generation from Vulnerability Signatures. Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on. :193-202.

Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, there are several factors that limit the applicability of static string analysis techniques in general: 1) undecidability of static string analysis requires the use of approximations, leading to false positives, 2) static string analysis tools do not handle all string operations, 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
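The following sketch illustrates transition-coverage test generation from an automaton, one of the coverage criteria named above, using a toy DFA as a stand-in for a real vulnerability signature; it is not the authors' implementation.

```python
# Hedged sketch: given a vulnerability signature represented as a DFA, emit one
# accepted string per transition (transition coverage). The toy automaton below
# treats any string containing "<s" as an exploit input.
from collections import deque

def shortest_string(transitions, src, targets):
    """BFS over the DFA graph: shortest input string from `src` into `targets`."""
    queue, seen = deque([(src, "")]), {src}
    while queue:
        state, word = queue.popleft()
        if state in targets:
            return word
        for (s, sym), nxt in transitions.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return None

def transition_coverage_tests(start, accepting, transitions):
    """One accepting string exercising each transition, when one exists."""
    tests = set()
    for (src, sym), dst in transitions.items():
        prefix = shortest_string(transitions, start, {src})
        suffix = shortest_string(transitions, dst, set(accepting))
        if prefix is not None and suffix is not None:
            tests.add(prefix + sym + suffix)
    return tests

t = {("q0", "<"): "q1", ("q0", "a"): "q0",
     ("q1", "s"): "q2", ("q1", "a"): "q0",
     ("q2", "a"): "q2"}
print(transition_coverage_tests("q0", {"q2"}, t))
```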

Coelho Martins da Fonseca, J.C., Amorim Vieira, M.P..  2014.  A Practical Experience on the Impact of Plugins in Web Security. Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on. :21-30.

In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality, but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in the plugins of the web application. The goal is twofold: 1) to study the effectiveness of static analysis on the detection of web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins in the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins that are currently deployed worldwide have dangerous Cross Site Scripting and SQL Injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.

Buja, G., Bin Abd Jalil, K., Bt Hj Mohd Ali, F., Rahman, T.F.A..  2014.  Detection model for SQL injection attack: An approach for preventing a web application from the SQL injection attack. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :60-64.

Over the past 20 years the use of the web in daily life has grown steadily, and with it the use of web applications. Most web applications in existence today have vulnerabilities that could be exploited by an unauthorized person. Some well-known web application vulnerabilities are Structured Query Language (SQL) Injection, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). By exploiting these vulnerabilities, an attacker can gain information about users and damage the reputation of the affected organization. Developers of web applications often do not realize that their applications are vulnerable and only find out when their code is attacked or manipulated. This is understandable: a web application may contain thousands of lines of code, so loopholes are not easy to detect. Moreover, as hacking tools and tutorials become easier to obtain, more new attackers emerge. Even though SQL injection is easy to protect against, a large number of systems on the internet remain vulnerable to this type of attack because a few subtle conditions can go undetected. In this paper we therefore propose a detection model for detecting and recognizing the SQL Injection vulnerability based on defined and identified criteria. In addition, the proposed detection model is able to generate a report on the vulnerability level of the web application. As a consequence, the proposed detection model should decrease the possibility of a SQL Injection attack being launched against the web application.

Bozic, J., Wotawa, F..  2014.  Security Testing Based on Attack Patterns. Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on. :4-11.

Testing for security related issues is an important task of growing interest due to the vast amount of applications and services available over the internet. In practice, testing for security is often performed manually, with the consequences of higher costs and no integration of security testing into today's agile software development processes. In order to bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing approaches. Most of these approaches rely on models of the system or the application domain. In this paper we suggest formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study.
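A minimal sketch of the underlying idea, with a hypothetical attack pattern: encode the pattern as a small state machine whose transitions name concrete test actions, then walk the machine to obtain an executable test sequence. The paper itself uses UML state charts and automated execution, which this toy dictionary only approximates.

```python
# Hedged sketch: an attack pattern as a state machine; transitions carry test
# actions, and walking the machine yields an executable test sequence.
ATTACK_PATTERN = {
    "start":             [("open_login_page", "page_loaded")],
    "page_loaded":       [("submit_payload", "response_received")],
    "response_received": [("check_reflected_payload", "verdict")],
}

def generate_test_sequence(pattern, initial="start", final="verdict"):
    """Follow the first transition out of each state until the final state."""
    state, steps = initial, []
    while state != final:
        action, state = pattern[state][0]
        steps.append(action)
    return steps

print(generate_test_sequence(ATTACK_PATTERN))
# ['open_login_page', 'submit_payload', 'check_reflected_payload']
```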

Abgrall, E., le Traon, Y., Gombault, S., Monperrus, M..  2014.  Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing. Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on. :34-41.

One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During this last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is done using systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and we propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.

Mewara, B., Bairwa, S., Gajrani, J., Jain, V..  2014.  Enhanced browser defense for reflected Cross-Site Scripting. Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on. :1-6.

Cross-Site Scripting (XSS) is a common attack technique in which an attacker injects code into the output of a web page that is delivered to the visitor's web browser; the injected code then executes automatically and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented, most of them filters that sanitize malicious input. However, many of these filters do not protect against newly designed sophisticated attacks such as multiple points of injection, injection into script, etc. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting vulnerable web applications which can be exploited using the above-mentioned sophisticated attacks. Results show that the proposed approach provides a higher and more accurate detection rate of exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities; this can be considered an effective solution if it is integrated inside the browser rather than enforced as an extension.
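As a generic illustration of reflected-XSS probing (not the detection logic of the proposed extension or of XSS-Me), one can inject a unique marker containing the characters an XSS payload needs and check whether the response reflects them unencoded; the responses below are simulated.

```python
# Hedged sketch: classify a reflection point by whether the injected probe
# comes back unencoded (candidate for exploitation) or HTML-encoded (filtered).
import html
import uuid

def classify_reflection(probe, response_body):
    if probe in response_body:
        return "reflected unencoded (potentially exploitable)"
    if html.escape(probe) in response_body:
        return "reflected but HTML-encoded (likely filtered)"
    return "not reflected"

marker = uuid.uuid4().hex[:8]
probe = f'"><svg onload=alert({marker})>'

# Simulated server responses for illustration only.
vulnerable_page = f"<p>You searched for {probe}</p>"
filtered_page   = f"<p>You searched for {html.escape(probe)}</p>"

print(classify_reflection(probe, vulnerable_page))
print(classify_reflection(probe, filtered_page))
```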

Rocha, T.S., Souto, E..  2014.  ETSSDetector: A Tool to Automatically Detect Cross-Site Scripting Vulnerabilities. Network Computing and Applications (NCA), 2014 IEEE 13th International Symposium on. :306-309.

The inappropriate use of features intended to improve usability and interactivity of web applications has resulted in the emergence of various threats, including Cross-Site Scripting (XSS) attacks. In this work, we developed ETSSDetector, a generic and modular web vulnerability scanner that automatically analyzes web applications to find XSS vulnerabilities. ETSSDetector is able to identify and analyze all data entry points of the application and generate specific code injection tests for each one. The results show that correctly filling the input fields with only valid information ensures better test effectiveness, increasing the detection rate of XSS attacks.

Gupta, M.K., Govil, M.C., Singh, G..  2014.  Static analysis approaches to detect SQL injection and cross site scripting vulnerabilities in web applications: A survey. Recent Advances and Innovations in Engineering (ICRAIE), 2014. :1-5.

Dependence on web applications has been increasing rapidly in recent times for social communication, health, financial transactions and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various vulnerabilities and cause these applications to fail. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) are the most dangerous security vulnerabilities exploited in popular web applications such as eBay, Google, Facebook and Twitter. Research on defensive programming, vulnerability detection and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines to develop secure applications, but developers mostly do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect the applications from attack during their execution in the actual environment. Accurately detecting SQLI and XSS vulnerabilities in the coding phase of the software development life cycle remains difficult. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static-analysis-based approaches to detecting SQL Injection and Cross-Site Scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify the weaknesses in source code before they can be exploited in the actual environment. This paper should help researchers identify future directions for securing legacy web applications in the early phases of the software development life cycle.

Bande, V., Pop, S., Pitica, D..  2014.  Smart diagnose procedure for data acquisition systems inside dams. Design and Technology in Electronic Packaging (SIITME), 2014 IEEE 20th International Symposium for. :179-182.

This paper presents an intelligent data acquisition system for dam monitoring and diagnosis. The system is built around the RS485 communication standard and uses its own communication protocol [2]. The aim of the system is to monitor all signal levels on the communication bus and to detect out-of-action data loggers. The diagnostic test extracts the following functional parameters: the supply voltage, and the absolute and common-mode values of the differential signals used in data transmission (denoted “A” and “B”). By analyzing this acquired information, it is possible to find short circuits or open circuits across the communication bus. The measurement and signal-processing functions for flaw detection are implemented in the system's central processing unit. The next testing step, finding the out-of-action data loggers, is performed by trying to communicate with every data logger in the network. The lack of any response from a data logger is interpreted as an error, and using the code of the data logger's microcontroller it is possible to find its exact position within the dam infrastructure. The novelty of this procedure is that it completely automates the diagnosis, which until now was performed visually by checking every data logger.
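A minimal sketch of the poll-every-logger step described above, with a hypothetical "query_logger" stub standing in for the real RS485 request/response exchange; signal-level measurements (supply voltage, "A"/"B" lines) are out of scope here.

```python
# Hedged sketch: poll each data logger on the bus and report the ones that
# never answer after a few retries, mirroring the missing-response check.
def query_logger(logger_id):
    """Stand-in for a request/response exchange over the RS485 bus."""
    RESPONDING = {1, 2, 3, 5, 6}              # illustrative: logger 4 is silent
    return logger_id in RESPONDING

def diagnose_bus(logger_ids, retries=3):
    """Return the loggers that failed to answer after `retries` attempts."""
    out_of_action = []
    for logger_id in logger_ids:
        if not any(query_logger(logger_id) for _ in range(retries)):
            out_of_action.append(logger_id)
    return out_of_action

print("Out-of-action data loggers:", diagnose_bus(range(1, 7)))   # expect [4]
```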

2015-05-04
Umam, E.G., Sriramb, E.G..  2014.  Robust encryption algorithm based SHT in wireless sensor networks. Information Communication and Embedded Systems (ICICES), 2014 International Conference on. :1-5.

In some applications, the locations of events reported by a sensor network need to remain anonymous; that is, unauthorized observers should be unable to detect the origin of such events by analyzing the network traffic. The authors analyze two kinds of problems: communication overhead and computational load. In this paper, the authors give a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability" and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to a statistical problem. The authors show that existing approaches for designing statistically anonymous systems introduce correlation in real intervals while fake intervals are uncorrelated. The authors show how mapping source anonymity to sequential hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into finding an appropriate data transformation that removes or minimizes the effect of the nuisance information using a robust encryption algorithm. By doing so, the authors transform the problem of analyzing real-valued sample points into one of analyzing binary codes, which opens the door for coding theory to be incorporated into the study of anonymous networks. Existing work is unable to detect an unauthorized observer in the network traffic; this work, however, mainly focuses on enhancing source anonymity against correlation tests, since the main goal of source location privacy is to hide the existence of real events.

Kaghaz-Garan, S., Umbarkar, A., Doboli, A..  2014.  Joint localization and fingerprinting of sound sources for auditory scene analysis. Robotic and Sensors Environments (ROSE), 2014 IEEE International Symposium on. :49-54.

In the field of scene understanding, researchers have mainly focused on using video/images to extract different elements in a scene. The computational as well as monetary cost associated with such implementations is high. This paper proposes a low-cost system which uses sound-based techniques in order to jointly perform localization as well as fingerprinting of the sound sources. A network of embedded nodes is used to sense the sound inputs. Phase-based sound localization and Support-Vector Machine classification are used to locate and classify elements of the scene, respectively. The fusion of all this data presents a complete “picture” of the scene. The proposed concepts are applied to a vehicular-traffic case study. Experiments show that the system has a fingerprinting accuracy of up to 97.5%, localization error less than 4 degrees and scene prediction accuracy of 100%.
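A minimal sketch of the two building blocks named in the abstract, under illustrative assumptions: a time-difference-of-arrival bearing estimate for a two-microphone node, and an SVM fingerprinting the source type. The features, labels and geometry are made up; the paper's actual features differ.

```python
# Hedged sketch: (1) bearing from the inter-microphone time difference,
# (2) SVM classification of a sound source from toy fingerprint features.
import numpy as np
from sklearn.svm import SVC

def bearing_from_tdoa(delta_t, mic_spacing=0.2, speed_of_sound=343.0):
    """Angle of arrival (degrees) from the inter-microphone time difference."""
    s = np.clip(speed_of_sound * delta_t / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

print(f"Bearing: {bearing_from_tdoa(2.9e-4):.1f} degrees")

# Tiny illustrative fingerprinting set: [spectral centroid (Hz), RMS energy].
X = np.array([[400, 0.30], [420, 0.28], [1500, 0.10], [1600, 0.12]])
y = np.array(["truck", "truck", "car", "car"])
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("Predicted class:", clf.predict([[450, 0.25]])[0])
```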

Hongbo Liu, Jie Yang, Sidhom, S., Yan Wang, YingYing Chen, Fan Ye.  2014.  Accurate WiFi Based Localization for Smartphones Using Peer Assistance. Mobile Computing, IEEE Transactions on. 13:2199-2214.

Highly accurate indoor localization of smartphones is critical to enable novel location based features for users and businesses. In this paper, we first conduct an empirical investigation of the suitability of WiFi localization for this purpose. We find that although reasonable accuracy can be achieved, significant errors (e.g., 6-8 m) always exist. The root cause is the existence of distinct locations with similar signatures, which is a fundamental limit of pure WiFi-based methods. Inspired by the high densities of smartphones in public spaces, we propose a peer assisted localization approach to eliminate such large errors. It obtains accurate acoustic ranging estimates among peer phones, then maps their locations jointly against the WiFi signature map subject to the ranging constraints. We devise techniques for fast acoustic ranging among multiple phones and build a prototype. Experiments show that it can reduce the maximum and 80-percentile errors to as small as 2 m and 1 m, in time no longer than the original WiFi scanning, with negligible impact on battery lifetime.
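A minimal sketch of the joint-mapping idea, under illustrative assumptions: each phone has several candidate positions with similar WiFi signatures, and an acoustic range measured between two phones is used to pick the candidate pair most consistent with it. Coordinates and the measured range are made up.

```python
# Hedged sketch: resolve ambiguous WiFi-fingerprint candidates for two phones
# by choosing the candidate pair whose distance best matches the acoustic range.
import itertools
import math

def pick_consistent_pair(cands_a, cands_b, measured_range):
    """Candidate pair whose Euclidean distance best matches the acoustic range."""
    return min(itertools.product(cands_a, cands_b),
               key=lambda pair: abs(math.dist(*pair) - measured_range))

phone_a_candidates = [(0.0, 0.0), (6.5, 0.0)]     # ambiguous WiFi matches (m)
phone_b_candidates = [(2.0, 0.0), (9.0, 0.0)]
acoustic_range_m = 2.1                            # peer-to-peer ranging result

print(pick_consistent_pair(phone_a_candidates, phone_b_candidates, acoustic_range_m))
# -> ((0.0, 0.0), (2.0, 0.0)), the pair consistent with the ~2 m ranging
```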

2015-05-01
do Carmo, R., Hollick, M..  2014.  Analyzing active probing for practical intrusion detection in Wireless Multihop Networks. Wireless On-demand Network Systems and Services (WONS), 2014 11th Annual Conference on. :77-80.

Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. It has been shown that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. However, understanding its interworking with real networks is still an unexplored challenge. In this paper, we investigate this in practice. We identify the general functional parameters that can be controlled, and by means of extensive experimentation, we tune these parameters and analyze the trade-offs between them, aiming at reducing false positives, overhead, and detection time. The traces we collected help us to understand when and why the active probing fails, and let us present countermeasures to prevent it.

do Carmo, R., Hoffmann, J., Willert, V., Hollick, M..  2014.  Making active-probing-based network intrusion detection in Wireless Multihop Networks practical: A Bayesian inference approach to probe selection. Local Computer Networks (LCN), 2014 IEEE 39th Conference on. :345-353.

Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. The distributed nature of the network makes centralized intrusion detection difficult, while resource constraints of the nodes and the characteristics of the wireless medium often render decentralized, node-based approaches impractical. We demonstrate that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. The key contribution of this paper is to optimize the active probing process: we introduce a general Bayesian model and design a probe selection algorithm that reduces the number of probes while maximizing the insights gathered by the AP-NIDS. We validate our model by means of testbed experimentation. We integrate it into our open-source AP-NIDS DogoIDS and run it in an indoor wireless mesh testbed utilizing the IEEE 802.11s protocol. For the example of a selective packet dropping attack, we develop the detection states for our Bayes model, and show its feasibility. We demonstrate that our approach does not need to execute the complete set of probes, yet we obtain good detection rates.
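A minimal sketch of Bayesian probing under illustrative drop probabilities (not the paper's model): update the belief that a node is selectively dropping packets after each probe outcome and stop once the belief is confident, so the full probe set need not be executed.

```python
# Hedged sketch: Bayes update of P(attack) from probe outcomes, with early
# stopping once the belief crosses a confidence threshold.
def posterior(prior_attack, dropped, p_drop_attack=0.7, p_drop_benign=0.05):
    """Posterior P(attack) after observing one probe dropped or answered."""
    like_attack = p_drop_attack if dropped else 1 - p_drop_attack
    like_benign = p_drop_benign if dropped else 1 - p_drop_benign
    num = like_attack * prior_attack
    return num / (num + like_benign * (1 - prior_attack))

def probe_until_confident(observations, prior=0.1, hi=0.95, lo=0.01):
    """Consume probe outcomes until the belief is confidently high or low."""
    belief, used = prior, 0
    for dropped in observations:
        belief = posterior(belief, dropped)
        used += 1
        if belief >= hi or belief <= lo:
            break
    return belief, used

belief, used = probe_until_confident([True, True, True, False, True, True])
print(f"P(attack) = {belief:.3f} after {used} of 6 probes")
```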