Biblio
Realistic state-based discrete-event simulation models are often quite complex. The complexity frequently manifests in models that (a) contain a large number of input variables whose values are difficult to determine precisely, and (b) take a relatively long time to solve. Traditionally, models with many poorly known input variables are understood through sensitivity analysis (SA) and uncertainty quantification (UQ); however, performing SA and UQ can be prohibitively time consuming. In this work, we present a novel approach for performing fast and thorough SA and UQ on a metamodel composed of a stacked ensemble of regressors that emulates the behavior of the base model. We demonstrate the approach using a previously published botnet model as a test case, showing that the metamodel approach is several orders of magnitude faster than the base model, more accurate than existing approaches, and amenable to SA and UQ.
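A minimal sketch of the stacked-ensemble metamodel idea, using scikit-learn. The expensive base model is replaced here by a cheap stand-in function, and the choice of level-0 regressors and the Monte Carlo UQ step are illustrative assumptions, not the authors' configuration.

```python
# Sketch: stacked-ensemble metamodel for fast SA/UQ. base_model() is a
# cheap stand-in for the expensive discrete-event simulation.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)

def base_model(x):
    # Stand-in for the expensive simulation: maps inputs to one output.
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

# Sample the input space once (the only expensive step).
X = rng.uniform(-1, 1, size=(500, 3))
y = base_model(X)

# Stacked ensemble: diverse level-0 regressors, linear level-1 blender.
meta = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=RidgeCV(),
)
meta.fit(X, y)

# UQ by cheap Monte Carlo on the metamodel instead of the base model.
X_mc = rng.uniform(-1, 1, size=(100_000, 3))
y_mc = meta.predict(X_mc)
print(f"output mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")
```

Once fitted, the metamodel answers the 100,000 Monte Carlo queries in seconds; querying the base simulation that many times would dominate the analysis cost.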
Digitization is now pervasive, spreading across central governments and local authorities alike. It is hoped that using open government data for scientific research will enhance the public good and social justice. Given the recently adopted European General Data Protection Regulation, the central challenge in Portugal and other European countries is how to strike the right balance between personal data privacy and the value of the data for research. This work presents a sensitivity study of a data anonymization procedure applied to real open government data from the Brazilian higher education evaluation system. The ARX k-anonymization algorithm was applied, with and without generalization of some variables of research value. Analysis of the amount of information lost and the risk of re-identification suggests that the anonymization process may lead to the under-representation of minorities and sociodemographically disadvantaged groups. These findings should enable scientists to improve the balance among risk, data usability, and contributions to public-good policies and practices.
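ARX itself is a Java tool, so the sketch below only illustrates the underlying k-anonymity arithmetic the study measures: equivalence-class sizes determine re-identification risk, and generalizing a quasi-identifier enlarges classes at the cost of detail. The toy table and the age-band generalization are invented for illustration.

```python
# Toy k-anonymity trade-off: smallest equivalence class size k gives a
# worst-case re-identification risk of 1/k; generalization raises k.
import pandas as pd

df = pd.DataFrame({
    "age":    [23, 24, 35, 36, 36, 52],
    "region": ["N", "N", "S", "S", "S", "N"],
})

def risk(frame, quasi_ids):
    # Worst-case re-identification risk = 1 / smallest equivalence class.
    sizes = frame.groupby(quasi_ids).size()
    return 1.0 / sizes.min(), int(sizes.min())

print(risk(df, ["age", "region"]))       # raw data: k=1, risk=1.0

# Generalize age into decades (information loss, but larger classes).
df["age_band"] = (df["age"] // 10) * 10
print(risk(df, ["age_band", "region"]))  # generalized: larger k, lower risk
```

The same mechanism explains the under-representation effect: records from small minority groups form small equivalence classes and are the first to be suppressed or coarsened.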
Presented at the NSA Science of Security Quarterly Meeting, July 2016.
The manufacturing process of electrical machines influences their geometric dimensions and material properties, e.g. the yoke thickness. These influences appear as statistical variation within manufacturing tolerances, and their potential impact on the mechanical torque output has not yet been fully studied. This paper conducts a sensitivity analysis for geometric and material parameters. For the general approach, these parameters are varied uniformly within a range of 10 %. Two-dimensional finite element analysis is used to simulate the influences at three characteristic operating points. The studied object is an internal permanent magnet machine in the 100 kW range used for hybrid drive applications. The results show a significant dependency on the rotational speed. The general validity is studied by varying the boundary conditions and examining two further machine designs; the qualitative results match, with only small quantitative deviations. To detect the impact of the manufacturing process, realistic tolerance ranges are used. This investigation identifies the air gap and the magnet remanence induction as the main parameters driving potential torque fluctuation.
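A minimal sketch of the one-at-a-time sensitivity scan described above: each parameter is swept uniformly across ±10 % of its nominal value and the torque response is recorded. The `torque()` function and the nominal parameter values are stand-ins for the 2-D finite element solve, which cannot be reproduced here.

```python
# One-at-a-time sensitivity scan over +/-10 % parameter variations.
# torque() is an illustrative surrogate for the 2-D FEA solve.
import numpy as np

nominal = {"airgap_mm": 0.7, "remanence_T": 1.2, "yoke_mm": 5.0}

def torque(p):
    # Surrogate: torque rises with remanence, falls with a wider air gap.
    return 150.0 * p["remanence_T"] / (1.0 + 2.0 * p["airgap_mm"]) + 0.5 * p["yoke_mm"]

for name, base in nominal.items():
    spans = []
    for factor in np.linspace(0.9, 1.1, 11):   # +/-10 % sweep
        p = dict(nominal)
        p[name] = base * factor
        spans.append(torque(p))
    print(f"{name:12s} torque range: {min(spans):7.2f} .. {max(spans):7.2f} Nm")
```

The spread of each parameter's torque range is the sensitivity measure; in the paper the air gap and remanence induction produce the widest spreads.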
Poisoning attacks, in which an adversary misleads the learning process by manipulating the training set, significantly affect the performance of classifiers in security applications. This paper proposes a robust learning method which reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under a small perturbation of the input, is measured for each training sample using the Localized Generalization Error Model (L-GEM). The classifier's output on attack samples is likely to be sensitive and inaccurate, since these samples differ from the untainted ones. An importance score is assigned to each sample according to its localized generalization error bound, and the classifier is trained on a new training set obtained by resampling the samples according to these scores. An RBF neural network (RBFNN) is used as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under the well-known label-flip poisoning attacks, including nearest-first and farthest-first flip attacks.
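A sketch of the control flow, under stated substitutions: the paper's RBFNN and L-GEM bound are replaced here by an MLP and a simple Monte Carlo perturbation estimate of per-sample output fluctuation, so this shows the sensitivity-weighted resampling idea, not L-GEM itself.

```python
# Sensitivity-weighted resampling against label-flip poisoning:
# samples whose output fluctuates under small input perturbations are
# suspected attacks and receive low importance when resampling.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y[:20] = 1 - y[:20]                      # simulated label-flip poisoning

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)

def sensitivity(clf, X, eps=0.05, n=20):
    # Mean fluctuation of P(class 1) under small Gaussian input noise.
    p0 = clf.predict_proba(X)[:, 1]
    diffs = [np.abs(clf.predict_proba(X + rng.normal(scale=eps, size=X.shape))[:, 1] - p0)
             for _ in range(n)]
    return np.mean(diffs, axis=0)

sens = sensitivity(clf, X)
weights = 1.0 / (1e-6 + sens)            # low sensitivity -> high importance
weights /= weights.sum()

# Resample by importance and retrain on the cleaner set.
idx = rng.choice(len(X), size=len(X), replace=True, p=weights)
clf_robust = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf_robust.fit(X[idx], y[idx])
```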
Establishing security and trust is the first step toward development in both real and virtual societies, and internet-based development is inevitable. The increasing penetration of internet banking technology, and its contribution to banking profitability and prosperity, requires that satisfied customers be turned into loyal ones. Currently, a large number of cyber attacks target online banking systems, and these attacks constitute a significant security threat: banks or customers may fall victim to the most sophisticated form of financial crime, internet fraud. This study develops an intelligent system for detecting a user's abnormal behavior in online banking. Since user behavior is subject to uncertainty, the system is based on fuzzy theory, which enables it to identify user behaviors and categorize suspicious ones at various levels of intensity. The performance of the fuzzy expert system has been evaluated using a receiver operating characteristic (ROC) curve, yielding an accuracy of 94%. This expert system is a promising means of improving the security and quality of e-banking services.
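A minimal sketch of fuzzy suspicion scoring plus ROC evaluation. The membership function, the transaction-amount feature, and all thresholds are invented for illustration; the paper's actual rule base and data are not public here.

```python
# Fuzzy "suspicious amount" membership drives an anomaly score that is
# evaluated with ROC AUC, as in the abstract's evaluation setup.
import numpy as np
from sklearn.metrics import roc_auc_score

def tri(x, a, b, c):
    # Triangular membership function on [a, c], peaking at b.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

rng = np.random.default_rng(0)
amount = np.concatenate([rng.normal(50, 15, 900),    # normal transactions
                         rng.normal(400, 80, 100)])  # fraudulent ones
labels = np.concatenate([np.zeros(900), np.ones(100)])

# Degree of membership in "suspicious amount" is the suspicion score.
score = tri(amount, 100, 400, 1000)
print(f"AUC = {roc_auc_score(labels, score):.3f}")
```

A real system would combine several such memberships (amount, time of day, location, frequency) through fuzzy rules before defuzzifying into a single suspicion level.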
The deployment and operation of global network architectures can exhibit complex, dynamic behavior, and comprehensive validation of their properties, without actually building and running the systems, can only be achieved with the help of simulations. Packet-level models are not feasible at Internet scale, yet we are still interested in the phenomena that emerge when the systems run in their intended environment. We argue for a high-level simulation methodology and introduce a simulation environment based on aggregate models built from state-of-the-art datasets while respecting invariants observed in measurements. The models are aimed at studying a clean-slate name-based interdomain routing architecture; they provide an abundance of parameters for sensitivity analysis and a modular design with a balanced level of detail across the different aspects of the model. In addition to introducing several reusable models for traffic, topology, and deployment, we report our experiences with the high-level simulation approach and the potential pitfalls related to it.
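A generic illustration of the aggregate (non-packet-level) modeling style argued for above: deployment of a new architecture modeled as a parameterized adoption curve over ASes, with the parameters exposed for sensitivity sweeps. The functional form, parameter names, and values are assumptions for illustration, not the paper's models.

```python
# Aggregate deployment model: a logistic adoption curve replaces
# per-packet simulation, exposing parameters for sensitivity analysis.
import numpy as np

def adoption(t, total_ases=70_000, rate=0.08, midpoint=60):
    # Number of ASes deploying the architecture by month t.
    return total_ases / (1.0 + np.exp(-rate * (t - midpoint)))

months = np.arange(0, 120)
for rate in (0.05, 0.08, 0.12):          # sensitivity sweep on one parameter
    curve = adoption(months, rate=rate)
    print(f"rate={rate}: deployed after 10 years = {curve[-1]:,.0f} ASes")
```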
Application domains in which early performance evaluation is needed are becoming more complex. In addition to traditional sources of complexity, for example the number of components, their interactions, and complicated control and coordination schemes, emerging applications may require adaptive response and reconfiguration in response to externally observable (security) parameters. In this paper we introduce an approach for effective modeling and analysis of performance and security tradeoffs. The approach identifies a suitable allocation of resources that meets performance requirements while maximizing measurable security effects. We demonstrate this approach through the analysis of the performance sensitivity of a Border Inspection Management System (BIMS) under changing security mechanisms (e.g. biometric system parameters for passenger identification). The final result is a model-based approach that allows us to make decisions about BIMS performance and security mechanisms on the basis of traveler arrival rates and traveler identification security guarantees. We describe the experience gained when applying this approach to the daily flight arrival schedule of a real airport.
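A back-of-the-envelope illustration of the performance/security trade-off being analyzed: stricter biometric checks lengthen inspection time, and an M/M/c queue converts that into traveler waiting time. The arrival rate, service times, booth count, and the M/M/c abstraction are assumptions for illustration, not the paper's model.

```python
# M/M/c waiting time vs. inspection strictness: stronger security
# mechanisms mean longer service times and thus longer queues.
import math

def erlang_c_wait(lam, mu, c):
    # Mean queueing delay Wq for an M/M/c queue (requires lam < c*mu).
    a = lam / mu                          # offered load in Erlangs
    pw_num = (a ** c / math.factorial(c)) * (c / (c - a))
    pw_den = sum(a ** k / math.factorial(k) for k in range(c)) + pw_num
    p_wait = pw_num / pw_den              # Erlang C: probability of waiting
    return p_wait / (c * mu - lam)

lam = 120 / 60.0                          # 120 travelers/hour -> per minute
for sec_level, service_min in [("basic", 1.0), ("biometric", 1.5), ("full", 2.0)]:
    mu = 1.0 / service_min                # inspections per booth per minute
    wq = erlang_c_wait(lam, mu, c=5)      # 5 inspection booths assumed
    print(f"{sec_level:9s}: mean wait = {wq:5.2f} min")
```

Sweeping arrival rate (from the flight schedule) and booth count against security level is exactly the kind of resource-allocation question the model-based approach answers.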
This paper proposes and describes an active authentication model based on user profiles built from user-issued commands when interacting with a GUI-based application. Previous behavioral models derived from user-issued commands were limited to analyzing the user's interaction with the *Nix (Linux or Unix) command shell. Human-computer interaction (HCI) research has explored the idea of building user profiles from behavioral patterns when interacting with graphical interfaces, but it did so by analyzing keystroke and/or mouse dynamics; none of that work explored creating profiles from users' usage characteristics when interacting with a specific application, beyond how a user strikes the keyboard or moves the mouse across the screen. We obtain and utilize a dataset of user command streams collected while working with Microsoft (MS) Word to serve as a test bed. User profiles are first built using MS Word commands, and identification then takes place using machine learning algorithms. The best performance in terms of both accuracy and Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve is obtained using Random Forests (RF) and AdaBoost with random forests.
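A sketch of the profiling pipeline described above: sessions become bag-of-commands feature vectors and a Random Forest identifies the user. The command vocabulary and the synthetic per-user sessions are stand-ins for the MS Word command-stream dataset, which is not reproduced here.

```python
# User identification from command streams: bag-of-commands features
# per session, Random Forest classifier, accuracy on held-out sessions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
commands = ["Bold", "Paste", "Save", "FindReplace", "InsertTable", "Undo"]

def session(pref):
    # A session is a space-joined command stream, biased per user.
    return " ".join(rng.choice(commands, size=50, p=pref))

prefs = {0: [.3, .2, .2, .1, .1, .1], 1: [.1, .1, .1, .3, .2, .2]}
docs = [session(prefs[u]) for u in (0, 1) for _ in range(100)]
users = [u for u in (0, 1) for _ in range(100)]

X = CountVectorizer().fit_transform(docs)          # bag-of-commands features
X_tr, X_te, y_tr, y_te = train_test_split(X, users, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy = {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```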