Biblio

Filters: Keyword is machine learning
2019-09-13
Madni, Azad, Madni, Carla.  2018.  Architectural Framework for Exploring Adaptive Human-Machine Teaming Options in Simulated Dynamic Environments. Systems. 6:44.

With the growing complexity of environments in which systems are expected to operate, adaptive human-machine teaming (HMT) has emerged as a key area of research. While human teams have been extensively studied in the psychological and training literature, and agent teams have been investigated in the artificial intelligence research community, sustained research on HMT is relatively new and is fueled by several technological advances such as electrophysiological sensors, cognitive modeling, machine learning, and adaptive/adaptable human-machine systems. This paper presents an architectural framework for investigating HMT options in various simulated operational contexts, including responding to systemic failures and external disruptions. The paper specifically discusses novel roles for machines made possible by new technology and offers key insights into adaptive human-machine teams. Landed aircraft perimeter security is used as an illustrative example of an adaptive cyber-physical-human system (CPHS). This example illuminates how the HMT framework identifies the different human and machine roles involved in the scenario. The framework is domain-independent and can be applied to both defense and civilian adaptive HMT. The paper concludes with recommendations for advancing the state of the art in HMT.

2019-09-12
Patricia L. McDermott, Cynthia O. Dominguez, Nicholas Kasdaglis, Matthew H. Ryan, Isabel M. Trahan, Alexander Nelson.  2018.  Human-Machine Teaming Systems Engineering Guide.

With the explosion of Automation, Autonomy, and AI technology development today, amid encouragement to put humans at the center of AI, systems engineers and user story/requirements developers need research-based guidance on how to design for human-machine teaming (HMT). Insights from more than two decades of human-automation interaction research, applied in the systems engineering process, provide building blocks for designing automation, autonomy, and AI-based systems that are effective teammates for people.

The HMT Systems Engineering Guide provides this guidance based on a 2016-17 literature search and analysis of applied research. The guide provides a framework organizing HMT research, along with methodology for engaging with users of a system to elicit user stories and/or requirements that reflect applied research findings. The framework uses organizing themes of Observability, Predictability, Directing Attention, Exploring the Solution Space, Directability, Adaptability, Common Ground, Calibrated Trust, Design Process, and Information Presentation.

The guide includes practice-oriented resources that can be used to bridge the gap between research and design, including a tailorable HMT Knowledge Audit interview methodology, step-by-step instructions for planning and conducting data collection sessions, and a set of general cognitive interface requirements that can be adapted to specific applications based upon domain-specific data collected. 

2019-09-10
[Anonymous].  2019.  From viruses to social bots, researchers unearth the structure of attacked networks. Science Daily.

A machine learning model of the protein interaction network has been developed by researchers to explore how viruses operate. This research can be applied to different types of attacks and network models across different fields, including network security. The capacity to determine how trolls and bots influence users on social media platforms has also been explored through this research.

2018-08-06
B. Biggio, G. Fumera, P. Russu, L. Didaci, F. Roli.  2015.  Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine. 32:31-41.

In this article, we review previous work on biometric security under a recent framework proposed in the field of adversarial machine learning. This allows us to highlight novel insights on the security of biometric systems when operating in the presence of intelligent and adaptive attackers that manipulate data to compromise normal system operation. We show how this framework enables the categorization of known and novel vulnerabilities of biometric recognition systems, along with the corresponding attacks, countermeasures, and defense mechanisms. We report two application examples, respectively showing how to fabricate a more effective face spoofing attack, and how to counter an attack that exploits an unknown vulnerability of an adaptive face-recognition system to compromise its face templates.

N. D. Truong, J. Y. Haw, S. M. Assad, P. K. Lam, O. Kavehei.  2019.  Machine Learning Cryptanalysis of a Quantum Random Number Generator. IEEE Transactions on Information Forensics and Security. 14:403-414.

Random number generators (RNGs) that are crucial for cryptographic applications have been the subject of adversarial attacks. These attacks exploit environmental information to predict generated random numbers that are supposed to be truly random and unpredictable. Though quantum random number generators (QRNGs) are based on the intrinsic indeterministic nature of quantum properties, the presence of classical noise in the measurement process compromises the integrity of a QRNG. In this paper, we develop a predictive machine learning (ML) analysis to investigate the impact of deterministic classical noise in different stages of an optical continuous variable QRNG. Our ML model successfully detects inherent correlations when the deterministic noise sources are prominent. After appropriate filtering and randomness extraction processes are introduced, our QRNG system, in turn, demonstrates its robustness against ML. We further demonstrate the robustness of our ML approach by applying it to uniformly distributed random numbers from the QRNG and a congruential RNG. Hence, our results show that ML has potential for benchmarking the quality of RNG devices.
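
The core idea of the abstract above — a predictive model that flags deterministic structure in an RNG's output but finds none in truly random data — can be sketched minimally. The 8-bit LCG parameters and the lookup-table predictor below are illustrative assumptions, not the paper's optical QRNG setup or ML model:

```python
import os
from collections import Counter, defaultdict

# Hypothetical weak source: an 8-bit linear congruential generator (LCG).
# Parameters satisfy the Hull-Dobell conditions, so it has full period 256.
def lcg_bytes(n, seed=1, a=5, c=3, m=256):
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# Minimal predictive model: for each observed byte, predict the byte that
# most often followed it in the training stream.
def fit_predictor(stream):
    table = defaultdict(Counter)
    for prev, nxt in zip(stream, stream[1:]):
        table[prev][nxt] += 1
    return {prev: cnt.most_common(1)[0][0] for prev, cnt in table.items()}

def accuracy(model, stream):
    pairs = list(zip(stream, stream[1:]))
    hits = sum(1 for prev, nxt in pairs if model.get(prev) == nxt)
    return hits / len(pairs)

weak = lcg_bytes(2000)
strong = list(os.urandom(2000))  # stand-in for a well-extracted QRNG stream

weak_acc = accuracy(fit_predictor(weak[:1600]), weak[1600:])
strong_acc = accuracy(fit_predictor(strong[:1600]), strong[1600:])

print(f"LCG prediction accuracy:     {weak_acc:.2f}")    # near 1.00: fully predictable
print(f"urandom prediction accuracy: {strong_acc:.2f}")  # near chance, about 1/256
```

Any accuracy meaningfully above chance is evidence of exploitable correlations; the paper applies the same logic, with far stronger models, to raw versus post-processed QRNG output.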

Z. Abaid, M. A. Kaafar, S. Jha.  2017.  Quantifying the impact of adversarial evasion attacks on machine learning based android malware classifiers. 2017 IEEE 16th International Symposium on Network Computing and Applications (NCA). :1-10.

With the proliferation of Android-based devices, malicious apps have increasingly found their way to user devices. Many solutions for Android malware detection rely on machine learning; although effective, these are vulnerable to attacks from adversaries who wish to subvert these algorithms and allow malicious apps to evade detection. In this work, we present a statistical analysis of the impact of adversarial evasion attacks on various linear and non-linear classifiers, using a recently proposed Android malware classifier as a case study. We systematically explore the complete space of possible attacks varying in the adversary's knowledge about the classifier; our results show that it is possible to subvert linear classifiers (Support Vector Machines and Logistic Regression) by perturbing only a few features of malicious apps, with more knowledgeable adversaries degrading the classifier's detection rate from 100% to 0% and a completely blind adversary able to lower it to 12%. We show non-linear classifiers (Random Forest and Neural Network) to be more resilient to these attacks. We conclude our study with recommendations for designing classifiers to be more robust to the attacks presented in our work.
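
The white-box attack described above — an adversary with full knowledge of a linear classifier perturbing a few features to cross its decision boundary — can be illustrated with a toy model. The feature names, weights, and "add-only" constraint below are invented for this sketch and are not taken from the paper's case-study classifier:

```python
# Illustrative linear (logistic-regression-style) malware classifier over
# binary app features. Weights are hypothetical: positive = malicious-indicative.
WEIGHTS = {
    "sends_sms_to_premium_number": 2.5,
    "requests_admin_privileges":   1.8,
    "reads_contacts":              0.9,
    "uses_https":                 -0.7,
    "has_privacy_policy":         -1.1,
    "signed_by_known_developer":  -1.6,
}
BIAS = -1.0

def score(features):
    # Positive score => classified malicious (decision boundary at 0).
    return sum(WEIGHTS[f] for f in features) + BIAS

def evade(features, addable):
    """White-box greedy evasion: add benign-looking features (most negative
    weight first) until the sample crosses the decision boundary."""
    features = set(features)
    for f in sorted(addable, key=lambda f: WEIGHTS[f]):
        if score(features) <= 0:
            break
        features.add(f)
    return features

malicious = {"sends_sms_to_premium_number", "requests_admin_privileges"}
# The adversary only ADDs features: removing them would break the malware.
evaded = evade(malicious, addable={"uses_https", "has_privacy_policy",
                                   "signed_by_known_developer"})

print(score(malicious) > 0)  # True: the original app is flagged
print(score(evaded) > 0)     # False: three added features evade detection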