Biblio

2019-09-06
Lily Hay Newman.  2019.  To Fight Deepfakes, Researchers Built a Smarter Camera. Wired.

This article pertains to cognitive security. Detecting manipulated photos, or "deepfakes," can be difficult. Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide.

Lily Hay Newman.  2019.  Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts. Wired.

This article pertains to cognitive security and human behavior. Facebook announced a recent takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups and three Instagram accounts that it says were all involved in coordinated "inauthentic behavior." Facebook says the activity originated geographically from Iran.

Pawel Korus, Nasir Memon.  2019.  Outsmarting deep fakes: AI-driven imaging system protects authenticity. Science Daily.

Researchers at the NYU Tandon School of Engineering developed a technique to counter the sophisticated alteration of photos and videos used to produce deep fakes, which are often weaponized to influence people. The technique uses artificial intelligence (AI) to determine the authenticity of images and videos.

2019-09-05
Jessica Barker.  2016.  How to Hack a Human.

Dr. Jessica Barker gave a presentation in which she discussed the elements of human nature and social norms that lead humans to fall victim to social engineering attacks. She also emphasized the importance of strengthening workplace cybersecurity culture to encourage good cybersecurity behaviors.

[Anonymous].  2018.  The Human Factor: People-Centered Threats Define the Landscape.

Proofpoint's 2018 report, titled The Human Factor: People-Centered Threats Define the Landscape, highlights the increased use of social engineering by attackers over automated exploits. Cyber criminals are increasingly depending on human interaction to launch attacks. Human instincts such as curiosity and trust are often abused to lead unsuspecting users into revealing sensitive information and giving attackers access to systems.

2018-08-06
B. Biggio, G. Fumera, P. Russu, L. Didaci, F. Roli.  2015.  Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine. 32:31-41.

In this article, we review previous work on biometric security under a recent framework proposed in the field of adversarial machine learning. This allows us to highlight novel insights on the security of biometric systems when operating in the presence of intelligent and adaptive attackers that manipulate data to compromise normal system operation. We show how this framework enables the categorization of known and novel vulnerabilities of biometric recognition systems, along with the corresponding attacks, countermeasures, and defense mechanisms. We report two application examples, respectively showing how to fabricate a more effective face spoofing attack, and how to counter an attack that exploits an unknown vulnerability of an adaptive face-recognition system to compromise its face templates.

Khan, Saad, Parkinson, Simon.  2017.  Causal Connections Mining Within Security Event Logs. Proceedings of the Knowledge Capture Conference. :38:1–38:4.
Kumar, Rajesh, Xiaosong, Zhang, Khan, Riaz Ullah, Kumar, Jay, Ahad, Ijaz.  2018.  Effective and Explainable Detection of Android Malware Based on Machine Learning Algorithms. Proceedings of the 2018 International Conference on Computing and Artificial Intelligence. :35–40.
N. D. Truong, J. Y. Haw, S. M. Assad, P. K. Lam, O. Kavehei.  2019.  Machine Learning Cryptanalysis of a Quantum Random Number Generator. IEEE Transactions on Information Forensics and Security. 14:403-414.
Random number generators (RNGs) that are crucial for cryptographic applications have been the subject of adversarial attacks. These attacks exploit environmental information to predict generated random numbers that are supposed to be truly random and unpredictable. Though quantum random number generators (QRNGs) are based on the intrinsic indeterministic nature of quantum properties, the presence of classical noise in the measurement process compromises the integrity of a QRNG. In this paper, we develop a predictive machine learning (ML) analysis to investigate the impact of deterministic classical noise in different stages of an optical continuous variable QRNG. Our ML model successfully detects inherent correlations when the deterministic noise sources are prominent. After appropriate filtering and randomness extraction processes are introduced, our QRNG system, in turn, demonstrates its robustness against ML. We further demonstrate the robustness of our ML approach by applying it to uniformly distributed random numbers from the QRNG and a congruential RNG. Hence, our results show that ML has potential for benchmarking the quality of RNG devices.
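The predictive analysis described in this abstract can be illustrated with a toy sketch: a k-gram majority-vote predictor (a deliberately minimal stand-in for the paper's ML model) scores well above chance on a correlated bit stream and stays near 50% on a uniform one. The parameters and stream choices below are illustrative, not from the paper.

```python
import random
from collections import defaultdict

def train_predictor(bits, k=3):
    # Count, for each k-bit context, how often the next bit is 1,
    # then keep the majority vote per context as the prediction.
    counts = defaultdict(lambda: [0, 0])
    for i in range(len(bits) - k):
        ctx = tuple(bits[i:i + k])
        counts[ctx][bits[i + k]] += 1
    return {ctx: (1 if c[1] >= c[0] else 0) for ctx, c in counts.items()}

def accuracy(model, bits, k=3):
    # Fraction of next-bit predictions the model gets right.
    hits = total = 0
    for i in range(len(bits) - k):
        pred = model.get(tuple(bits[i:i + k]), 0)
        hits += (pred == bits[i + k])
        total += 1
    return hits / total
```

On a strongly correlated stream (e.g. alternating bits) the predictor approaches 100% accuracy, signalling that the source is not truly random; on a seeded uniform stream it hovers around 50%, which is the behavior a well-filtered QRNG should exhibit.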
Y. Cao, J. Yang.  2015.  Towards Making Systems Forget with Machine Unlearning. 2015 IEEE Symposium on Security and Privacy. :463-480.
Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations – asymptotically faster than retraining from scratch. Our approach is general, because the summation form comes from statistical query learning, in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
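The summation idea in this abstract can be sketched with a toy Bernoulli naive Bayes whose sufficient statistics are plain sums; the class and method names below are illustrative, not from the paper. Because the model depends on the data only through those sums, forgetting a sample is just subtracting its contribution.

```python
# Hedged sketch of summation-form unlearning: learning and unlearning
# touch only per-class counts and per-class feature sums.
class UnlearnableNB:
    def __init__(self, n_features):
        self.n_features = n_features
        self.class_count = {0: 0, 1: 0}
        self.feature_sum = {0: [0] * n_features, 1: [0] * n_features}

    def learn(self, x, y):
        self.class_count[y] += 1
        for j, v in enumerate(x):
            self.feature_sum[y][j] += v

    def unlearn(self, x, y):
        # Forgetting = subtracting the sample's contribution from the
        # sums, asymptotically cheaper than retraining from scratch.
        self.class_count[y] -= 1
        for j, v in enumerate(x):
            self.feature_sum[y][j] -= v

    def prob_feature(self, y, j):
        # Laplace-smoothed estimate of P(x_j = 1 | y).
        return (self.feature_sum[y][j] + 1) / (self.class_count[y] + 2)
```

After `learn` followed by `unlearn` of the same sample, the statistics are identical to those of a model that never saw it, which is the "complete and quick" forgetting the paper envisions.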
Z. Abaid, M. A. Kaafar, S. Jha.  2017.  Quantifying the impact of adversarial evasion attacks on machine learning based android malware classifiers. 2017 IEEE 16th International Symposium on Network Computing and Applications (NCA). :1-10.
With the proliferation of Android-based devices, malicious apps have increasingly found their way to user devices. Many solutions for Android malware detection rely on machine learning; although effective, these are vulnerable to attacks from adversaries who wish to subvert these algorithms and allow malicious apps to evade detection. In this work, we present a statistical analysis of the impact of adversarial evasion attacks on various linear and non-linear classifiers, using a recently proposed Android malware classifier as a case study. We systematically explore the complete space of possible attacks varying in the adversary's knowledge about the classifier; our results show that it is possible to subvert linear classifiers (Support Vector Machines and Logistic Regression) by perturbing only a few features of malicious apps, with more knowledgeable adversaries degrading the classifier's detection rate from 100% to 0% and a completely blind adversary able to lower it to 12%. We show non-linear classifiers (Random Forest and Neural Network) to be more resilient to these attacks. We conclude our study with recommendations for designing classifiers to be more robust to the attacks presented in our work.
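The finding that linear classifiers can be subverted by perturbing only a few features can be sketched as follows. A white-box attacker who knows the weight vector greedily removes the present features with the largest positive weights; the weights and budget below are made up for illustration and are not the paper's classifier.

```python
# Hedged sketch: evading a linear (SVM/logistic-style) malware score
# w·x + b > 0 by flipping a small number of binary features.
def score(w, b, x):
    # Linear decision function: positive means "malicious".
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, b, x, budget):
    """Greedily zero out set features with the largest positive weights
    (a fully knowledgeable adversary), up to `budget` changes."""
    x = list(x)
    candidates = sorted(
        (j for j in range(len(x)) if x[j] == 1 and w[j] > 0),
        key=lambda j: w[j], reverse=True)
    for j in candidates[:budget]:
        x[j] = 0  # e.g. remove or obfuscate the API call this feature encodes
        if score(w, b, x) < 0:
            break
    return x
```

Because a linear model's score decomposes feature by feature, removing the top-weighted features is enough to cross the decision boundary; non-linear models such as random forests lack this direct decomposition, which is consistent with the paper's observation that they are more resilient.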
L. Chen, Y. Ye, T. Bourlai.  2017.  Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense. 2017 European Intelligence and Security Informatics Conference (EISIC). :99-106.
Since malware has caused serious damage and poses evolving threats to computer and Internet users, its detection is of great interest to both the anti-malware industry and researchers. In recent years, machine learning-based systems have been successfully deployed in malware detection, in which different kinds of classifiers are built based on the training samples using different feature representations. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore adversarial machine learning in malware detection. In particular, on the basis of a learning-based classifier with the input of Windows Application Programming Interface (API) calls extracted from the Portable Executable (PE) files, we present an effective evasion attack model (named EvnAttack) by considering different contributions of the features to the classification problem. To be resilient against the evasion attack, we further propose a secure-learning paradigm for malware detection (named SecDefender), which not only adopts a classifier retraining technique but also introduces a security regularization term that considers the evasion cost of feature manipulations by attackers to enhance system security. Comprehensive experimental results on real sample collections from the Comodo Cloud Security Center demonstrate the effectiveness of our proposed methods.
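The idea of a security regularization term tied to evasion cost can be illustrated generically; the sketch below is not the paper's SecDefender formulation, just a cost-sensitive L2 penalty on a tiny logistic regression: features that are cheap for an attacker to manipulate receive a larger penalty, so the classifier relies on them less. All numbers are illustrative.

```python
import math

def fit_logreg(X, y, penalties, lr=0.1, steps=2000):
    """Gradient-descent logistic regression with a per-feature L2
    penalty; penalties[j] is larger for cheap-to-manipulate features."""
    n = len(X[0])
    w = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for x, t in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j in range(n):
                grad[j] += (p - t) * x[j]
        for j in range(n):
            # Standard logistic loss gradient plus the per-feature decay.
            w[j] -= lr * (grad[j] / len(X) + penalties[j] * w[j])
    return w
```

With two perfectly redundant features, the heavily penalized (cheap-to-flip) one ends up with the smaller weight, so flipping it gains the attacker less, which is the intuition behind raising the evasion cost through regularization.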