Biblio

Filters: Author is Baluta, Teodora
2022-03-15
Baluta, Teodora, Chua, Zheng Leong, Meel, Kuldeep S., Saxena, Prateek.  2021.  Scalable Quantitative Verification for Deep Neural Networks. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :248–249.
Despite the functional success of deep neural networks (DNNs), their trustworthiness remains a crucial open challenge. To address this challenge, both testing and verification techniques have been proposed. However, existing techniques provide either scalability to large networks or formal guarantees, not both. In this paper, we propose a scalable quantitative verification framework for deep neural networks, i.e., a test-driven approach that comes with formal guarantees that a desired probabilistic property is satisfied. Our technique performs tests until the soundness of a formal probabilistic property can be proven. It can be used to certify properties of both deterministic and randomized DNNs. We implement our approach in a tool called PROVERO and apply it in the context of certifying adversarial robustness of DNNs. In this context, we first present a new attack-agnostic measure of robustness, which offers an alternative to the purely attack-based methodologies of evaluating robustness reported today. Second, PROVERO provides certificates of robustness for large DNNs where existing state-of-the-art verification tools fail to produce conclusive results. Our work paves the way forward for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees, even where testers only have black-box access to the neural network.
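The test-driven certification idea in the abstract — sample a black-box network until a probabilistic property can be soundly accepted or rejected — can be illustrated with a simplified sketch. The sketch below uses a fixed-sample-size Hoeffding bound as a stand-in for PROVERO's actual (adaptive) hypothesis-testing algorithm; the function names, parameters, and toy oracle are hypothetical, not the tool's API.

```python
import math
import random

def certify_probability(sample_property, theta, eps, delta):
    """Decide whether P[property holds] >= theta, with estimation
    tolerance eps and confidence 1 - delta, using only black-box
    samples. Simplified Hoeffding-bound stand-in for PROVERO's
    adaptive testing; all names here are illustrative."""
    # Hoeffding: n = ln(2/delta) / (2*eps^2) samples guarantee
    # |p_hat - p| <= eps with probability at least 1 - delta.
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    successes = sum(sample_property() for _ in range(n))
    p_hat = successes / n
    if p_hat - eps >= theta:
        return "certified"    # p >= theta holds with confidence 1 - delta
    if p_hat + eps <= theta:
        return "refuted"      # p < theta holds with confidence 1 - delta
    return "inconclusive"     # estimate too close to the threshold

# Toy usage: a black-box "network" whose property holds ~97% of the time,
# certified against a 90% robustness threshold.
random.seed(0)
result = certify_probability(lambda: random.random() < 0.97,
                             theta=0.9, eps=0.02, delta=0.01)
```

Note the design choice that makes this black-box: the verifier never inspects the network's weights, only the pass/fail outcome of each sampled test, which is what allows the approach to scale to networks that white-box verifiers cannot handle.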
2018-11-19
Baluta, Teodora, Ramapantulu, Lavanya, Teo, Yong Meng, Chang, Ee-Chien.  2017.  Modeling the Effects of Insider Threats on Cybersecurity of Complex Systems. Proceedings of the 2017 Winter Simulation Conference. :362:1–362:12.
With an increasing number of cybersecurity attacks due to insider threats, it is important to identify different attack mechanisms and quantify them to ease threat mitigation. We propose a discrete-event simulation model to study the impact of unintentional insider threats on overall system security by representing time-varying human behavior using two parameters: user vulnerability and user interactions. In addition, the proposed approach determines the future impact of such behavior on overall system health. We illustrate the ease of applying the proposed simulation model to explore several "what-if" analyses for an example enterprise system and derive the following useful insights: (i) user vulnerability has a bigger impact on overall system health than user interactions, (ii) the impact of user vulnerability depends on the system topology, and (iii) user interactions increase the overall system vulnerability due to the increase in the number of attack paths via credential leakage.
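The two behavioral parameters from the abstract — user vulnerability and user interactions — can be sketched in a minimal simulation. The toy model below is an illustration of the general idea, not the paper's actual simulator: users are nodes, `interactions` edges model credential sharing, `user_vuln` gives each user's per-step probability of an unintentional slip, and the fraction of compromised users serves as a crude proxy for system health. All names and parameter values are assumptions.

```python
import random

def simulate_insider_threat(interactions, user_vuln, steps=100, seed=1):
    """Toy discrete-time sketch (illustrative, not the paper's model):
    each step, every uncompromised user u falls to an unintentional
    insider slip with probability user_vuln[u]; compromise then spreads
    along interaction edges, modeling credential leakage. Returns the
    fraction of compromised users."""
    rng = random.Random(seed)
    compromised = set()
    for _ in range(steps):
        # Unintentional slips, driven by per-user vulnerability.
        for u, p in user_vuln.items():
            if u not in compromised and rng.random() < p:
                compromised.add(u)
        # Credential leakage along interaction (sharing) edges.
        for u in list(compromised):
            compromised.update(interactions.get(u, ()))
    return len(compromised) / len(user_vuln)

# Toy topology: four users, one highly vulnerable well-connected user "a".
interactions = {"a": ["b", "c"], "b": [], "c": ["d"], "d": []}
vuln = {"a": 0.05, "b": 0.001, "c": 0.001, "d": 0.001}
frac = simulate_insider_threat(interactions, vuln)
```

Varying the interaction graph while holding vulnerabilities fixed (and vice versa) is the kind of "what-if" exploration the abstract describes: more interaction edges create more leakage paths, so the same per-user vulnerability yields a larger compromised fraction.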