Biblio

Kantarcioglu, Murat, Shaon, Fahad.  2019.  Securing Big Data in the Age of AI. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :218–220.

Organizations are increasingly collecting ever larger amounts of data to build complex data analytics, machine learning, and AI models. The data needed for building such models may be unstructured (e.g., text, images, and video) and may therefore be stored in different data management systems, ranging from relational databases to newer NoSQL databases tailored for unstructured data. In addition, data scientists increasingly process data with programming languages such as Python and R using many existing libraries, and in some cases the developed code is executed automatically by the NoSQL system on the stored data. These developments indicate the need for a data security and privacy solution that can uniformly protect data stored across many different data management systems and enforce security policies even when sensitive data is processed by a complex program submitted by a data scientist. In this paper, we introduce our vision for building such a solution for protecting big data. Specifically, the proposed system allows organizations to 1) enforce policies that control access to sensitive data, 2) automatically keep the audit logs needed for data governance and regulatory compliance, 3) sanitize and redact sensitive data on-the-fly based on data sensitivity and AI model needs, 4) detect potentially unauthorized or anomalous access to sensitive data, and 5) automatically create attribute-based access control policies based on data sensitivity and data type.
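
The abstract describes the system at the vision level only; as a rough, hypothetical sketch of items 1) and 3), the Python fragment below shows one way attribute-based access control with on-the-fly redaction could look. The Policy and redact names, the role strings, and the integer sensitivity levels are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of attribute-based access control (ABAC) with
# on-the-fly redaction, in the spirit of items 1) and 3) above.
# None of these names come from the paper's system.
from dataclasses import dataclass


@dataclass
class Policy:
    required_role: str    # role the requester must hold
    max_sensitivity: int  # highest sensitivity level released in the clear


def redact(record: dict, sensitivity: dict, policy: Policy, role: str) -> dict:
    """Return the record with over-sensitive fields masked for this requester."""
    if role != policy.required_role:
        raise PermissionError(f"role {role!r} may not access this collection")
    return {
        field: (value if sensitivity.get(field, 0) <= policy.max_sensitivity
                else "<REDACTED>")
        for field, value in record.items()
    }


policy = Policy(required_role="data_scientist", max_sensitivity=1)
record = {"age": 34, "zip": "75080", "ssn": "123-45-6789"}
levels = {"age": 0, "zip": 1, "ssn": 2}  # 2 = most sensitive (assumed scale)
print(redact(record, levels, policy, role="data_scientist"))
# -> {'age': 34, 'zip': '75080', 'ssn': '<REDACTED>'}
```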

Giraldo, Jairo, Cardenas, Alvaro, Kantarcioglu, Murat.  2017.  Security and Privacy Trade-Offs in CPS by Leveraging Inherent Differential Privacy. 2017 IEEE Conference on Control Technology and Applications (CCTA). :1313–1318.
Cyber-physical systems are subject to natural uncertainties and sensor noise that can be amplified or attenuated due to feedback. In this work, we leverage these properties to define the inherent differential privacy of feedback-control systems, without adding external differential privacy noise. If stronger privacy guarantees are required, we introduce a methodology for adding an external differential privacy mechanism that injects the minimum amount of noise needed. We also show how the combination of inherent and external noise affects system security, in terms of the impact that integrity attacks can impose on the system while remaining undetected. We formulate a bilevel optimization problem to redesign the control parameters so as to minimize the attack impact for a desired level of inherent privacy.
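
The abstract gives no formulas. A minimal sketch of the "inject the minimum amount of noise that is needed" idea, assuming the inherent sensor noise is Gaussian and targeting the standard (epsilon, delta) Gaussian mechanism, is shown below: independent Gaussian variances add, so only the variance shortfall has to be injected externally. The calibration formula is the textbook Gaussian-mechanism bound, not a result from the paper.

```python
# Hypothetical sketch: top up inherent Gaussian sensor noise so that the
# total variance meets an (epsilon, delta) Gaussian-mechanism target.
# Independent Gaussian variances add, so only the shortfall is injected.
import math
import random


def required_sigma(sensitivity: float, epsilon: float, delta: float) -> float:
    """Textbook Gaussian-mechanism calibration (Dwork and Roth)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon


def external_sigma(inherent_sigma: float, sensitivity: float,
                   epsilon: float, delta: float) -> float:
    """Minimum extra standard deviation to add on top of the inherent noise."""
    target = required_sigma(sensitivity, epsilon, delta)
    return math.sqrt(max(0.0, target ** 2 - inherent_sigma ** 2))


# Example: sensor noise with sigma = 0.5 already covers part of the budget.
sigma_ext = external_sigma(inherent_sigma=0.5, sensitivity=1.0,
                           epsilon=1.0, delta=1e-5)
published = 3.2 + random.gauss(0.0, sigma_ext)  # measurement released
print(f"external sigma = {sigma_ext:.3f}, published value = {published:.3f}")
```
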
Kantarcioglu, Murat, Xi, Bowei.  2016.  Adversarial Data Mining: Big Data Meets Cyber Security. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1866–1867.

As ever more cyber security incident data, ranging from system logs to vulnerability scan results, is collected, manually analyzing it to detect important cyber security events becomes infeasible. Hence, data mining techniques are becoming an essential tool for real-world cyber security applications. For example, a report from Gartner [gartner12] claims that "Information security is becoming a big data analytics problem, where massive amounts of data will be correlated, analyzed and mined for meaningful patterns". Of course, data mining/analytics is a means to an end: the ultimate goal is to provide cyber security analysts with prioritized, actionable insights derived from big data. This raises the question: can we directly apply existing techniques to cyber security applications? One of the most important differences between data mining for cyber security and many other data mining applications is the existence of malicious adversaries who continuously adapt their behavior to hide their actions and to render data mining models ineffective. Unfortunately, traditional data mining techniques are insufficient to handle such adversarial problems directly: the adversaries adapt to the data miner's reactions, and data mining algorithms built on a training dataset degrade quickly. To address these concerns, the machine learning and data mining communities have, over the last several years, developed novel data mining techniques that are more resilient to such adversarial behavior. We believe the lessons learned in this research direction will benefit cyber security researchers who are increasingly applying machine learning and data mining techniques in practice. To give an overview of recent developments in adversarial data mining, this three-hour tutorial introduces the foundations, techniques, and applications of adversarial data mining to cyber security. We first introduce various approaches proposed in the past to defend against active adversaries, such as a minimax approach that minimizes the worst-case error through a zero-sum game. We then discuss a game-theoretic framework that models the sequential actions of the adversary and the data miner, while both parties try to maximize their utilities. We also introduce a modified support vector machine method and a relevance vector machine method for defending against active adversaries. Intrusion detection and malware detection are two important application areas for adversarial data mining models and are discussed in detail during the tutorial. Finally, we give practical guidelines on how to use adversarial data mining ideas in generic cyber security applications and how to leverage existing big data management tools when building data mining algorithms for cyber security.
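
As a generic illustration of the minimax idea mentioned above (not the tutorial's own algorithm), the numpy sketch below trains a linear classifier against the worst-case L-infinity perturbation of radius eps; for a linear model such a perturbation simply reduces each margin by eps * ||w||_1, so the worst case can be optimized directly. The constants and toy dataset are made up.

```python
# Generic minimax illustration (not the tutorial's algorithm): train a linear
# classifier against the worst-case L-infinity perturbation of radius eps.
# For w.x + b, the adversary's best move lowers each margin by eps * ||w||_1.
import numpy as np


def robust_hinge_train(X, y, eps=0.2, lr=0.05, lam=0.01, epochs=300):
    """Labels y in {-1, +1}; minimizes the worst-case hinge loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b) - eps * np.abs(w).sum()  # worst-case margin
        active = margins < 1.0                             # violated examples
        grad_w, grad_b = lam * w, 0.0
        if active.any():
            k = active.sum()
            grad_w = grad_w + (-(y[active, None] * X[active]).sum(axis=0)
                               + k * eps * np.sign(w)) / n
            grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b


rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.outer(np.repeat([1, -1], 100), [2.0, 2.0])
y = np.repeat([1, -1], 100)
w, b = robust_hinge_train(X, y)
print(f"clean training accuracy: {(np.sign(X @ w + b) == y).mean():.2f}")
```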

Fernández, Maribel, Kantarcioglu, Murat, Thuraisingham, Bhavani.  2016.  A Framework for Secure Data Collection and Management for Internet of Things. Proceedings of the 2nd Annual Industrial Control System Security Workshop. :30–37.

More and more current industrial control systems (e.g., smart grids, oil and gas systems, connected cars and trucks) can collect and transmit users' data in order to provide services tailored to the specific needs of the customers. Such smart industrial control systems fall into the category of the Internet of Things (IoT). However, in many cases the data transmitted by such IoT devices includes sensitive information, and users face an all-or-nothing choice: either they adopt the proposed services and release their private data, or they refrain from using services that could be beneficial but pose significant privacy risks. Unfortunately, encryption alone does not solve the problem, though techniques to counter these privacy risks are emerging (e.g., applications that alter, merge, or bundle data to ensure it cannot be linked to a particular user). In this paper, we propose a general framework whereby users can not only specify how their data is managed, but also restrict data collection from their connected devices. More precisely, we propose data collection policies to govern the transmission of data from IoT devices, coupled with policies that ensure that, once the data has been transmitted, it is stored and shared securely. To achieve this goal, we have designed a framework for secure data collection, storage, and management, with logical foundations that enable verification of policy properties.
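
A minimal sketch of the data-collection-policy idea, under the assumption that the device consults a user-set policy before transmitting each field: restricted fields never leave the device and others may be coarsened. The CollectionPolicy name and the coarsening rule are illustrative, not the paper's framework.

```python
# Hypothetical sketch: a user-set collection policy filters each reading on
# the device, so restricted fields are never transmitted. Illustrative names,
# not the paper's framework.
from dataclasses import dataclass, field


@dataclass
class CollectionPolicy:
    allowed: set = field(default_factory=set)    # fields that may be sent
    coarsen: dict = field(default_factory=dict)  # field name -> rounding step

    def filter(self, reading: dict) -> dict:
        out = {}
        for name, value in reading.items():
            if name not in self.allowed:
                continue                         # never leaves the device
            step = self.coarsen.get(name)
            out[name] = round(value / step) * step if step else value
        return out


policy = CollectionPolicy(allowed={"temperature", "power_kw"},
                          coarsen={"power_kw": 0.5})  # coarse power readings
reading = {"temperature": 21.7, "power_kw": 3.14, "gps": (32.99, -96.75)}
print(policy.filter(reading))  # gps dropped, power coarsened to 3.0
```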

Xia, Weiyi, Kantarcioglu, Murat, Wan, Zhiyu, Heatherly, Raymond, Vorobeychik, Yevgeniy, Malin, Bradley.  2015.  Process-Driven Data Privacy. Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. :1021–1030.

The quantity of personal data gathered by service providers via our daily activities continues to grow at a rapid pace. The sharing and subsequent analysis of such data can support a wide range of activities, but concerns around privacy often prompt an organization to transform the data to meet certain protection models (e.g., k-anonymity or ε-differential privacy). These models, however, are based on simplistic adversarial frameworks, which can lead to both under- and over-protection. For instance, such models often assume that an adversary attacks a protected record exactly once. We introduce a principled approach that explicitly models the attack process as a series of steps. Specifically, we engineer a factored Markov decision process (FMDP) to optimally plan an attack from the adversary's perspective and assess the privacy risk accordingly. The FMDP captures the uncertainty in the adversary's belief (e.g., the number of identified individuals that match the de-identified data) and enables the analysis of various real-world deterrence mechanisms beyond a traditional protection model, such as a penalty for committing an attack. We present an algorithm to solve the FMDP and illustrate its efficiency by simulating an attack on publicly accessible U.S. census records against a real identified resource of over 500,000 individuals in a voter registry. Our results demonstrate that while traditional privacy models commonly expect an adversary to attack exactly once per record, an optimal attack in our model may involve exploiting none, one, or more individuals in the pool of candidates, depending on context.
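
The paper's factored MDP is far richer than anything that fits here. As a toy illustration of the closing claim, that the optimal number of attacks depends on context such as a penalty, the Python sketch below scores candidate matches by expected one-step utility (gain on success, penalty on being caught) and shows the optimal plan shrinking from several attacks to none as the penalty grows; all probabilities and constants are invented.

```python
# Toy illustration, not the paper's FMDP: the adversary attacks candidates in
# order of match probability; each attack yields `gain` with probability p[i]
# and costs `penalty` otherwise, and the adversary may stop at any point.
def optimal_attacks(p, gain=10.0, penalty=4.0):
    """Return (indices attacked, total expected value) under an optimal plan."""
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    plan, value = [], 0.0
    for i in order:                               # most likely matches first
        reward = p[i] * gain - (1.0 - p[i]) * penalty
        if reward <= 0:
            break                                 # remaining ones are worse
        plan.append(i)
        value += reward
    return plan, value


probs = [0.9, 0.6, 0.3, 0.05]  # invented match probabilities per candidate
for pen in (2.0, 8.0, 60.0, 200.0):
    plan, value = optimal_attacks(probs, penalty=pen)
    print(f"penalty={pen:>6}: attack {len(plan)} candidate(s), "
          f"expected value={value:.2f}")
```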