Biblio

Found 16998 results

Book
Merzdovnik, G., Huber, M., Buhov, D., Nikiforakis, N., Neuner, S., Schmiedecker, M., Weippl, E..  2017.  Block Me If You Can: A Large-Scale Study of Tracker-Blocking Tools.

In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools, as well as into which tools achieve the best results under which circumstances. Among other findings, we discover that rule-based browser extensions outperform learning-based ones, that trackers with smaller footprints are more successful at avoiding being blocked, and that CDNs pose a major threat to the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy not only by providing the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.
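
As a rough illustration of the rule-based blocking approach that the study found most effective, the sketch below matches request URLs against a small filter list; the patterns and URLs are hypothetical, not drawn from any real filter list or from the paper:

```python
import re

# Hypothetical EasyList-style filter rules (illustrative, not from a real list).
FILTER_RULES = [
    r"//[^/]*tracker\.",   # third-party hosts containing "tracker."
    r"/analytics\.js$",    # a common analytics script name
    r"[?&]utm_[a-z]+=",    # tracking query parameters
]
COMPILED = [re.compile(rule) for rule in FILTER_RULES]

def should_block(url: str) -> bool:
    """Return True if the request URL matches any blocking rule."""
    return any(pattern.search(url) for pattern in COMPILED)

for url in ("https://cdn.tracker.example/analytics.js",
            "https://news.example.org/story?utm_source=mail",
            "https://example.com/article?id=42"):
    print("BLOCK" if should_block(url) else "allow", url)
```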

Sardar, Muhammad, Fetzer, Christof.  2021.  Confidential Computing and Related Technologies: A Review.
With a broad spectrum of technologies available for the protection of personal data, it is important to be able to reliably compare these technologies in order to choose the most suitable one for a given problem. Although technologies like Homomorphic Encryption have matured over decades, Confidential Computing is still in its infancy, with not only informal but also incomplete and even conflicting definitions from the Confidential Computing Consortium (CCC). In this work, we present key issues in the CCC's definitions and in its comparison of existing technologies. We also provide recommendations for achieving clarity and precision in the definitions, as well as a fair and scientific comparison among existing technologies. We emphasize the need for formal definitions of the terms and pose this as an open challenge to the community.
Robling Denning, Dorothy Elizabeth.  1982.  Cryptography and Data Security. :414.

Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data.

Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further.

Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

Nazli Choucri.  2012.  Cyberpolitics in International Relations.
An examination of the ways cyberspace is changing both the theory and the practice of international relations. Cyberspace is widely acknowledged as a fundamental fact of daily life in today's world. Until recently, its political impact was thought to be a matter of low politics—background conditions and routine processes and decisions. Now, however, experts have begun to recognize its effect on high politics—national security, core institutions, and critical decision processes. In this book, Nazli Choucri investigates the implications of this new cyberpolitical reality for international relations theory, policy, and practice. The ubiquity, fluidity, and anonymity of cyberspace have already challenged such concepts as leverage and influence, national security and diplomacy, and borders and boundaries in the traditionally state-centric arena of international relations. Choucri grapples with fundamental questions of how we can take explicit account of cyberspace in the analysis of world politics and how we can integrate the traditional international system with its cyber venues. After establishing the theoretical and empirical terrain, Choucri examines modes of cyber conflict and cyber cooperation in international relations; the potential for the gradual convergence of cyberspace and sustainability, in both substantive and policy terms; and the emergent synergy of cyberspace and international efforts toward sustainable development. Choucri's discussion is theoretically driven and empirically grounded, drawing on recent data and analyzing the dynamics of cyberpolitics at individual, state, international, and global levels.
S. Petcy Carolin, M. Somasundaram.  2016.  Data loss protection and data security using agents for cloud environment.

Cyber infrastructures are highly vulnerable to intrusions and other threats. The main challenges in cloud computing are the failure of data centres, the recovery of lost data, and the provision of data security. This paper proposes a virtualization and data-recovery scheme to create a virtual environment and recover lost data from data servers, with agents providing data security in the cloud environment. A Cloud Manager is used to manage the virtualization and to handle faults. An erasure-code algorithm is used to recover the data: the data is first separated into n parts, which are then encrypted and stored in the data servers. Changes made to data stored in the data centres, whether by the semi-trusted third party or by malware, can be identified by artificial-intelligence methods using agents. The Java Agent Development Framework (JADE) is used to develop the agents; it facilitates communication between agents and supports the computing services in the system. The framework is designed and implemented in Java as a gateway or firewall to recover from data loss.
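
To make the fragmentation idea concrete, here is a minimal sketch of an (n+1, n) XOR-parity erasure code in Python: any single lost fragment can be rebuilt from the survivors. This is our own simplified illustration, not the paper's scheme, which also encrypts the fragments before storage:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, n: int) -> list:
    """Split data into n equal fragments plus one XOR parity fragment."""
    data = data.ljust(-(-len(data) // n) * n, b"\x00")  # pad to a multiple of n
    size = len(data) // n
    fragments = [data[i * size:(i + 1) * size] for i in range(n)]
    return fragments + [reduce(xor, fragments)]         # last fragment is parity

def recover(fragments: list, lost: int) -> bytes:
    """Rebuild the fragment at index `lost` by XOR-ing all survivors."""
    return reduce(xor, (f for i, f in enumerate(fragments) if i != lost))

frags = split_with_parity(b"cloud data to protect", n=4)
assert recover(frags, lost=2) == frags[2]
print("fragment 2 recovered:", recover(frags, lost=2))
```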

Sardar, Muhammad, Musaev, Saidgani, Fetzer, Christof.  2021.  Demystifying Attestation in Intel Trust Domain Extensions via Formal Verification.
In August 2020, Intel asked the research community for feedback on the newly offered architecture extensions, called Intel Trust Domain Extensions (TDX), which give more control to Trust Domains (TDs) over processor resources. One of the key features of these extensions is the remote attestation mechanism, which provides a unified report verification mechanism for TDX and its predecessor Software Guard Extensions (SGX). Based on our experience and intuition, we respond to the request for feedback by formally specifying the attestation mechanism in the TDX using ProVerif's specification language. Although the TDX technology seems very promising, the process of formal specification reveals a number of subtle discrepancies in Intel's specifications that could potentially lead to design and implementation flaws. After resolving these discrepancies, we also present fully automated proofs that our specification of TD attestation preserves the confidentiality of the secret and authentication of the report by considering the state-of-the-art Dolev-Yao adversary in the symbolic model using ProVerif. We have submitted the draft to Intel, and Intel is in the process of making the changes.
Honggang, Zhao, Chen, Shi, Leyu, Zhai.  2018.  Design and Implementation of Lightweight 6LoWPAN Gateway Based on Contiki.

6LoWPAN technology realizes IPv6 packet transmission in IEEE 802.15.4-based wireless sensor networks (WSNs), and it is regarded as one of the ideal technologies for interconnecting WSNs with the Internet, which is key to building the IoT. Contiki is an open-source, highly portable multitasking operating system in which 6LoWPAN has been implemented. In Contiki, only a few kilobytes of code and a few hundred bytes of memory are required to provide a multitasking environment with built-in TCP/IP support, which makes it especially suitable for memory-constrained embedded platforms. In this paper, a lightweight 6LoWPAN gateway based on Contiki is designed, and its hardware and software designs are described. A complex experiment environment is presented in which the gateway's capability of accessing the Internet is verified and its performance, in terms of average network delay and jitter, is analyzed. The experimental results show that the gateway designed in this paper not only realizes the interconnection between 6LoWPAN networks and the Internet, but also has good network adaptability and stability.
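
As an aside on the metrics used in the evaluation, average delay and jitter can be computed from per-packet timestamps roughly as follows (a sketch with hypothetical timestamps, not the paper's measurement code):

```python
import statistics

def delay_and_jitter(send_times, recv_times):
    """Average one-way delay and jitter (mean absolute difference of
    consecutive delays, in the spirit of RFC 3550) from timestamps."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    jitter_samples = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
    return statistics.mean(delays), statistics.mean(jitter_samples)

# Hypothetical timestamps (seconds) for five packets crossing the gateway.
send = [0.000, 0.100, 0.200, 0.300, 0.400]
recv = [0.031, 0.135, 0.229, 0.338, 0.430]
avg_delay, jitter = delay_and_jitter(send, recv)
print(f"average delay: {avg_delay * 1000:.1f} ms, jitter: {jitter * 1000:.1f} ms")
```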

Tsuyoshi Arai, Yasuo Okabe, Yoshinori Matsumoto, Koji Kawamura.  2020.  Detection of Bots in CAPTCHA as a Cloud Service Utilizing Machine Learning.

In recent years, the damage caused by unauthorized access using bots has increased. Compared with conventional attacks on login screens, these attacks succeed more often and are harder to detect. CAPTCHA is commonly utilized as a technology for preventing attacks by bots, but user experience declines as CAPTCHA difficulty rises to keep pace with advancing bots. As a solution, we consider adaptively setting CAPTCHA difficulty in combination with bot-detection technologies. In this research, we focus on Capy puzzle CAPTCHA, which is widely used in commercial services. We use a supervised machine-learning approach to detect bots. As training data, we use access logs from several Web services, flagging attacks by bots detected in the past. We extract feature vectors, such as the HTTP User-Agent and information derived from the IP address (e.g., geographical information), from the access logs, and investigate the dataset using supervised learning. Using XGBoost and LightGBM, we achieve a ROC-AUC score above 0.90 and further detect suspicious accesses from some ISPs that carry no bot-discrimination flag.
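
A minimal sketch of this kind of supervised bot-detection pipeline is shown below, using XGBoost's scikit-learn API on synthetic stand-in features (the real features, per the abstract, would be derived from User-Agent strings and IP geolocation):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier  # LightGBM's LGBMClassifier is a drop-in alternative

# Hypothetical pre-extracted features per access-log entry, e.g. an encoded
# User-Agent category, requests-per-minute from the source IP, and a
# geolocation-mismatch flag; labels mark accesses previously flagged as bots.
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = (X[:, 1] + 0.3 * rng.random(1000) > 0.8).astype(int)  # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("ROC-AUC:", roc_auc_score(y_test, scores))
```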

Dykstra, J..  2015.  Essential Cybersecurity Science: Build, Test, and Evaluate Secure Systems. :190.

If you’re involved in cybersecurity as a software developer, forensic investigator, or network administrator, this practical guide shows you how to apply the scientific method when assessing techniques for protecting your information systems. You’ll learn how to conduct scientific experiments on everyday tools and procedures, whether you’re evaluating corporate security systems, testing your own security product, or looking for bugs in a mobile game.

Once author Josiah Dykstra gets you up to speed on the scientific method, he helps you focus on standalone, domain-specific topics, such as cryptography, malware analysis, and system security engineering. The latter chapters include practical case studies that demonstrate how to use available tools to conduct domain-specific scientific experiments.

  • Learn the steps necessary to conduct scientific experiments in cybersecurity
  • Explore fuzzing to test how your software handles various inputs (a minimal mutation-fuzzing sketch follows this list)
  • Measure the performance of the Snort intrusion detection system
  • Locate malicious “needles in a haystack” in your network and IT environment
  • Evaluate cryptography design and application in IoT products
  • Conduct an experiment to identify relationships between similar malware binaries
  • Understand system-level security requirements for enterprise networks and web services
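
In the spirit of the fuzzing chapter, here is a self-contained mutation-fuzzing sketch; the toy parser and mutation strategy are our own illustration, not an example from the book:

```python
import random

def parse_record(data: bytes) -> str:
    """Toy parser under test: expects 'key=value' ASCII records."""
    key, _, value = data.partition(b"=")
    return f"{key.decode('ascii')} -> {value.decode('ascii')}"

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or delete a few random bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        pos = rng.randrange(len(data) + 1)
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[pos % len(data)] ^= 1 << rng.randrange(8)
        elif op == "insert":
            data.insert(pos, rng.randrange(256))
        elif op == "delete" and data:
            del data[pos % len(data)]
    return bytes(data)

rng = random.Random(1)
crashes = 0
for _ in range(10_000):
    try:
        parse_record(mutate(b"host=example", rng))
    except Exception:          # count inputs the parser cannot handle
        crashes += 1
print(f"{crashes} of 10000 mutated inputs raised an exception")
```
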
Sardar, Muhammad, Faqeh, Rasha, Fetzer, Christof.  2020.  Formal Foundations for Intel SGX Data Center Attestation Primitives.
Intel has recently offered third-party attestation services, called Data Center Attestation Primitives (DCAP), that allow a data center to create its own attestation infrastructure. These services address availability concerns and improve performance compared to remote attestation based on Enhanced Privacy ID (EPID). Practical efforts such as Hyperledger Avalon have already planned DCAP support in their roadmaps. However, the lack of formal proof for DCAP leads to security concerns. To fill this gap, we propose an automated, rigorous, and sound formal approach to specify and verify remote attestation based on Intel SGX DCAP, under the assumption that there are no side-channel attacks and no vulnerabilities inside the enclave. In the proposed approach, the data center configuration and operational policies are specified to generate the symbolic model, and security goals are specified as security properties to produce verification results. The evaluation of non-Quoting-Verification-Enclave-based DCAP indicates that the confidentiality of secrets and the integrity of data are preserved against a Dolev-Yao adversary in this technology. We also present a few of the many inconsistencies found in the existing literature on Intel SGX DCAP during formal specification.
Sardar, Muhammad, Fetzer, Christof.  2022.  Formal Foundations for SCONE attestation and Intel SGX Data Center Attestation Primitives.
One of the essential features of confidential computing is the ability to attest to an application remotely. Remote attestation ensures that the right code is running in the correct environment. We need to ensure that all components that an adversary might use to impact the integrity, confidentiality, and consistency of an application are attested. Which components need to be attested is defined with the help of a policy, and verification of the policy is performed by an attestation engine. Since remote attestation bootstraps the trust in remote applications, any vulnerability in the attestation mechanism can impact the security of an application. Moreover, mistakes in the attestation policy can leave data, code, and secrets vulnerable. Our work focuses on (1) how to verify the attestation mechanisms and (2) how to verify the policy to ensure that data, code, and secrets are always protected.
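
A deliberately simplified sketch of the policy-checking idea follows: a toy attestation engine compares each component's reported measurement against the expected value in a policy. Real SCONE and DCAP attestation involves signed quotes and certificate chains, which are omitted here; all names and values are hypothetical:

```python
import hashlib

# Hypothetical attestation policy: the expected measurement (SHA-256 of the
# code) for every component that could affect the application's security.
POLICY = {
    "app-enclave":    hashlib.sha256(b"app code v1.2").hexdigest(),
    "tls-terminator": hashlib.sha256(b"proxy code v0.9").hexdigest(),
}

def verify_policy(reports: dict) -> bool:
    """Toy attestation engine: every component named in the policy must
    report exactly the expected measurement."""
    for component, expected in POLICY.items():
        if reports.get(component) != expected:
            print(f"attestation failed for {component!r}")
            return False
    return True

# Measurements as they might arrive from the attested components.
reports = {
    "app-enclave":    hashlib.sha256(b"app code v1.2").hexdigest(),
    "tls-terminator": hashlib.sha256(b"proxy code v0.9").hexdigest(),
}
print("policy satisfied:", verify_policy(reports))
```
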
Ibrahim, Rosziati, Omotunde, Habeeb.  2017.  A Hybrid Threat Model for Software Security Requirement Specification.

Security is often treated as a secondary or non-functional feature of software, which influences how vendors and developers describe their products, often in terms of what they can do (use cases) or offer customers. However, the tide is beginning to turn, as more experienced customers demand secure and reliable software that gives priority to confidentiality, integrity, and privacy. This paper presents the MOTH (Modeling Threats with Hybrid Techniques) framework, designed to help organizations secure their software assets from attackers and prevent SQL Injection Attacks (SQLIAs). By focusing on the attack vectors and vulnerabilities exploited by attackers and brainstorming possible attacks, developers and security experts can better strategize and specify the security requirements needed to create software impervious to SQLIAs. A live web application is considered as a case study; the results obtained from the hybrid models expose vulnerabilities deep within the application, and resolution plans are proposed for closing the security holes exploited by SQLIAs.
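
To make the attack class that MOTH targets concrete, here is a minimal illustration (our own, not from the paper) of an SQL injection against a string-concatenated query, and the standard parameterized-query defense:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # classic SQLIA payload

# Vulnerable: attacker-controlled input is concatenated into the query,
# so the injected OR clause matches every row.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()
print("vulnerable query leaked:", rows)

# Mitigated: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)
```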

Choucri, Nazli, Clark, David D..  2019.  International Relations in the Cyber Age: The Co-Evolution Dilemma.
A foundational analysis of the co-evolution of the internet and international relations, examining resultant challenges for individuals, organizations, firms, and states. In our increasingly digital world, data flows define the international landscape as much as the flow of materials and people. How is cyberspace shaping international relations, and how are international relations shaping cyberspace? In this book, Nazli Choucri and David D. Clark offer a foundational analysis of the co-evolution of cyberspace (with the internet at its core) and international relations, examining resultant challenges for individuals, organizations, and states. The authors examine the pervasiveness of power and politics in the digital realm, finding that the internet is evolving much faster than the tools for regulating it. This creates a “co-evolution dilemma”—a new reality in which digital interactions have enabled weaker actors to influence or threaten stronger actors, including the traditional state powers. Choucri and Clark develop a new method for addressing control in the internet age, “control point analysis,” and apply it to a variety of situations, including major actors in the international and digital realms: the United States, China, and Google. In doing so they lay the groundwork for a new international relations theory that reflects the reality in which we live—one in which the international and digital realms are inextricably linked and evolving together.
Mada, Bharat B., Banik, Manoj, Wu, Bo Chen, Bein, Doina.  2016.  Intrusion Tolerant Multi-cloud Storage.

Data generation, and the use of data in important decision applications, has been growing at an extremely fast pace, making data a valuable resource that must be rigorously protected from attackers. Cloud storage systems claim to offer secure and elastic data-storage services that can adapt to changing storage requirements. Despite diligent efforts to protect data, recent successful attacks highlight the need to go beyond existing approaches centered on intrusion prevention, detection, and recovery: most security mechanisms have a finite rate of failure, and as intrusions become more sophisticated and stealthy, that failure rate appears to be rising. In this paper we propose data fragmentation, followed by coding that introduces redundant fragments, and dispersal of the fragments to multiple independent cloud storage systems, with each cloud handling only a single fragment. The paper proposes a fragmented multi-cloud storage architecture and the design of the related software, and a probabilistic analysis is carried out to quantify its intrusion-tolerance abilities.
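
One simple way such a probabilistic analysis can be framed: if reconstructing the data requires at least k of the n dispersed fragments and each cloud is compromised independently with probability p, the breach probability follows a binomial tail. The parameters below are hypothetical, not the paper's:

```python
from math import comb

def breach_probability(n: int, k: int, p: float) -> float:
    """Probability an attacker reconstructs the data when it is dispersed
    over n independent clouds, at least k fragments are needed to rebuild
    it, and each cloud is compromised independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.05  # hypothetical per-cloud compromise probability
print(f"single cloud breached with probability {p:.2f}")
print(f"4-of-6 multi-cloud breach probability: {breach_probability(6, 4, p):.2e}")
```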

[Anonymous].  Submitted.  Natural Language Processing Characterization of Recurring Calls in Public Security Services.
Extracting knowledge from unstructured data silos, a legacy of old applications, is mandatory for improving the governance of today's cities and fostering the creation of smart cities. Such data often consists of text in natural language. Nevertheless, the inference of useful information from a linguistic-computational analysis of natural-language data remains an open challenge. In this paper, we propose a clustering method to analyze textual data employing the unsupervised machine-learning algorithms k-means and hierarchical clustering. We assess different vector-representation methods for text, similarity metrics, and the number of clusters that best matches the data. We evaluate the methods using a real database from a public record service of security occurrences. The results show that the k-means algorithm using Euclidean distance extracts non-trivial knowledge, reaching up to 93% accuracy on a set of test samples while identifying the 12 most prevalent occurrence patterns.
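
A minimal sketch of the best-performing configuration described above, k-means with Euclidean distance over text vectors, might look like this in scikit-learn (the report snippets are invented stand-ins for the real occurrence database):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical occurrence-report snippets standing in for the real database.
reports = [
    "vehicle theft reported near downtown parking lot",
    "car stolen from parking garage overnight",
    "loud noise complaint at residential building",
    "neighbors report loud music late at night",
    "suspicious person loitering near school entrance",
    "individual loitering around school grounds",
]

# TF-IDF vectors; scikit-learn's KMeans clusters by Euclidean distance,
# matching the configuration reported above.
X = TfidfVectorizer(stop_words="english").fit_transform(reports)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label, text in sorted(zip(km.labels_, reports)):
    print(label, text)
```
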
[Anonymous].  2019.  PhishFarm: A Scalable Framework for Measuring the Effectiveness of Evasion Techniques against Browser Phishing Blacklists.

Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and the timeliness of native blacklisting in major web browsers to gauge the effectiveness of the protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure that allow some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
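
For intuition, the HTTP request filters ('cloaking') that PhishFarm measures amount to little more than conditionals on request attributes; the toy sketch below (hypothetical markers and countries) shows why a blacklist crawler can see a benign page while a victim sees the phishing page:

```python
# Illustrative only: the kind of request filter whose effect on blacklisting
# the PhishFarm experiments quantify.
CRAWLER_UA_MARKERS = ("bot", "crawler", "spider", "preview")  # hypothetical list

def choose_response(user_agent: str, client_country: str) -> str:
    ua = user_agent.lower()
    if any(marker in ua for marker in CRAWLER_UA_MARKERS):
        return "benign page"            # evade blacklist infrastructure
    if client_country not in {"US", "CA"}:
        return "benign page"            # geolocation-based cloaking
    return "phishing page"              # shown only to targeted victims

print(choose_response("Mozilla/5.0 (iPhone)", "US"))       # phishing page
print(choose_response("SecurityBot/2.1 crawler", "US"))    # benign page
```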

Parno, Bryan Jeffery.  2014.  Trust Extension As a Mechanism for Secure Code Execution on Commodity Computers.

From the Preface

As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981].

In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.

Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly. 
Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.

Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.

Dylan Wang, Melody Moh, Teng-Sheng Moh.  2020.  Using Deep Learning to Solve Google reCAPTCHA v2’s Image Challenges.

The most popular CAPTCHA service in use today is Google reCAPTCHA v2, whose main offering is an image-based CAPTCHA challenge. This paper examines the security measures used in reCAPTCHA v2's image challenges and proposes a deep-learning-based solution that can be used to solve them automatically. The proposed method is tested with both a custom object-detection deep-learning model and Google's own Cloud Vision API, in conjunction with human-mimicking mouse movements, to bypass the challenges. The paper also suggests potential defense measures to increase overall security, as well as additional attack directions for reCAPTCHA v2.
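
As a rough sketch of the human-mimicking mouse-movement component, one common approach is to trace a jittered Bezier curve between two points; the parameters and approach below are our own illustration, not necessarily the paper's method:

```python
import numpy as np

def human_like_path(start, end, n_points=50, wobble=8.0, seed=0):
    """Generate a mouse path along a quadratic Bezier curve with random
    jitter that vanishes at the endpoints (all parameters hypothetical)."""
    rng = np.random.default_rng(seed)
    p0, p2 = np.asarray(start, float), np.asarray(end, float)
    # A random control point pulls the path off the straight line.
    p1 = (p0 + p2) / 2 + rng.normal(0, wobble * 4, size=2)
    t = np.linspace(0, 1, n_points)[:, None]
    curve = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
    return curve + rng.normal(0, wobble * (t * (1 - t)), size=curve.shape)

path = human_like_path((100, 500), (640, 320))
print(path[:3].round(1), "...", path[-1].round(1))
```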

Vaibhavi Deshmukh, Swarnima Deshmukh, Shivani Deosatwar, Reva Sarda, Lalit Kulkarni.  2020.  Versatile CAPTCHA Generation Using Machine Learning and Image Processing.

Due to the significant growth of the internet and the number of its users, the load on websites and web-based applications has increased tremendously. This user-side load causes unforeseen conditions that lead to unacceptable consequences, such as crashes or data loss at the web-server end. There is therefore a need to reduce both the load on the server and the chances of network attacks, which increase with a growing user base. These undue consequences have two main causes: an overload of users, and an increased number of automated programs, or bots. One technique for mitigating this scenario is to slow operations on the user end through a CAPTCHA mechanism. Most classical approaches use a single method for CAPTCHA generation; to overcome this, the proposed model uses a versatile image-CAPTCHA generation mechanism. We introduce a system that utilizes manual, face-detection-based, colour-based, and random-object-insertion techniques to generate four different types of CAPTCHA at random. The proposed methodology implements regions of interest and convolutional neural networks to generate the CAPTCHAs effectively.
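
A minimal sketch of the versatile-generation idea, choosing a generator type at random per challenge, is shown below using Pillow; only two toy generators are implemented, and the paper's manual and face-detection-based techniques (and its CNN component) are omitted:

```python
import random
from PIL import Image, ImageDraw

def colour_based(text):
    """Toy colour-based CAPTCHA: characters in random colours over noise."""
    img = Image.new("RGB", (200, 60), "white")
    draw = ImageDraw.Draw(img)
    for _ in range(400):                      # speckle the background
        draw.point((random.randrange(200), random.randrange(60)),
                   fill=tuple(random.randrange(256) for _ in range(3)))
    for i, ch in enumerate(text):
        draw.text((20 + i * 35, 22), ch,
                  fill=tuple(random.randrange(180) for _ in range(3)))
    return img

def object_insertion(text):
    """Toy random-object-insertion CAPTCHA: text plus distractor shapes."""
    img = colour_based(text)
    draw = ImageDraw.Draw(img)
    for _ in range(3):
        x, y = random.randrange(160), random.randrange(30)
        draw.ellipse((x, y, x + 30, y + 20), outline="gray")
    return img

# The versatile scheme picks a generator type at random for each challenge.
GENERATORS = [colour_based, object_insertion]
random.choice(GENERATORS)("X7Q4").save("captcha.png")
```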

Book Chapter
Nazli Choucri.  2021.  CyberIR@MIT: Exploration & Innovation in International Relations. Remaking the World: Toward an Age of Global Enlightenment. :27–43.
Advances in information and communication technologies – global Internet, social media, Internet of Things, and a range of related science-driven innovations and generative and emergent technologies – continue to shape a dynamic communication and information ecosystem for which there is no precedent. These advances are powerful in many ways. Foremost among these in terms of salience, ubiquity, pervasiveness, and expansion in scale and scope is the broad area of artificial intelligence. They have created a new global ecology; yet they remain opaque and must be better understood—an ecology of “knowns” that is evolving in ways that remain largely “unknown.” Especially compelling is the acceleration of Artificial Intelligence – in all its forms – with far-ranging applications shaping a new global ecosystem for which there is no precedent. This chapter presents a brief view of the most pressing challenges, articulates the logic for worldwide agreement to retain the rule of law in the international system, and presents salient features of an emergent International Accord on Artificial Intelligence. The Framework for Artificial Intelligence International Accord (AIIA) is an initial response to this critical gap in the system of international rules and regulations.
Chuchu Fan, Sayan Mitra.  2019.  Data-Driven Safety Verification of Complex Cyber-Physical Systems. Design Automation of Cyber-Physical Systems. :107–142.

Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.
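
A crude sketch of the data-driven idea, simulating from sampled initial states and bloating the nominal trajectory by the worst observed deviation, is shown below; the dynamics, sampling, and unsafe threshold are hypothetical, and real tools use principled discrepancy functions rather than this naive bound:

```python
import numpy as np

def simulate(x0, steps=100, dt=0.01):
    """Hypothetical nonlinear system: Van der Pol-style dynamics."""
    traj = [np.asarray(x0, float)]
    for _ in range(steps):
        x, y = traj[-1]
        traj.append(traj[-1] + dt * np.array([y, (1 - x**2) * y - x]))
    return np.array(traj)

# Simulate from sampled initial states around a nominal one, then bloat the
# nominal trajectory by the worst observed deviation to over-approximate
# the reachable set from the sampled neighbourhood.
center = simulate([1.0, 0.0])
samples = [simulate([1.0 + dx, dy]) for dx in (-0.05, 0.05) for dy in (-0.05, 0.05)]
radius = max(np.linalg.norm(s - center, axis=1).max() for s in samples)

unsafe_x = 2.5   # hypothetical unsafe threshold on the first state variable
safe = (center[:, 0] + radius < unsafe_x).all()
print(f"bloat radius {radius:.3f}; safety requirement satisfied: {safe}")
```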

Nan, Satyaki, Brahma, Swastik, Kamhoua, Charles A., Njilla, Laurent L..  2020.  On Development of a Game‐Theoretic Model for Deception‐Based Security. Modeling and Design of Secure Internet of Things. :123–140.
This chapter presents a game‐theoretic model to analyze attack–defense scenarios that use fake nodes (computing devices) for deception under consideration of the system deploying defense resources to protect individual nodes in a cost‐effective manner. The developed model has important applications in the Internet of Battlefield Things (IoBT). Our game‐theoretic model illustrates how the concept of the Nash equilibrium can be used by the defender to intelligently choose which nodes should be used for performing a computation task while deceiving the attacker into expending resources for attacking fake nodes. Our model considers the fact that defense resources may become compromised under an attack and suggests that the defender, in a probabilistic manner, may utilize unprotected nodes for performing a computation while the attacker is deceived into attacking a node with defense resources installed. The chapter also presents a deception‐based strategy to protect a target node that can be accessed via a tree network. Numerical results provide insights into the strategic deception techniques presented in this chapter.
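
For intuition, the Nash-equilibrium reasoning can be reproduced on a toy two-node, zero-sum version of the game (the payoffs are hypothetical, and the chapter's actual model is richer, including fake nodes and resource costs):

```python
import numpy as np

# Hypothetical zero-sum game: rows = defender protects node A or B,
# columns = attacker attacks node A or B. Entries are the attacker's payoff;
# attacking a defended node wastes attacker resources (negative payoff).
A = np.array([[-2.0, 4.0],    # defender protects A
              [ 3.0, -1.0]])  # defender protects B

# With no saddle point, the defender mixes so the attacker is indifferent
# between columns: q*A[0,0] + (1-q)*A[1,0] == q*A[0,1] + (1-q)*A[1,1].
q = (A[1, 1] - A[1, 0]) / (A[0, 0] + A[1, 1] - A[0, 1] - A[1, 0])
value = q * A[0, 0] + (1 - q) * A[1, 0]
print(f"defender protects A with prob {q:.2f}; attacker's expected gain {value:.2f}")
```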