SaTC: CORE: Small: Collaborative: Oblivious ISAs for Secure and Efficient Enclave Programming

Computing on personal data is critical for both personal and social good. For example, we write programs that predict early onset medical conditions and detect the spread of diseases before they become epidemics. However, such computing is fraught with privacy concerns because programs, and the hardware they run on, create a trail of clues that an attacker can observe to reconstruct personal data without ever seeing the data directly. This project will create computer systems that proactively leave no clues, i.e., no side-effects that can leak personal secrets.
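The core idea, computing that leaves no observable clues, can be illustrated with a toy sketch of "oblivious" programming: selecting between two values without a secret-dependent branch, so the control flow (one observable side effect) does not depend on the secret. This Python fragment is purely illustrative; the project itself targets oblivious instruction-set architectures in hardware.

```python
# Toy illustration of oblivious computation (not the project's design):
# avoid branching on a secret so an observer watching control flow
# learns nothing about the secret bit.

def branching_select(secret_bit, a, b):
    # Leaky version: which branch executes depends on the secret.
    return a if secret_bit else b

def oblivious_select(secret_bit, a, b):
    # Same result, computed arithmetically with no secret-dependent branch.
    return secret_bit * a + (1 - secret_bit) * b

print(oblivious_select(1, 42, 7))  # 42
print(oblivious_select(0, 42, 7))  # 7
```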

SaTC: CORE: Small: Usable Key Management and Forward Secrecy for Secure Email

Sending and receiving information securely online is a basic need in our connected world. However, one of the most frequently used online applications, email, remains largely insecure for all but the most expert users. The researchers will gather data to better understand why users do not adopt secure email. They will also identify the most practical, usable ways for users to safeguard their secure email from attackers and to avoid losing access by forgetting the password or key that unlocks their sensitive messages.

SaTC: CORE: Small: An Attribute-based Insider Threat Mitigation Framework

Defending against a malicious insider who attempts to abuse their computer privileges is one of the most critical problems in information security, because the damage inflicted can be catastrophic. While the insider threat is of increasing interest in the research community, major challenges remain in addressing aspects specific to information infrastructure protection. This project aims to develop an innovative, demonstrable approach to mitigating insider threats to an organization.

SaTC: CORE: Small: Adversarial ML in Traffic Analysis

Surveillance and tracking on the Internet are growing more pervasive and threaten privacy and freedom of expression. The Tor anonymity system protects the privacy of millions of users, including ordinary citizens, journalists, whistle-blowers, military intelligence, police, businesses, and people living under censorship and surveillance. Unfortunately, Tor is vulnerable to website fingerprinting (WF) attacks, in which an eavesdropper uses a machine learning (ML) classifier to identify which website a user is visiting from the user's traffic patterns.
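To make the WF threat model concrete, here is a minimal sketch of how an eavesdropper might classify encrypted traces from coarse traffic features. The traces, feature choices, and nearest-centroid rule are invented for illustration; real WF attacks use far richer features and stronger ML models.

```python
# Hypothetical website-fingerprinting sketch. A trace is a list of signed
# packet sizes (sign = direction); content is encrypted, but sizes,
# directions, and counts still leak which site is being visited.

def extract_features(trace):
    """Summarize a trace into simple counts a classifier could use."""
    outgoing = [p for p in trace if p > 0]
    incoming = [p for p in trace if p < 0]
    return [len(trace), len(outgoing), len(incoming),
            sum(outgoing), -sum(incoming)]

# Toy labeled traces for two "websites" (purely illustrative).
traces = {
    "site_a": [[+500, -1500, -1500, +80], [+520, -1500, -1400, +90]],
    "site_b": [[+300, -600, +300, -600, +300],
               [+310, -580, +290, -610, +305]],
}

# Nearest-centroid "classifier": average feature vector per site.
centroids = {}
for site, ts in traces.items():
    feats = [extract_features(t) for t in ts]
    centroids[site] = [sum(col) / len(col) for col in zip(*feats)]

def classify(trace):
    f = extract_features(trace)
    return min(centroids,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(f, centroids[s])))

print(classify([+505, -1500, -1450, +85]))  # site_a
```

Even this crude statistic suffices to separate the two toy sites, which is why WF defenses focus on padding and reshaping traffic patterns.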

SaTC: CORE: Small: Characterizing Architectural Vulnerabilities

Software architecture plays a fundamental role in addressing security requirements by enforcing authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, and non-repudiation, even when the system is under attack. A design flaw in a software system's architecture could therefore lead to attacks with enormous consequences. Yet most of the research, techniques, and tools that address security focus on secure coding.

SaTC: CORE: Small: Collaborative: Building Sophisticated Services with Programmable Anonymity Networks

This project designs and implements programmable system elements to be run within anonymity networks such as Tor (The Onion Router). The central idea is that users can inject new code into the network, which is then run within a protected execution environment. The motivation is to enable new and significantly enhanced anonymity services, such as content distribution networks, useful in today's and future anonymity networks.

SaTC: CORE: Small: Wireless Hardware Analog Encryption for Secure, Ultra Low Power Transmission of Data

Data encryption allows sensitive information to be transmitted over a public channel, such as a wireless link, so that only authorized receivers can access it. Unfortunately, digital encryption techniques typically require microprocessors, which are power-hungry devices. This project advances the use of alternative analog encryption techniques, such as chaotic encryption. Because analog encryption techniques are more power efficient, this approach makes it possible to avoid microprocessors and consequently enables more portable devices.

SaTC: CORE: Small: Data-Driven Study of Attacks on Cyber-Physical Infrastructure Supporting Large Computing Systems

This project addresses security attacks that (i) masquerade as failures and (ii) are delivered via self-learning malware that monitors the target system and launches the attack, by injecting a strategic failure, at a time and system location chosen for maximal impact. The target systems are cyber-physical systems (CPS) that manage or control large computing enterprises (e.g., the cooling or power distribution of a high-performance computing or cloud infrastructure).

SaTC: CORE: Small: Online Malicious Intent Inference for Safe CPS Operations under Cyber-attacks

Modern autonomous vehicles are not built with security in mind. Their increased sensing, computation, and control capabilities, together with greater task complexity, have introduced security concerns beyond traditional cyber-attacks. By injecting malformed data, spoofing sensors, tampering with controllers, and even manipulating the environment, an attacker can compromise the integrity of such cyber-physical systems and even take control of their functionality.

SaTC: CORE: Small: Adversarial Learning via Modeling Interpretation

Machine learning (ML) models are increasingly important in society, with applications including malware detection, online content filtering and ranking, and self-driving cars. However, these models are vulnerable to adversaries who attack them by submitting incorrect or manipulated data with the goal of causing errors, potentially harming both the decisions the models make and the systems and people who rely on them.
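A minimal sketch of such an evasion attack, in the spirit of the fast gradient sign method (FGSM) applied to a linear classifier: shift each input feature slightly against the direction of the model's weights so a "malicious" sample crosses the decision boundary. The model, weights, sample, and step size are invented for illustration and are not from the project.

```python
# Toy evasion attack on a hypothetical linear malware detector.
# All numbers here are made up for illustration.

def predict(w, b, x):
    """Linear score: positive => classified 'malicious', else 'benign'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Move each feature by eps against the sign of its weight,
    pushing the score toward the 'benign' side (FGSM-style step)."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.8, -0.5, 1.2], -0.1      # hypothetical trained weights
x = [1.0, 0.2, 0.9]                # sample the model flags as malicious
print(predict(w, b, x) > 0)        # True: flagged

x_adv = fgsm_perturb(w, x, eps=0.8)
print(predict(w, b, x_adv) > 0)    # False: small shifts evade the model
```

The fragility shown here, where a bounded perturbation flips the decision, is the phenomenon that defenses via model interpretation aim to detect and explain.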