Breakout Sessions
As part of the 2022 NSF Secure and Trustworthy Cyberspace (SaTC) PI meeting, breakout sessions were held with the goal of identifying important new challenges and trends in securing cyberspace, new directions for research, and areas in which the SaTC community can contribute to an improved future society. Breakout sessions took place over both days of the meeting: an initial session on Day 1 (Wednesday, June 1, 1:30 p.m. to 3:00 p.m.) to discuss the topic and brainstorm, and a shorter session on Day 2 (Thursday, June 2, 10:30 a.m. to 11:30 a.m.) to tighten the thinking and draft an initial report, followed by a brief report from the co-leads of each topic back to the whole group at the end of Day 2 (Thursday, June 2, 2:15 p.m. to 3:45 p.m.). There were 14 breakout topics.
SaTC PI meeting participants were asked to choose one breakout group to participate in for both breakout sessions.
Final Breakout Reports/Slides (linked below)
1. AI for Security (Report | Slides)
2. Blockchain: Cryptography Meets Economics (Report | Slides)
3. Cyber-Physical Security and Privacy (Report | Slides)
4. Defense by Deception (Report | Slides)
5. Formal Methods for Security (Report | Slides)
6. Game Theory and Distributed System Security (Report | Slides)
7. Improving the Quality and Reuse of Cybersecurity Datasets, Software, and Other Artifacts (Report | Slides)
8. Information Integrity (Report | Slides)
9. NextG and Wireless Security (Report | Slides)
10. Privacy, Policy, and People (Report | Slides)
11. Security Education (Report | Slides)
12. Security for AI (Report | Slides)
13. Security in a Post-Quantum World (Report | Slides)
14. Software and Hardware Supply Chain Security (Report | Slides)
Breakout Session Goals
Detailed goals for each of the 14 breakout topics were to address the following questions and issues:
1. What is the topic? Why is it important to society? to a secure and trustworthy cyberspace? in other ways?
2. Is there an existing body of research and/or practice? What are some highlights or pointers to it?
3. What are important challenges that remain? Are there new challenges that have arisen based on new models, new knowledge, new technologies, new uses, etc.?
4. Are there promising directions for addressing them? What kinds of expertise and collaboration are needed (disciplines and subdisciplines)?
5. Any other topic-specific questions/issues not covered by the earlier questions.
Breakout Group Topic Descriptions
1. AI for Security
Co-Leads: Sagar Samtani (Indiana University), Anita Nikolich (UIUC), Gang Wang (UIUC)
Scribe: Saptarshi Debroy (Hunter College)
Description: This breakout group sought to explore the current and future uses of Artificial Intelligence (AI) for cybersecurity, and cybersecurity for AI. We sought to identify how the SaTC community can engage in research in this growing area. We also looked for input on how to cultivate the next generation of cybersecurity professionals trained in employing or developing AI for cybersecurity.
Location: Potomac I
2. Blockchain: Cryptography Meets Economics
Co-Leads: Elaine Shi (CMU), Andrew Miller (UIUC)
Scribes: Yupeng Zhang (Texas A&M), Fan Zhang (Duke), Saba Eskandarian (UNC-CH)
Description: Blockchains and smart contracts are raising interesting new challenges and promising opportunities at the intersection of cryptography and economics, as well as distributed computing and programming languages. Our first-day session began with a short panel discussion with Andrew Myers, Muthu Venkitasubramaniam, Ian Miers, and Kartik Nayak. The goal of our subsequent small-group breakout discussions was to stimulate cross-disciplinary conversations and help define open questions in this exciting space.
Location: Potomac II
3. Cyber-Physical Security and Privacy
Co-Leads: WenZhan Song (University of Georgia), Alfred Chen (UC Irvine)
Scribes: Peng Liu (Pennsylvania State University) and Sauvik Das (Georgia Institute of Technology)
Description: In cyber-physical systems (CPSs), some cyber-threats are not observable, or have only limited observability, in cyberspace alone, and need to be detected and diagnosed through inter-dependency analysis of both cyber and physical signals. Meanwhile, CPSs also face security challenges from physical-layer threats such as sensor attacks and physical-world attacks, against which many mature cyber-defense methods become fundamentally ineffective. The emerging extensive use of AI in safety-critical CPSs such as autonomous driving systems further exacerbates these security challenges due to the lack of interpretability and formal specifications of the CPS AI components. Such substantial research gaps on the CPS, AI, and security fronts call for fundamentally new security design strategies, theories, and principles. Meanwhile, there is a significant opportunity in exploring physical signals, together with cyber signals, to advance cyber-physical security and privacy. Also, zero-trust solutions need to be created in both the IT and OT spaces without sacrificing efficiency and scalability.
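To make the cyber/physical inter-dependency idea concrete, here is a minimal sketch (our illustration, not material from the session) of residual-based sensor-attack detection: a model of the physical plant predicts what a sensor should report for a given control command, and a large mismatch between the prediction and the reported physical signal raises an alarm. The plant model, noise level, spoofing offset, and threshold below are all assumptions made for the example.

import numpy as np

# Hypothetical residual-based detector: cross-check physical sensor readings
# against a cyber-side model prediction; all numbers here are illustrative.
rng = np.random.default_rng(1)

def plant_model(u):
    """Assumed first-order physical model: expected sensor reading for command u."""
    return 2.0 * u + 0.5

commands = rng.uniform(0.0, 1.0, size=50)                           # cyber signals (control commands)
readings = plant_model(commands) + rng.normal(0.0, 0.05, size=50)   # physical signals (sensor values)
readings[30:] += 1.0                                                 # simulated sensor-spoofing offset

residuals = np.abs(readings - plant_model(commands))
threshold = 0.3                        # assumed to be tuned on attack-free data
alarms = np.where(residuals > threshold)[0]
print("first alarm at sample index:", alarms[0] if alarms.size else None)  # expected: 30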
Location: Potomac III
4. Defense by Deception
Lead: Mark Grechanik (University of Illinois Chicago)
Scribe: Susan E. McGregor (Columbia)
Description: Deception is the process by which agents choose actions to manipulate the beliefs of other agents so as to take advantage of their erroneous inferences. Each agent in the cybersecurity game acts based on its belief state, which consists of a logically closed set of sentences. In computer security, attackers generally use deception as a tool to penetrate defenses, e.g., to obtain passwords from unsuspecting victims using phishing emails. Malicious applications use deception to make victims believe that they will receive certain services, which masks the malicious intent. Consequently, it is very difficult to determine whether an application is malicious using various program analyses or machine learning algorithms, since these heuristics must recognize that some sequences of instructions realize malicious behavior, which is an undecidable problem in general.
Turning deception into a weapon against attackers has been something of a Holy Grail in the field of cybersecurity. Major applications of defense by deception (DbD) in computer security are moving target defenses and honeypots, where dissimulation is used to hide the real while simulation techniques allow the defense to create false beliefs for attackers. For example, fake computing environments are simulated as honeypots to lure attackers by making them believe that they have penetrated the defenses of the real computers, which are hidden or dissimulated. The goal of DbD is to instill certain false beliefs in attackers so that they choose a course of action that results in speedy detection, or so that they abandon the attack. In 2019, 10% of enterprises used deception tools and tactics.
Unfortunately, DbD has seen very limited use in securing computer systems, unlike in military science, where Operation Fortitude is a textbook example of how the Allied nations successfully employed military deception in WWII. One of the main reasons for this limited use is the difficulty of applying DbD in an automated way: in the military, deception strategies are ad hoc and designed with specific knowledge of the human adversary and the environment. Unlike military deception, DbD in cybersecurity deals with computer programs, e.g., apps whose malicious intent is difficult to infer from their instructions, and it is not even clear whether these programs behave as adversaries. This is why popular general techniques like honeypots and moving targets are used in computer networks: they allow the intruder to interact with a computing environment so that the intruder's intent can be determined, and in many of these cases the intruder is a human operator. A goal of this session was to determine how DbD can be used in an automated way to secure cybersystems.
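As a concrete, hedged illustration of the honeypot form of DbD (our sketch, not a tool discussed in the session): the decoy below listens on an unused port, presents a fake service banner to create a false belief, and logs every connection attempt, which is the detection payoff of the deception. The port and banner are arbitrary choices made for the example.

import datetime
import socket

HOST, PORT = "0.0.0.0", 2222          # hypothetical decoy port; nothing real runs here
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"   # simulated banner: the "false belief" shown to attackers

def run_honeypot():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(BANNER)          # simulate the fake service
                conn.settimeout(5)
                try:
                    data = conn.recv(1024)    # capture the attacker's first move
                except socket.timeout:
                    data = b""
                # Logging the interaction is the point: speedy detection of the intruder.
                print(f"{datetime.datetime.now().isoformat()} "
                      f"connection from {addr[0]}:{addr[1]} sent {data!r}")

if __name__ == "__main__":
    run_honeypot()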
Location: Potomac IV
5. Formal Methods for Security
Co-Leads: Adam Chlipala (MIT), Deian Stefan (UCSD)
Scribes: Eunsuk Kang (CMU), Andrew Tolmach (Portland State)
Description: Formal methods have seen rapid software-industry adoption in the last 10 years, with interest especially from developers and maintainers of cryptography and distributed systems. The goal of this group was to better understand the best opportunities for cybersecurity impact in the next decade. One element is “matchmaking” from formal-methods techniques to applications in security and privacy. Another element is noticing appealing potential connections between different projects that already use formal methods. A final idea for a cross-cutting topic is best practices for promoting adoption of basic research in this area.
Location: Potomac V
6. Game Theory and Distributed System Security
Co-Leads: Saurabh Bagchi (Purdue University and KeyByte), Kevin Chan (Army Research Lab)
Scribes: Mustafa Abdallah (Purdue University) and Xing Gao (University of Delaware)
Description: There has been significant work on understanding vulnerabilities in large-scale distributed systems and applying technological patches to address specific classes of vulnerabilities. However, this work often lacks an understanding of the impact of cascading attacks, or of mitigations, on the resilience of the overall system. Because many distributed infrastructures are largely legacy systems and budgets are constrained, a complete re-architecting and strengthening of the system is often not possible. Rather, rational decisions need to be made about which parts of the system to strengthen, taking into account the risks and the interdependencies among the assets. While static game theory has been studied extensively for several decades, large-scale distributed systems present critical challenges that preclude the direct application of existing theory. Specifically, there is a need for new techniques that account for both the interdependencies and the dynamical nature of the subsystems. Furthermore, some of these dynamical subsystems may be complex in their own right (e.g., a perception system that employs multi-modal sensors) and may only be representable by simulation models. Can the security community extend traditional game theory to develop tractable analysis and design techniques that can be applied to the security of large-scale interdependent distributed systems? The community can also learn from behavioral economics, where human biases are taken into account in decision making. Can that be incorporated into traditional game theory to understand the effect of biases on security decision making and possible mitigation actions?
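As a toy illustration of why game-theoretic reasoning matters here (our sketch with made-up payoffs, not an example from the session), consider a 2x2 simultaneous game in which a defender chooses which asset to harden and an attacker chooses which asset to hit. Enumerating best responses shows that this particular game has no pure-strategy Nash equilibrium, so a rational defender must randomize; interdependent, dynamic systems only compound this difficulty.

import itertools

# Hypothetical payoffs: rows are defender actions, columns are attacker actions.
defender_actions = ["harden_server", "harden_database"]
attacker_actions = ["attack_server", "attack_database"]
defender_payoff = [[ 0, -5],
                   [-4,  0]]
attacker_payoff = [[-1,  5],
                   [ 4, -1]]

def pure_nash_equilibria():
    """Return all pure-strategy profiles where neither player gains by deviating."""
    equilibria = []
    for d, a in itertools.product(range(2), range(2)):
        best_d = all(defender_payoff[d][a] >= defender_payoff[d2][a] for d2 in range(2))
        best_a = all(attacker_payoff[d][a] >= attacker_payoff[d][a2] for a2 in range(2))
        if best_d and best_a:
            equilibria.append((defender_actions[d], attacker_actions[a]))
    return equilibria

print(pure_nash_equilibria())  # [] -> no pure equilibrium; the defender should randomize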
Location: Potomac VI
7. Improving the Quality and Reuse of Cybersecurity Datasets, Software, and Other Artifacts
Co-Leads: Eric Eide (University of Utah), S. Jay Yang (RIT)
Scribes: David Balenson (SRI International), Laura Tinnel (SRI International)
Description: This breakout group focused on best practices and future approaches to increase and improve the quality and reuse of cybersecurity research datasets, software, and other artifacts. It explored issues, challenges, and opportunities to better package and describe research artifacts, including their intended uses and limitations, and to better share and find relevant cybersecurity research artifacts, along with knowledge and experience about their use, in order to increase the effectiveness of artifacts and broaden their reuse. The group discussed current initiatives, infrastructure, and incentives for producing, sharing, and reusing high-quality artifacts. It also explored and identified other potential solutions and opportunities to make producing and reusing high-quality artifacts a standard practice in both the cybersecurity research and practitioner communities.
Location: Conference Theater
8. Information Integrity
Co-Leads: Kate Starbird (University of Washington), Lisa Fazio (Vanderbilt University), Benjamin Mako Hill (University of Washington)
Scribe: Daniel Votipka (Tufts University)
Description: This breakout group explored the challenges of misinformation, disinformation, and other strategic manipulation of information spaces. We discussed how research in the area of “secure and trustworthy computing” might contribute to the interdisciplinary field emerging around issues of mis-, dis-, and malinformation — and other, related problematic behaviors — online.
Location: Arlington Room
9. NextG and Wireless Security
Co-Leads: Syed Rafiul Hussain (Penn State), Brad Reaves (NC State)
Scribe: Taqi Raza (U. of Arizona)
Description: Next-generation wireless systems, from 5G and beyond, promise unprecedented improvements in throughput, user experience, and security through significant changes in hardware, software, and the underlying wireless communication technologies. These wireless systems are essential infrastructure. However, 5G and beyond combine novel, complex, and often unstudied technologies with demonstrably insecure foundations inherited from prior wireless generations. This breakout determined research needs for securing next-generation wireless systems. The security of related technologies, such as LTE, 5G, cellular-WiFi coexistence, and edge platforms, was also in scope for this discussion.
Location: Washington A
10. Privacy, Policy, and People
Co-Leads: Shomir Wilson (Penn State), Athina Markopoulou (UC Irvine)
Scribes: Umar Iqbal (University of Washington), Rahmadi Trimananda (UC Irvine)
Description: Part of the SaTC community works on developing privacy-enhancing technologies. Data protection and privacy laws (e.g., CCPA in California, GDPR in Europe) and government agencies (e.g., the FTC) deal with similar problems from a policy and regulation perspective. There is currently a gap between the technology and the policy sides of privacy: laws need to be informed by technical expertise to be meaningful, and technical tools need to be developed to audit and enforce compliance with the laws. In addition, there is a gap between the privacy information that consumers receive and their ability to understand and act on this information, leading to the widely recognized failure of the “notice and choice” model of online privacy. In this breakout session, we sought to bring together researchers from both the technical and policy sides of privacy to explore interfaces that bridge these gaps. We also invited participants to discuss efforts to bring understandability, controllability, and machine assistance to privacy notices and settings on the internet. This includes NLP analysis of privacy policies, personalized privacy assistants, auditing of privacy controls and data practices, standards for communicating privacy information, and any related topics.
Location: Washington B
11. Security Education
Co-Leads: Michel Cukier (University of Maryland), Melissa Dark (Dark Enterprises, Inc.), Fan Wu (Tuskegee University)
Scribes: Sarah Elder (NC State), Hossain Shahriar (Kennesaw State University)
Description: This breakout session addressed two of the most important issues facing cybersecurity education today and over the next ten years. These issues need to be deliberated because they have consequences for you, your students, your institutions, and the nation. Workforce composition: more people, more diverse, and more qualified? How can such a workforce composition be achieved? What is happening, and what needs to be created, to build such a workforce? Teaching content: what should we be teaching, and to which students? What will become obsolete? Which material is the most important to teach?
Location: Tidewater II
12. Security for AI
Co-Leads: Alvaro Velasquez (Air Force Research Laboratory), Rickard Ewetz (University of Central Florida), Amro Awad (North Carolina State), Sumit Kumar Jha (University of Texas, San Antonio)
Scribes: Heena Rathore (UT San Antonio), Steven Fernandes (Creighton University)
Description: As AI systems become prevalent and deeply embedded in our daily lives, their security is being challenged by new attack vectors that do not affect traditional software systems like device drivers and kernels. Manipulation of the data used to train AI systems, visually indistinguishable digital perturbations of inputs to AI systems, and physical perturbation of the input scenario being observed by AI are a few of the known methods that have been shown to successfully challenge AI systems. In this breakout session, we discussed the need for new theoretical tools, novel algorithms, as well as applicable data sets, benchmarks, and software systems for protecting AI systems against attacks. We sought answers to the following questions together:
What is security for AI? Why is it relevant to a society that is increasingly reliant on AI systems? How does it lead to a secure and trustworthy cyberspace?
Adversarial attacks in the digital space and in the physical, real-world space have been reported in the literature, at both training time and inference time. Defenses against such attacks are known, but benchmarking them is difficult. What are pointers to efficient attacks and successful defenses? (A minimal sketch of a digital perturbation appears after these questions.)
What are important challenges that remain? Is there a sufficient connection between theoretical robustness results and applied practice in deep learning? Are there promising algorithms and theoretical tools that can have an impact on the security of AI? Are there sufficient software tools from academic groups for creating adversarial attacks or defenses? What are the current promising directions for addressing the security of AI?
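To ground the notion of a visually indistinguishable digital perturbation, here is a minimal, framework-free sketch (our illustration, not a benchmark from the session) of an FGSM-style attack on a hypothetical linear classifier: the gradient of a linear score with respect to the input is just the weight vector, so a small signed step against the true class flips the prediction while changing each feature only slightly.

import numpy as np

# Hypothetical linear "model": score = w @ x + b, class = sign(score).
rng = np.random.default_rng(0)
w = rng.normal(size=100)     # model weights, assumed known to the attacker (white-box setting)
b = 0.1
x = rng.normal(size=100)     # a benign input

def predict(x):
    return np.sign(w @ x + b)

y_true = predict(x)
margin = y_true * (w @ x + b)

# FGSM-style step: for a linear score the gradient w.r.t. x is w, so moving each
# feature by epsilon * sign(w) against the true class reduces the margin the most
# per unit of max-norm change; epsilon is chosen just large enough to cross the boundary.
epsilon = 1.1 * margin / np.sum(np.abs(w))
x_adv = x - y_true * epsilon * np.sign(w)

print("original prediction   :", y_true)
print("adversarial prediction:", predict(x_adv))             # flipped
print("max per-feature change:", np.max(np.abs(x_adv - x)))  # small (equals epsilon)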
Location: Tidewater III
13. Security in a Post-Quantum World
Co-Leads: Edoardo Persichetti (Florida Atlantic University), Shi Bai (Florida Atlantic University), Jean-Francois Biasse (University of South Florida)
Description: The near-entirety of public-key cryptography in use today is based on classical problems such as factoring (RSA) or computing discrete logarithms (Diffie-Hellman, ECDSA, etc.). While these systems constitute an excellent choice in many applications, and their security is well defined and understood, they are vulnerable to attacks performed by an adversary in possession of a quantum computer. Most prominently, Shor in 1994 introduced a polynomial-time quantum algorithm capable of solving both problems, making all current solutions obsolete once a quantum computer with enough computational power is available. Given the recent advances in quantum computing technology, it is plausible that such a scenario will become reality in a relatively short timespan, with a devastating impact on cybersecurity. It is therefore important to develop new schemes and establish alternative cryptographic standards whose security will not be vulnerable to attacks when this scenario becomes real.
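To illustrate why an efficient factoring algorithm breaks RSA, here is a toy sketch with textbook-sized numbers (our example, not part of the session report): trial division stands in for Shor's quantum algorithm, and everything else is the standard RSA key-recovery arithmetic. Real moduli are thousands of bits long, so only the factoring step would change.

# Toy RSA parameters: n = 61 * 53, public exponent e = 17 (textbook example).
n, e = 3233, 17
ciphertext = pow(42, e, n)     # a message (42) encrypted under the public key

def factor(n):
    """Stand-in for a fast factoring oracle; trial division only works for tiny n."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no nontrivial factor found")

p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent recovered from the factorization (Python 3.8+)
print(pow(ciphertext, d, n))   # prints 42: the adversary decrypts without ever holding the key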
The cryptographic community is hard at work in the area now known as Post-Quantum Cryptography (PQC), studying solutions for secure communication in the presence of adversaries equipped with sizable quantum computing power. The main tools include alternative problems coming primarily from the theory of lattices, coding theory, multivariate polynomials, isogenies of elliptic curves, and others. The effort is bolstered by the recent call for post-quantum standardization issued by NIST in 2017, now approaching a first major turning point: at the end of Round 3, NIST expects a small set of algorithms to be ready for standardization. This does not conclude the effort, however, as other algorithms are still under evaluation (and will undergo a fourth round of scrutiny); moreover, NIST has mentioned that the call will be partially reopened to allow for new signature algorithms, given that the situation with this type of primitive is not fully satisfactory.
Location: Capitol Room (Independence Level)
14. Software and Hardware Supply Chain Security
Co-Leads: Navid Asadi (University of Florida), Akond Rahman (Tennessee Tech), Laurie Williams (North Carolina State University)
Scribe: Tamzidul Hoque (University of Kansas)
Description: Attackers are increasingly using the software and hardware supply chain as an attack vector. This breakout group focused on identifying similarities and differences between software and hardware supply chain security so that the security community can share a common vocabulary, leverage research in overlapping areas, and consider the convergence of software and hardware supply chains into a joint threat model. We also discussed detection techniques and countermeasures for supply chain attacks.
Location: Plenary Room (Regency Ballroom)