Biblio

Filters: Keyword is Scalability and Composability
2022-03-07
Ruturaj Vaidya, Prasad A. Kulkarni, Michael R. Jantz.  2021.  Explore Capabilities and Effectiveness of Reverse Engineering Tools to Provide Memory Safety for Binary Programs. Information Security Practice and Experience. :11–31.
Any technique to ensure memory safety requires knowledge of (a) precise array bounds and (b) the data types accessed by memory load/store and pointer move instructions (called owners) in the program. While this information can be effectively derived by compiler-level approaches, much of it may be lost during the compilation process and become unavailable to binary-level tools. In this work we conduct the first detailed study of how accurately this information can be extracted or reconstructed by current state-of-the-art static reverse engineering (RE) platforms, for binaries compiled with and without debug symbol information. Furthermore, it is unclear how the imprecision in the array bounds and instruction-owner information obtained by RE tools impacts the ability of downstream techniques to detect illegal memory accesses at run time. We study this issue by designing, building, and deploying a novel binary-level technique in which the information provided by the static RE algorithms in the first stage guides run-time instrumentation to detect illegal memory accesses in a decoupled second stage. Our work explores the limitations and challenges static binary analysis tools face in supporting accurate binary-level techniques to detect memory errors.
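To make the decoupled two-stage design concrete, here is a minimal sketch; the data model and names are hypothetical stand-ins for exposition, not the authors' tool. A static stage supplies recovered bounds per instruction owner, and a runtime stage checks each access against them.

    # Hypothetical sketch of the two-stage idea described above.
    from dataclasses import dataclass

    @dataclass
    class Bounds:
        base: int   # lowest valid address of the accessed object
        size: int   # extent in bytes, as recovered by the RE platform

    # Stage 1 output: instruction address -> bounds of the object that
    # instruction is believed to access (may be imprecise or missing).
    static_bounds = {0x401000: Bounds(base=0x7ffe0000, size=64)}

    def check_access(insn_addr: int, addr: int, width: int) -> bool:
        """Stage 2: flag accesses outside the recovered bounds."""
        b = static_bounds.get(insn_addr)
        if b is None:
            return True  # no information: imprecision forces us to allow it
        return b.base <= addr and addr + width <= b.base + b.size

    assert check_access(0x401000, 0x7ffe0000, 8)      # in bounds
    assert not check_access(0x401000, 0x7ffe0040, 1)  # one past the end
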
2021-04-18
Adam Petz, Perry Alexander.  2021.  An Infrastructure for Faithful Execution of Remote Attestation Protocols. NASA Formal Methods Symposium (NFM’21).

Experience shows that even with a well-intentioned user at the keyboard, a motivated attacker can compromise a computer system at a layer below or adjacent to the shallow forms of authentication that are now accepted as commonplace [3]. Therefore, rather than asking "Can we trust the person behind the keyboard?", a better question might be: "Can we trust the computer system underneath?" An emerging technology for gaining trust in a remote computing system is remote attestation: the activity of making a claim about properties of a target by supplying evidence to an appraiser over a network [2]. Although many existing approaches to remote attestation wisely adopt a layered architecture, where the bottom layers measure the layers above, the dependencies between components remain static and measurement orderings fixed. For modern computing environments with diverse topologies, we can no longer fix a target architecture any more than we can fix a protocol to measure that architecture. Copland [1] is a domain-specific language and formal framework that provides a vocabulary for specifying the goals of layered attestation protocols. It also provides a reference semantics that characterizes system measurement events and evidence handling, giving a foundation for comparing protocol alternatives. The aim of this work is to refine the Copland semantics to a more fine-grained notion of attestation manager execution: a high-privilege thread of control responsible for invoking attestation services and bundling evidence results. This refinement consists of two cooperating components called the Copland Compiler and the Attestation Virtual Machine (AVM). The Copland Compiler translates a Copland protocol description into a sequence of primitive attestation instructions to be executed in the AVM. When considered in combination with advances in virtualization, trusted hardware, and high-assurance system software components, such as compilers, file systems, and OS kernels, a formally verified remote attestation infrastructure creates exciting opportunities for building system-level security arguments.
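As a rough illustration of the Compiler/AVM split, consider the toy sketch below; the term shapes and instruction set are invented for exposition and are not real Copland syntax.

    # Toy rendering of "compile a protocol phrase, execute it in a VM,
    # bundle evidence in order". Not actual Copland.
    from dataclasses import dataclass

    @dataclass
    class Measure:          # ask a place to measure a target
        place: str
        target: str

    @dataclass
    class Seq:              # run two protocol phrases in order
        first: object
        second: object

    def compile_term(term) -> list:
        """Copland-Compiler analogue: flatten a phrase into instructions."""
        if isinstance(term, Measure):
            return [("MEAS", term.place, term.target)]
        if isinstance(term, Seq):
            return compile_term(term.first) + compile_term(term.second)
        raise ValueError("unknown term")

    def run_avm(instrs) -> list:
        """AVM analogue: execute instructions, bundling evidence results."""
        evidence = []
        for op, place, target in instrs:
            evidence.append(f"{op}:{place}/{target}")  # stand-in measurement
        return evidence

    prog = Seq(Measure("p0", "kernel"), Measure("p1", "vm"))
    print(run_avm(compile_term(prog)))  # ['MEAS:p0/kernel', 'MEAS:p1/vm']
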

2019-10-10
Joel Reardon, Álvaro Feal, Primal Wijesekera, Amit Elazari Bar On, Narseo Vallina-Rodriguez, Serge Egelman.  2019.  50 Ways to Leak Your Data: An Exploration of Apps’ Circumvention of the Android Permissions System. 28th USENIX Security Symposium (USENIX Security 19). :603–620.

Modern smartphone platforms implement permission-based models to protect access to sensitive data and system resources. However, apps can circumvent the permission model and gain access to protected data without user consent by using both covert and side channels. Side channels present in the implementation of the permission system allow apps to access protected data and system resources without permission; whereas covert channels enable communication between two colluding apps so that one app can share its permission-protected data with another app lacking those permissions. Both pose threats to user privacy.

In this work, we make use of our infrastructure that runs hundreds of thousands of apps in an instrumented environment. This testing environment includes mechanisms to monitor apps' runtime behaviour and network traffic. We look for evidence of side and covert channels being used in practice by searching for sensitive data sent over the network by apps that lacked permission to access it. We then reverse engineer the apps and third-party libraries responsible for this behaviour to determine how the unauthorized access occurred. We also use software fingerprinting methods to measure the static prevalence of the technique that we discover among other apps in our corpus.

Using this testing environment and method, we uncovered a number of side and covert channels in active use by hundreds of popular apps and third-party SDKs to obtain unauthorized access to both unique identifiers as well as geolocation data. We have responsibly disclosed our findings to Google and have received a bug bounty for our work.

Alisa Frik, Leysan Nurgalieva, Julia Bernd, Joyce Lee, Florian Schaub, Serge Egelman.  2019.  Privacy and Security Threat Models and Mitigation Strategies of Older Adults. Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019). :21–40.

Older adults (65+) are becoming primary users of emerging smart systems, especially in health care. However, these technologies are often not designed for older users and can pose serious privacy and security concerns due to their novelty, complexity, and propensity to collect and communicate vast amounts of sensitive information. Efforts to address such concerns must build on an in-depth understanding of older adults' perceptions and preferences about data privacy and security for these technologies, and must account for variance in physical and cognitive abilities. In semi-structured interviews with 46 older adults, we identified a range of complex privacy and security attitudes and needs specific to this population, along with common threat models, misconceptions, and mitigation strategies. Our work adds depth to current models of how older adults' limited technical knowledge, experience, and age-related declines in ability amplify vulnerability to certain risks; we found that health, living situation, and finances play a notable role as well. We also found that older adults often experience usability issues or technical uncertainties in mitigating those risks -- and that managing privacy and security concerns frequently consists of limiting or avoiding technology use. We recommend educational approaches and usable technical protections that build on seniors' preferences.

2020-02-05
Nathan Malkin, Serge Egelman, David Wagner.  2019.  Privacy Controls for Always-Listening Devices. New Security Paradigms Workshop (NSPW).

Intelligent voice assistants (IVAs) and other voice-enabled devices already form an integral component of the Internet of Things and will continue to grow in popularity. As their capabilities evolve, they will move beyond relying on the wake-words today's IVAs use, engaging instead in continuous listening. Though potentially useful, the continuous recording and analysis of speech can pose a serious threat to individuals' privacy. Ideally, users would be able to limit or control the types of information such devices have access to. But existing technical approaches are insufficient for enforcing any such restrictions. To begin formulating a solution, we develop a systematic methodology for studying continuous-listening applications and survey architectural approaches to designing a system that enhances privacy while preserving the benefits of always-listening assistants.

2019-07-11
Yan Shvartzshnaider, Zvonimir Pavlinovic, Ananth Balashankar, Thomas Wies, Lakshminarayanan Subramanian, Helen Nissenbaum, Prateek Mittal.  2019.  VACCINE: Using Contextual Integrity For Data Leakage Detection. The World Wide Web Conference. :1702–1712.

Modern enterprises rely on Data Leakage Prevention (DLP) systems to enforce privacy policies that prevent unintentional flow of sensitive information to unauthorized entities. However, these systems operate based on rule sets that are limited to syntactic analysis and therefore completely ignore the semantic relationships between participants involved in the information exchanges. For similar reasons, these systems cannot enforce complex privacy policies that require temporal reasoning about events that have previously occurred. 

To address these limitations, we advocate a new design methodology for DLP systems centered on the notion of Contextual Integrity (CI). We use the CI framework to abstract real-world communication exchanges into formally defined information flows where privacy policies describe sequences of admissible flows. CI allows us to decouple (1) the syntactic extraction of flows from information exchanges, and (2) the enforcement of privacy policies on these flows. We applied this approach to build VACCINE, a DLP auditing system for emails. VACCINE uses state-of-the-art techniques in natural language processing to extract flows from email text. It also provides a declarative language for describing privacy policies. These policies are automatically compiled to operational rules that the system uses for detecting data leakages. We evaluated VACCINE on the Enron email corpus and show that it improves over the state of the art both in the expressivity of the policies that DLP systems can enforce and in its precision in detecting data leakages.
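As a sketch of the CI abstraction, a flow can be modeled as a tuple and a policy as a predicate over the sequence of flows seen so far; the field names and the example temporal rule below are illustrative, not VACCINE's policy language.

    # Minimal CI-style auditing sketch under assumed field names.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        sender: str
        receiver: str
        attribute: str   # e.g. "salary", "medical-record"
        context: str     # e.g. "hr", "public"

    def no_hr_leak(history: list, flow: Flow) -> bool:
        """Example temporal rule: salary data may leave the 'hr' context
        only if the receiver previously got an authorization flow."""
        if flow.attribute == "salary" and flow.context != "hr":
            return any(f.attribute == "authorization" and
                       f.receiver == flow.receiver for f in history)
        return True

    def audit(flows, policy) -> list:
        history, leaks = [], []
        for f in flows:
            if not policy(history, f):
                leaks.append(f)
            history.append(f)
        return leaks

    print(audit([Flow("alice", "bob", "salary", "public")], no_hr_leak))
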

2020-10-30
David Fridovich-Keil, Andrea Bajcsy, Jaime Fisac, Sylvia Herbert, Steven Wang, Anca Dragan, Claire J. Tomlin.  2019.  Confidence-aware motion prediction for real-time collision avoidance. The International Journal of Robotics Research. 39(2-3):250-265.

One of the most difficult challenges in robot motion planning is to account for the behavior of other moving agents, such as humans. Commonly, practitioners employ predictive models to reason about where other agents are going to move. Though there has been much recent work in building predictive models, no model is ever perfect: an agent can always move unexpectedly, in a way that is not predicted or not assigned sufficient probability. In such cases, the robot may plan trajectories that appear safe but, in fact, lead to collision. Rather than trust a model’s predictions blindly, we propose that the robot should use the model’s current predictive accuracy to inform the degree of confidence in its future predictions. This model confidence inference allows us to generate probabilistic motion predictions that exploit modeled structure when the structure successfully explains human motion, and degrade gracefully whenever the human moves unexpectedly. We accomplish this by maintaining a Bayesian belief over a single parameter that governs the variance of our human motion model. We couple this prediction algorithm with a recently proposed robust motion planner and controller to guide the construction of robot trajectories that are, to a good approximation, collision-free with a high, user-specified probability. We provide extensive analysis of the combined approach and its overall safety properties by establishing a connection to reachability analysis, and conclude with a hardware demonstration in which a small quadcopter operates safely in the same space as a human pedestrian.
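A minimal sketch of the confidence-inference step, assuming a discrete grid over one Gaussian variance parameter; the grid values and error sequence are made-up illustrations, not the paper's implementation.

    # Discrete Bayesian belief over a single variance parameter of a
    # Gaussian human-motion model, updated from observed prediction errors.
    import numpy as np

    sigmas = np.array([0.1, 0.3, 1.0, 3.0])      # candidate model variances
    belief = np.ones_like(sigmas) / len(sigmas)  # uniform prior

    def update(belief, error):
        """Bayes update: how well does each candidate variance explain the
        deviation between predicted and observed human position?"""
        lik = np.exp(-0.5 * (error / sigmas) ** 2) / sigmas
        post = belief * lik
        return post / post.sum()

    for err in [0.05, 0.08, 2.5]:   # human suddenly moves unexpectedly
        belief = update(belief, err)

    # Predictive variance grows when the model stops explaining the motion,
    # so planning degrades gracefully instead of trusting blindly.
    print(sigmas @ belief)
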

2018-07-09
Anirudh Narasimman, Qiaozhi Wang, Fengjun Li, Dongwon Lee, Bo Luo.  2019.  Arcana: Enabling Private Posts on Public Microblog Platforms. 34th International Information Security and Privacy Conference (IFIP SEC).

Many popular online social networks, such as Twitter, Tumblr, and Sina Weibo, adopt privacy models that are too simple to satisfy users' diverse needs for privacy protection. In platforms with no (i.e., completely open) or binary (i.e., "public" and "friends-only") access control, users cannot control the dissemination boundary of the content they share. For instance, on Twitter, tweets in "public" accounts are accessible to everyone including search engines, while tweets in "protected" accounts are visible to all the followers. In this work, we present Arcana to enable fine-grained access control for social network content sharing. In particular, we target the Twitter platform and introduce the "private tweet" function, which allows users to disseminate particular tweets to designated group(s) of followers. Arcana employs Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to implement social circle detection and private tweet encryption, so that access-controlled tweets are only readable by designated recipients. To be stealthy, Arcana further embeds the protected content as digital watermarks in image tweets. We have implemented the Arcana prototype as a Chrome browser plug-in, and demonstrated its flexibility and effectiveness. Different from existing approaches that require trusted third parties or an additional server/broker/mediator, Arcana is lightweight and completely transparent to Twitter: all the communications, including key distribution and private tweet dissemination, are exchanged as Twitter messages. Therefore, with small API modifications, Arcana could be easily ported to other online social networking platforms to support fine-grained access control.
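The end-to-end flow might be pictured as below; the CP-ABE and watermarking primitives here are hypothetical stubs standing in for a real attribute-based encryption library, not Arcana's implementation.

    # Illustrative flow only: encrypt under an attribute policy, embed in
    # an image tweet, and decrypt only for recipients with matching roles.
    import base64

    def abe_encrypt(plaintext, policy):
        """Stub for CP-ABE: bind ciphertext to a policy such as
        'family OR close-friend'. A real system would use a CP-ABE scheme."""
        return base64.b64encode(f"{policy}|{plaintext}".encode())

    def abe_decrypt(ct, attributes):
        policy, text = base64.b64decode(ct).decode().split("|", 1)
        return text if any(a in policy for a in attributes) else None

    def embed_in_image_tweet(image, payload):
        """Stub watermark: a real system hides ciphertext steganographically."""
        return image + b"::" + payload

    ct = abe_encrypt("meet at 6", "family OR close-friend")
    tweet = embed_in_image_tweet(b"<jpeg bytes>", ct)
    print(abe_decrypt(ct, {"family"}))    # 'meet at 6'
    print(abe_decrypt(ct, {"coworker"}))  # None: not a designated recipient
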

2019-04-15
Adam Petz, Perry Alexander.  2019.  A Copland Attestation Manager. Hot Topics in Science of Security (HoTSoS'19).
John Ramsdell, Paul Rowe, Perry Alexander, Sarah Helble, Peter Loscocco, J. Aaron Pendergrass, Adam Petz.  2019.  Orchestrating Layered Attestations. Principles of Security and Trust (POST’19). 11426:197-221.

We present Copland, a language for specifying layered attestations. Layered attestations provide a remote appraiser with structured evidence of the integrity of a target system to support a trust decision. The language is designed to bridge the gap between formal analysis of attestation security guarantees and concrete implementations. We therefore provide two semantic interpretations of terms in our language. The first is a denotational semantics in terms of partially ordered sets of events. This directly connects Copland to prior work on layered attestation. The second is an operational semantics detailing how the data and control flow are executed. This gives explicit implementation guidance for attestation frameworks. We show a formal connection between the two semantics ensuring that any execution according to the operational semantics is consistent with the denotational event semantics. This ensures that formal guarantees resulting from analyzing the event semantics will hold for executions respecting the operational semantics. All results have been formally verified with the Coq proof assistant.

2020-03-31
Madiha Tabassum, Tomasz Kosiński, Alisa Frik, Nathan Malkin, Primal Wijesekera, Serge Egelman, Heather Lipford.  2019.  Investigating Users’ Preferences and Expectations for Always-Listening Voice Assistants. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3(4):23.

Many consumers now rely on different forms of voice assistants, both stand-alone devices and those built into smartphones. Currently, these systems react to specific wake-words, such as "Alexa," "Siri," or "Ok Google." However, with advancements in natural language processing, the next generation of voice assistants could instead always listen to the acoustic environment and proactively provide services and recommendations based on conversations without being explicitly invoked. We refer to such devices as "always listening voice assistants" and explore expectations around their potential use. In this paper, we report on a 178-participant survey investigating the potential services people anticipate from such a device and how they feel about sharing their data for these purposes. Our findings reveal that participants can anticipate a wide range of services pertaining to a conversation; however, most of the services are very similar to those that existing voice assistants currently provide with explicit commands. Participants are more likely to consent to share a conversation when they do not find it sensitive, they are comfortable with the service and find it beneficial, and when they already own a stand-alone voice assistant. Based on our findings we discuss the privacy challenges in designing an always-listening voice assistant.

Primal Wijesekera.  2018.  Contextual permission models for better privacy protection. Electronic Theses and Dissertations (ETDs) 2008+.

Despite corporate cyber intrusions attracting all the attention, privacy breaches that we, as ordinary users, should be worried about occur every day without any scrutiny. Smartphones, a household item, have inadvertently become a major enabler of privacy breaches. Smartphone platforms use permission systems to regulate access to sensitive resources. These permission systems, however, lack the ability to understand users’ privacy expectations, leaving a significant gap between how permission models behave and how users would want the platform to protect their sensitive data. This dissertation provides an in-depth analysis of how users make privacy decisions in the context of smartphones and how platforms can accommodate users’ privacy requirements systematically. We first performed a 36-person field study to quantify how often applications access protected resources when users are not expecting it. We found that when the application requesting the permission is running invisibly to the user, users are more likely to deny the application access to protected resources. At least 80% of our participants would have preferred to prevent at least one permission request. To explore the feasibility of predicting users’ privacy decisions based on their past decisions, we performed a longitudinal 131-person field study. Based on the data, we built a classifier to make privacy decisions on the user’s behalf by detecting when the context has changed and inferring privacy preferences based on the user’s past decisions. We showed that our approach can accurately predict users’ privacy decisions 96.8% of the time, which is an 80% reduction in error rate compared to current systems. Based on these findings, we developed a custom Android version with a contextually aware permission model. The new model guards resources based on the user’s past decisions under similar contextual circumstances. We performed a 38-person field study to measure the efficiency and usability of the new permission model. Based on exit interviews and 5M data points, we found that the new system is effective in reducing potential violations by 75%. Despite being significantly more restrictive than the default permission system, participants did not find the new model to cause any usability issues in terms of application functionality.
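A toy version of the decision-prediction step is sketched below; the feature encoding, data, and choice of classifier are stand-ins for illustration, not the dissertation's actual pipeline.

    # Predict allow/deny from contextual features such as whether the
    # requesting app is visible, the permission, and the foreground app.
    from sklearn.ensemble import RandomForestClassifier

    # rows: [app_visible (0/1), permission_id, foreground_app_id]
    X = [[1, 0, 2], [0, 0, 2], [1, 1, 3], [0, 1, 3], [0, 2, 1]]
    y = [1, 0, 1, 0, 0]   # 1 = user allowed, 0 = user denied

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # When the context changes (app now invisible), the model predicts the
    # user's likely decision instead of silently granting the permission.
    print(clf.predict([[0, 0, 2]]))   # likely deny
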

Irwin Reyes, Primal Wijesekera, Joel Reardon, Amit Elazari, Abbas Razaghpanah, Narseo Vallina-Rodriguez, Serge Egelman.  2018.  “Won’t Somebody Think of the Children?” Examining COPPA Compliance at Scale. Proceedings on Privacy Enhancing Technologies. 2018:63-83.

We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children’s apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.
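The last finding can be phrased as a simple check over observed transmissions; the identifier names and flow record below are illustrative only, not the framework's detection logic.

    # Does a transmission carrying the resettable advertising ID also carry
    # a persistent identifier, negating the reset?
    PERSISTENT_IDS = {"imei", "android_id", "mac", "serial"}

    def negates_ad_id_reset(flow: dict) -> bool:
        keys = {k.lower() for k in flow}
        return "ad_id" in keys and bool(keys & PERSISTENT_IDS)

    flow = {"ad_id": "5f3a...", "android_id": "89ab...", "app": "com.kids.game"}
    print(negates_ad_id_reset(flow))   # True: resetting the ad ID achieves nothing
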

2019-01-24
Paulette Koronkevich.  2018.  Obsidian in the Rough: A Case Study Evaluation of a New Blockchain Programming Language. The ACM SIGPLAN conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH).

Blockchains are one solution for secure distributed interaction, but security vulnerabilities have already been exposed in existing programs. Obsidian, a new blockchain programming language, seeks to prevent some of these vulnerabilities using typestate and linearity. We evaluate the current design of Obsidian by implementing a blockchain application for parametric insurance as a case study. We compare this implementation to one written in Solidity, and find that Obsidian can provide stronger safety guarantees.

Michael Coblenz, Jonathan Aldrich, Bradley Myers, Joshua Sunshine.  2018.  Interdisciplinary programming language design. Onward! 2018 Proceedings of the 2018 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software.

Approaches for programming language design used commonly in the research community today center around theoretical and performance-oriented evaluation. Recently, researchers have been considering more approaches to language design, including the use of quantitative and qualitative user studies that examine how different designs might affect programmers. In this paper, we argue for an interdisciplinary approach that incorporates many different methods in the creation and evaluation of programming languages. We argue that the addition of user-oriented design techniques can be helpful at many different stages in the programming language design process.

2019-01-10
Christopher Hannon, Illinois Institute of Technology, Nandakishore Santhi, Los Alamos National Laboratory, Stephan Eidenbenz, Los Alamos National Laboratory, Jason Liu, Florida International University, Dong Jin, Illinois Institute of Technology.  2018.  Just-In-Time Parallel Simulation. 2018 Winter Simulation Conference (WSC).

Due to the evolution of programming languages, interpreted languages have gained widespread use in scientific and research computing. Interpreted languages excel at being portable and easy to use, and are faster for prototyping than their ahead-of-time (AOT) counterparts, including C, C++, and Fortran. While traditionally considered slow to execute, advancements in Just-in-Time (JIT) compilation techniques have significantly improved the execution speed of interpreted languages, which in some cases outperform AOT languages. In this paper, we explore some challenges and design strategies in developing a high performance parallel discrete event simulation engine, called Simian, written in interpreted languages with JIT capabilities, including Python, Lua, and Javascript. Our results show that Simian with JIT performs similarly to AOT simulators, such as MiniSSF and ROSS. We expect that with features like good performance, user-friendliness, and portability, just-in-time parallel simulation will become a common choice for modeling and simulation in the near future.
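For reference, the core loop such an engine is built around is a priority-queue-driven discrete event simulation; this sequential sketch omits Simian's parallelism and JIT-specific structure and is only the basic idea.

    # Minimal sequential discrete-event engine: a clock plus a time-ordered
    # event queue.
    import heapq
    import itertools

    class Engine:
        def __init__(self):
            self.now = 0.0
            self._queue = []
            self._tick = itertools.count()  # tie-breaker for equal timestamps

        def schedule(self, delay, handler, *args):
            heapq.heappush(self._queue,
                           (self.now + delay, next(self._tick), handler, args))

        def run(self, until):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, handler, args = heapq.heappop(self._queue)
                handler(*args)

    eng = Engine()
    def ping(n):
        if n:
            print(f"t={eng.now:.1f} ping {n}")
            eng.schedule(1.5, ping, n - 1)
    eng.schedule(0.0, ping, 3)
    eng.run(until=10.0)
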

2018-07-13
Yangfend Qu, Illinois Institute of Technology, Xin Liu, Illinois Institute of Technology, Dong Jin, Illinois Institute of Technology, Yuan Hong, Illinois Institute of Technology, Chen Chen, Argonne National Laboratory.  2018.  Enabling a Resilient and Self-healing PMU Infrastructure Using Centralized Network Control. 2018 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization.

Many of the emerging wide-area monitoring protection and control (WAMPAC) applications in modern electrical grids rely heavily on the availability and integrity of widespread phasor measurement unit (PMU) data. Therefore, it is critical to protect PMU networks against growing cyber-attacks and system faults. In this paper, we present a self-healing PMU network design that considers both power system observability and communication network characteristics. Our design utilizes centralized network control, such as the emerging software-defined networking (SDN) technology, to design resilient network self-healing algorithms against cyber-attacks. Upon detection of a cyber-attack, the PMU network can reconfigure itself to isolate compromised devices and re-route measurement data with the goal of preserving the power system observability. We have developed a proof-of-concept system in a container-based network testbed using integer linear programming to solve a graph-based PMU system model. We also evaluate the system performance regarding the self-healing plan generation and installation using the IEEE 30-bus system.
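A toy version of the graph-based integer-linear-programming step is sketched below using the PuLP solver; the topology, names, and objective are illustrative assumptions, not the paper's model.

    # After isolating compromised PMUs, pick a set of remaining PMUs that
    # keeps every bus observable, via integer linear programming.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    observes = {            # pmu -> buses it can observe
        "pmu1": {"b1", "b2"},
        "pmu2": {"b2", "b3"},
        "pmu3": {"b1", "b3"},
    }
    compromised = {"pmu2"}  # isolated after attack detection
    buses = {"b1", "b2", "b3"}

    prob = LpProblem("self_healing", LpMinimize)
    keep = {p: LpVariable(p, cat="Binary")
            for p in observes if p not in compromised}

    prob += lpSum(keep.values())                   # minimal active set
    for b in buses:                                # observability constraint
        prob += lpSum(keep[p] for p in keep if b in observes[p]) >= 1

    prob.solve()
    print([p for p, v in keep.items() if v.value() == 1])  # ['pmu1', 'pmu3']
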

2017-07-11
Mahmoud Hammad, Hamid Bagheri, Sam Malek.  2017.  DELDroid: Determination and Enforcement of Least-Privilege Architecture in Android. 2017 IEEE International Conference on Software Architecture.

Modern mobile platforms rely on a permission model to guard the system's resources and apps. In Android, since the permissions are granted at the granularity of apps, and all components belonging to an app inherit those permissions, an app's components are typically over-privileged, i.e., components are granted more privileges than they need to complete their tasks. Systematic violation of the least-privilege principle in Android has been shown to be the root cause of many security vulnerabilities. To mitigate this issue, we have developed DELDROID, an automated system for determination of the least-privilege architecture in Android and its enforcement at runtime. A key contribution of our approach is the ability to limit the privileges granted to apps without the need to modify them. DELDROID utilizes static program analysis techniques to extract the exact privileges each component needs for providing its functionality. A Multiple-Domain Matrix representation of the system's architecture is then used to automatically analyze the security posture of the system and derive its least-privilege architecture. Our experiments on hundreds of real-world apps corroborate DELDROID's ability to effectively establish the least-privilege architecture and its benefits in alleviating security threats.
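The determination step can be pictured as simple set arithmetic between app-granted permissions and the component-level needs reported by static analysis; the component and permission names below are illustrative, not DELDROID's internals.

    # Compare app-granularity grants with per-component needs and emit the
    # least-privilege assignment plus the over-privilege it eliminates.
    granted = {"app": {"INTERNET", "SEND_SMS", "READ_CONTACTS"}}

    # What static analysis found each component actually using:
    needs = {
        "MainActivity": {"INTERNET"},
        "SyncService":  {"READ_CONTACTS"},
    }

    least_privilege = {c: needs[c] & granted["app"] for c in needs}
    over_privilege = {c: granted["app"] - needs[c] for c in needs}

    print(least_privilege)   # each component keeps only what it uses
    print(over_privilege)    # e.g. SyncService never needs SEND_SMS
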

Tingting Yu, Witawas Srisa-an, Gregg Rothermel.  2017.  An automated framework to support testing for process-level race conditions. Software Testing, Verification and Reliability.

Race conditions are difficult to detect because they usually occur only under specific execution interleavings. Numerous program analysis and testing techniques have been proposed to detect race conditions between threads on single applications. However, most of these techniques neglect races that occur at the process level due to complex system event interactions. This article presents a framework, SIMEXPLORER, that allows engineers to effectively test for process-level race conditions. SIMEXPLORER first uses dynamic analysis techniques to observe system execution, identify program locations of interest, and report faults related to oracles. Next, it uses virtualization to achieve the fine-grained controllability needed to exercise event interleavings that are likely to expose races. We evaluated the effectiveness of SIMEXPLORER on 24 real-world applications containing both known and unknown process-level race conditions. Our results show that SIMEXPLORER is effective at detecting these race conditions, while incurring an overhead that is acceptable given its effectiveness improvements.

Junjie Qian, Hong Jiang, Witawas Srisa-an, Sharad Seth.  2017.  Energy-efficient I/O Thread Schedulers for NVMe SSDs on NUMA. CCGrid '17 Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing.

Non-volatile memory express (NVMe) based SSDs and the NUMA platform are widely adopted in servers to achieve faster storage speed and more powerful processing capability. As of now, very little research has been conducted to investigate the performance and energy efficiency of the state-of-the-art NUMA architecture integrated with NVMe SSDs, an emerging technology used to host parallel I/O threads. As this technology continues to be widely developed and adopted, we need to understand the runtime behaviors of such systems in order to design software runtime systems that deliver optimal performance while consuming only the necessary amount of energy. This paper characterizes the runtime behaviors of a Linux-based NUMA system employing multiple NVMe SSDs. Our comprehensive performance and energy-efficiency study using massive numbers of parallel I/O threads shows that the penalty due to CPU contention is much smaller than that due to remote access of NVMe SSDs. Based on this insight, we develop a dynamic “lesser evil” algorithm called ESN, to minimize the impact of these two types of penalties. ESN is an energy-efficient profiling-based I/O thread scheduler for managing I/O threads accessing NVMe SSDs on NUMA systems. Our empirical evaluation shows that ESN can achieve optimal I/O throughput and latency while consuming up to 50% less energy and using fewer CPUs.
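A toy rendering of the "lesser evil" placement choice follows; all numbers are made-up profiling inputs and the heuristic is a simplification for illustration, not ESN's actual algorithm.

    # Given profiled penalties, co-locating an I/O thread with the SSD's
    # NUMA node (risking CPU contention) beats remote SSD access when the
    # contention penalty is smaller.
    def place_thread(local_load: float, contention_penalty: float,
                     remote_penalty: float) -> str:
        # Profiling says remote NVMe access costs more than CPU contention,
        # so prefer the SSD-local node unless it is heavily oversubscribed.
        if local_load * contention_penalty < remote_penalty:
            return "ssd-local node"
        return "remote node"

    print(place_thread(local_load=1.2, contention_penalty=0.1, remote_penalty=0.4))
    print(place_thread(local_load=8.0, contention_penalty=0.1, remote_penalty=0.4))
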

2017-07-12
Gabriel Ferreira.  2017.  Software certification in practice: how are standards being applied? ICSE-C '17 Proceedings of the 39th International Conference on Software Engineering Companion.

Certification schemes exist to regulate software systems and prevent them from being deployed before they are judged fit to use. However, practitioners are often unsatisfied with the efficiency of certification standards and processes. In this study, we analyzed two certification standards, Common Criteria and DO-178C, and collected insights from literature and from interviews with subject-matter experts to identify concepts affecting the efficiency of certification processes. Our results show that evaluation time, reusability of evaluation artifacts, and composition of systems and certified artifacts are barriers to achieve efficient certification.

2017-07-11
Cyrus Omar, Ian Voysey, Michael Hilton, Joshua Sunshine, Claire Le Goues, Jonathan Aldrich, Matthew Hammer.  2017.  Toward Semantic Foundations for Program Editors. 2nd Summit on Advances in Programming Languages (SNAPL 2017).

Programming language definitions assign formal meaning to complete programs. Programmers, however, spend a substantial amount of time interacting with incomplete programs -- programs with holes, type inconsistencies and binding inconsistencies -- using tools like program editors and live programming environments (which interleave editing and evaluation). Semanticists have done comparatively little to formally characterize (1) the static and dynamic semantics of incomplete programs; (2) the actions available to programmers as they edit and inspect incomplete programs; and (3) the behavior of editor services that suggest likely edit actions to the programmer based on semantic information extracted from the incomplete program being edited, and from programs that the system has encountered in the past. As such, each tool designer has largely been left to develop their own ad hoc heuristics. 
This paper serves as a vision statement for a research program that seeks to develop these "missing" semantic foundations. Our hope is that these contributions, which will take the form of a series of simple formal calculi equipped with a tractable metatheory, will guide the design of a variety of current and future interactive programming tools, much as various lambda calculi have guided modern language designs. Our own research will apply these principles in the design of Hazel, an experimental live lab notebook programming environment designed for data science tasks. We plan to co-design the Hazel language with the editor so that we can explore concepts such as edit-time semantic conflict resolution mechanisms and mechanisms that allow library providers to install library-specific editor services.

Darya Melicher (Kurilova), Yangqingwei Shi, Alex Potanin, Jonathan Aldrich.  2017.  A Capability-Based Module System for Authority Control. European Conference on Object-Oriented Programming (ECOOP).

The principle of least authority states that each component of the system should be given authority to access only the information and resources that it needs for its operation. This principle is fundamental to the secure design of software systems, as it helps to limit an application’s attack surface and to isolate vulnerabilities and faults. Unfortunately, current programming languages do not provide adequate help in controlling the authority of application modules, an issue that is particularly acute in the case of untrusted third-party extensions. In this paper, we present a language design that facilitates controlling the authority granted to each application module. The key technical novelty of our approach is that modules are first-class, statically typed capabilities. First-class modules are essentially objects, and so we formalize our module system by translation into an object calculus and prove that the core calculus is type-safe and authority-safe. Unlike prior formalizations, our work defines authority non-transitively, allowing engineers to reason about software designs that use wrappers to provide an attenuated version of a more powerful capability. Our approach allows developers to determine a module’s authority by examining the capabilities passed as module arguments when the module is created, or delegated to the module later during execution. The type system facilitates this by identifying which objects provide capabilities to sensitive resources, and by enabling security architects to examine the capabilities passed into and out of a module based only on the module’s interface, without needing to examine the module’s implementation code. An implementation of the module system and illustrative examples in the Wyvern programming language suggest that our approach can be a practical way to control module authority.
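The attenuation idea translates naturally into any object language; below is a minimal Python sketch with an illustrative API (not Wyvern) in which a module's authority is exactly the capabilities handed to it.

    # Non-transitive authority via attenuation: a wrapper passes on only
    # part of a powerful capability.
    class FileSystem:                        # the powerful capability
        def read(self, path):  return f"contents of {path}"
        def write(self, path, data):  print(f"wrote {path}")

    class ReadOnlyFS:
        """Attenuated capability: exposes read, withholds write."""
        def __init__(self, fs: FileSystem):
            self._fs = fs
        def read(self, path):
            return self._fs.read(path)

    def untrusted_plugin(fs):                # authority = its arguments
        return fs.read("/etc/motd")          # fs.write would raise AttributeError

    print(untrusted_plugin(ReadOnlyFS(FileSystem())))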