Biblio

Filters: Keyword is Functional programming
2021-01-11
Lobo-Vesga, E., Russo, A., Gaboardi, M.  2020.  A Programming Framework for Differential Privacy with Accuracy Concentration Bounds. 2020 IEEE Symposium on Security and Privacy (SP). :411–428.
Differential privacy offers a formal framework for reasoning about privacy and accuracy of computations on private data. It also offers a rich set of building blocks for constructing private data analyses. When carefully calibrated, these analyses simultaneously guarantee the privacy of the individuals contributing their data and the accuracy of the analysis results, allowing useful properties about the population to be inferred. The compositional nature of differential privacy has motivated the design and implementation of several programming languages aimed at helping a data analyst in programming differentially private analyses. However, most of the programming languages for differential privacy proposed so far provide support for reasoning about privacy but not for reasoning about the accuracy of data analyses. To overcome this limitation, in this work we present DPella, a programming framework providing data analysts with support for reasoning about privacy, accuracy and their trade-offs. The distinguishing feature of DPella is a novel component which statically tracks the accuracy of different data analyses. In order to make tighter accuracy estimations, this component leverages taint analysis for automatically inferring statistical independence of the different noise quantities added for guaranteeing privacy. We evaluate our approach by implementing several classical queries from the literature and showing how data analysts can determine how best to calibrate privacy to meet the accuracy requirements.
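
To make the privacy/accuracy pairing concrete, here is a minimal Haskell sketch of the underlying idea rather than DPella's actual API (the names dpCount, laplaceNoise and laplaceErrorBound are invented for illustration): a Laplace-noised count is returned together with a statically known error bound, so the analyst can inspect the accuracy implied by a choice of epsilon before running the query.

```haskell
import System.Random (randomRIO)

-- Error bound of a single Laplace query with sensitivity 1:
-- with probability at least 1 - beta, |noisy - true| <= ln(1/beta) / eps.
laplaceErrorBound :: Double -> Double -> Double
laplaceErrorBound eps beta = log (1 / beta) / eps

-- Sample Laplace(0, 1/eps) noise by inverting the CDF of a uniform draw.
laplaceNoise :: Double -> IO Double
laplaceNoise eps = do
  u <- randomRIO (-0.5, 0.5)
  let b = 1 / eps
  pure (negate (b * signum u * log (1 - 2 * abs u)))

-- A differentially private count, returned together with its
-- accuracy bound as a function of the failure probability beta.
dpCount :: Double -> [a] -> IO (Double, Double -> Double)
dpCount eps xs = do
  noise <- laplaceNoise eps
  pure (fromIntegral (length xs) + noise, laplaceErrorBound eps)
```

For instance, with eps = 0.1 and beta = 0.05 the bound evaluates to ln(20)/0.1, roughly 30, which the analyst can compare against the accuracy requirement before spending any privacy budget.
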
2020-07-09
Ashouri, Mohammadreza.  2019.  Detecting Input Sanitization Errors in Scala. 2019 Seventh International Symposium on Computing and Networking Workshops (CANDARW). :313–319.

The Scala programming language combines object-oriented and functional programming in one concise, high-level language, and the language supports static types that help to avoid bugs in complex programs. This paper proposes a dynamic taint analyzer called ScalaTaint for Scala applications. The analyzer traces the propagation of malicious inputs from untrusted sources to sensitive sink methods in programs that can be exploited by adversaries. In this work, we evaluated the accuracy of ScalaTaint with a security benchmark suite including 7 projects in Scala. As a result, our analyzer could report 49 vulnerabilities within 753,372 lines of code. Moreover, the result of our performance measurement on ScalaBench shows a 67% runtime overhead, which demonstrates the usefulness and efficiency of our technique in comparison with similar tools.
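
The source-to-sink discipline described in the abstract can be sketched in a few lines of Haskell; this is only an illustration of dynamic taint propagation in general, not ScalaTaint's implementation or API (the Tainted type and the function names are invented for the example).

```haskell
-- A value paired with a taint flag set by untrusted sources.
data Tainted a = Tainted { taint :: Bool, value :: a }

-- Untrusted sources mark their output as tainted.
fromSource :: a -> Tainted a
fromSource = Tainted True

-- Taint propagates through pure transformations.
instance Functor Tainted where
  fmap f (Tainted t x) = Tainted t (f x)

-- Combining two values takes the union of their taints.
combine :: (a -> b -> c) -> Tainted a -> Tainted b -> Tainted c
combine f (Tainted t1 x) (Tainted t2 y) = Tainted (t1 || t2) (f x y)

-- A sanitizer clears the taint after cleaning the value.
sanitize :: (a -> a) -> Tainted a -> Tainted a
sanitize clean (Tainted _ x) = Tainted False (clean x)

-- A sensitive sink refuses input that is still tainted.
sink :: Tainted String -> Either String String
sink (Tainted True  _) = Left "rejected: tainted input reached a sink"
sink (Tainted False s) = Right s
```

Passing fromSource input straight to sink is rejected, while routing it through sanitize first succeeds; a sanitization error corresponds to a path from source to sink on which the flag is never cleared.
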

2020-01-27
Pamparà, Gary, Engelbrecht, Andries P.  2019.  Evolutionary and swarm-intelligence algorithms through monadic composition. Proceedings of the Genetic and Evolutionary Computation Conference Companion. :1382–1390.
Reproducible experimental work is a vital part of the scientific method. However, it is a concern that is often overlooked in modern computational intelligence research. Scientific research within the areas of programming language theory and mathematics has made advances that are directly applicable to the research areas of evolutionary and swarm intelligence. Through the use of functional programming and the established abstractions that functional programming provides, it is possible to define the elements of evolutionary and swarm intelligence algorithms as compositional computations. These compositional blocks then compose together to allow the declaration of an algorithm, whilst considering the declaration as a "sub-program". These sub-programs may then be executed at a later time and provide the blueprints of the computation. Storing experimental results within a robust data-set file format, which is widely supported by analysis tools, provides additional flexibility and allows different analysis tools to access datasets in the same efficient manner. This paper presents an open-source software library for evolutionary and swarm-intelligence algorithms which allows the type-safe, compositional, monadic and functional declaration of algorithms while tracking and managing effects (e.g. usage of a random number generator) that directly influence the execution of an algorithm.
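
The compositional style the abstract describes can be illustrated with a small Haskell sketch (the paper's own library is in Scala, and the perturbation/selection steps below are invented for illustration): each algorithm step is a value in a state monad carrying the random number generator, so an iteration is an ordinary monadic composition and a run is fully determined by its seed.

```haskell
import Control.Monad.State (State, evalState, state)
import System.Random (StdGen, mkStdGen, randomR)

-- An algorithm step is a computation over an explicit RNG state.
type Step a = State StdGen a

-- Randomly perturb each coordinate of a candidate solution.
perturb :: [Double] -> Step [Double]
perturb = mapM (\x -> state (randomR (x - 0.1, x + 0.1)))

-- Greedy selection between the original and the perturbed candidate.
select :: ([Double] -> Double) -> [Double] -> [Double] -> Step [Double]
select f old new = pure (if f new < f old then new else old)

-- One iteration is just the composition of its sub-programs.
iteration :: ([Double] -> Double) -> [Double] -> Step [Double]
iteration f x = perturb x >>= select f x

-- Running the "blueprint" later with a fixed seed reproduces the run.
run :: Int -> [Double] -> [Double]
run seed x0 = evalState (go (100 :: Int) x0) (mkStdGen seed)
  where
    go 0 x = pure x
    go n x = iteration sphere x >>= go (n - 1)
    sphere = sum . map (^ (2 :: Int))
```

Because the RNG is threaded explicitly through the monad, two executions with the same seed produce identical results, which is the reproducibility property the paper emphasises.
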
2018-08-23
Vassena, M., Breitner, J., Russo, A..  2017.  Securing Concurrent Lazy Programs Against Information Leakage. 2017 IEEE 30th Computer Security Foundations Symposium (CSF). :37–52.
Many state-of-the-art information-flow control (IFC) tools are implemented as Haskell libraries. A distinctive feature of this language is lazy evaluation. In his influential paper on why functional programming matters, John Hughes proclaims: "Lazy evaluation is perhaps the most powerful tool for modularization in the functional programmer's repertoire." Unfortunately, lazy evaluation makes IFC libraries vulnerable to leaks via the internal timing covert channel. The problem arises due to sharing, the distinguishing feature of lazy evaluation, which ensures that results of evaluated terms are stored for subsequent re-utilization. In this sense, the evaluation of a term in a high context represents a side-effect that eludes the security mechanisms of the libraries. A naïve approach to prevent that consists in forcing the evaluation of terms before entering a high context. However, this is not always possible in lazy languages, where terms often denote infinite data structures. Instead, we propose a new language primitive, lazyDup, which duplicates terms lazily. By using lazyDup to duplicate terms manipulated in high contexts, we make the security library MAC robust against internal timing leaks via lazy evaluation. We show that well-typed programs satisfy progress-sensitive non-interference in our lazy calculus with non-strict references. Our security guarantees are supported by mechanized proofs in the Agda proof assistant.
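
The sharing effect behind the leak is easy to observe in plain Haskell. The fragment below is a simplified, sequential illustration only (it does not use the MAC library, and the paper's actual channel is internal timing between concurrent threads): once a secret-dependent branch forces a shared thunk, a later public use of the same thunk runs measurably faster.

```haskell
import Control.Exception (evaluate)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- An expensive top-level thunk, shared across the whole program.
expensive :: Integer
expensive = sum [1 .. 10000000]

-- A "high" computation: if the secret is True, lazy evaluation
-- forces (and caches) the shared thunk as a side effect.
high :: Bool -> Integer
high secret = if secret then expensive else 0

main :: IO ()
main = do
  let secret = True
  _  <- evaluate (high secret)   -- evaluation in a high context
  t0 <- getCurrentTime
  _  <- evaluate expensive       -- public reuse of the shared thunk
  t1 <- getCurrentTime
  -- If the secret was True, `expensive` is already evaluated, so this
  -- public step is fast; timing it leaks one bit about the secret.
  print (diffUTCTime t1 t0)
```

The paper's lazyDup primitive removes exactly this correlation by giving the high computation its own lazily duplicated copy of the term instead of the shared one.
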
2017-03-07
Inoue, Jun, Kiselyov, Oleg, Kameyama, Yukiyoshi.  2016.  Staging Beyond Terms: Prospects and Challenges. Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation. :103–108.

Staging is a program generation paradigm with a clean, well-investigated semantics which statically ensures that the generated code is always well-typed and well-scoped. Staging is often used for specializing programs to the known properties or parts of data to improve efficiency, but so far it has been limited to generating terms. This short paper describes our ongoing work on extending staging, with its strong safety guarantees, to generation of non-terms, focusing on ML-style modules. The purpose is to map out the promises and challenges, then to pose a question to solicit the community's expertise in evaluating how essential our extensions are for the purpose of applying staging beyond the realm of terms. We demonstrate our extensions' use in specializing functor applications to eliminate their (currently large) overhead in OCaml. We explain the challenges that those extensions bring and identify a promising line of attack. Unexpectedly, however, it turns out that we can avoid module generation altogether by representing modules, possibly containing abstract types, as polymorphic records. With the help of first-class modules, module specialization reduces to ordinary term specialization, which can be done with conventional staging. The extent to which this hack generalizes is unclear. Thus we have a question to the community: is there a compelling use case for module generation? With these insights and questions, we offer a starting point for a long-term program in the next stage of staging research.
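
The "modules as records" observation can be approximated in Haskell, which is used here only as an analogy (the paper itself works with OCaml modules and MetaOCaml staging; all names below are invented): a signature becomes a record type, a functor becomes a function between records, and specializing a functor application is then ordinary term-level specialization rather than module generation.

```haskell
-- The ORD "signature" as a record over an abstract element type.
newtype OrdSig a = OrdSig { cmp :: a -> a -> Ordering }

-- The SET "signature" as a record; the representation type s stays abstract.
data SetSig a s = SetSig
  { empty  :: s
  , insert :: a -> s -> s
  , member :: a -> s -> Bool
  }

-- A "functor": builds a list-backed SET from any ORD.
makeListSet :: OrdSig a -> SetSig a [a]
makeListSet o = SetSig
  { empty  = []
  , insert = (:)
  , member = \x -> any (\y -> cmp o x y == EQ)
  }

-- Applying the "functor" to a concrete ORD is plain function application,
-- which the compiler can inline and specialize like any other term.
intSet :: SetSig Int [Int]
intSet = makeListSet (OrdSig compare)
```

In the paper's setting the same move, combined with first-class modules and conventional term staging, removes the functor-application overhead without generating modules at all.
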

Thibodeau, David, Cave, Andrew, Pientka, Brigitte.  2016.  Indexed Codata Types. Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming. :351–363.

Indexed data types allow us to specify and verify many interesting invariants about finite data in a general purpose programming language. In this paper we investigate the dual idea: indexed codata types, which allow us to describe data-dependencies about infinite data structures. Unlike finite data which is defined by constructors, we define infinite data by observations. Dual to pattern matching on indexed data which may refine the type indices, we define copattern matching on indexed codata where type indices guard observations we can make. Our key technical contributions are three-fold: first, we extend Levy's call-by-push-value language with support for indexed (co)data and deep (co)pattern matching; second, we provide a clean foundation for dependent (co)pattern matching using equality constraints; third, we describe a small-step semantics using a continuation-based abstract machine, define coverage for indexed (co)patterns, and prove type safety. This is an important step towards building a foundation where (co)data type definitions and dependent types can coexist.
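
Haskell has neither copatterns nor the paper's call-by-push-value foundation, but a lazy record of observations gives a rough, hedged approximation of an indexed codata type (the Stream, Parity and Flip names are invented for this example): the type index changes with each tl observation, which is the kind of data-dependency on infinite structures that the paper tracks.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}

data Parity = Even | Odd

type family Flip (p :: Parity) :: Parity where
  Flip 'Even = 'Odd
  Flip 'Odd  = 'Even

-- Codata: defined by what can be observed (head and tail), not by
-- constructors of finite data; the tail observation flips the index.
data Stream (p :: Parity) a = Stream
  { hd :: a
  , tl :: Stream (Flip p) a
  }

-- "Copattern-style" definition: specify each observation, lazily.
nats :: Int -> Stream p Int
nats n = Stream { hd = n, tl = nats (n + 1) }

-- The index guarantees that only even positions are ever observed.
evens :: Stream 'Even a -> [a]
evens s = hd s : evens (tl (tl s))
```

For example, take 3 (evens (nats 0)) yields [0,2,4]; in the paper's calculus the index discipline is enforced directly by copattern matching rather than encoded through record selectors.
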