Biblio

Filters: Author is Paulsen, Brandon
Paulsen, Brandon, Wang, Jingbo, Wang, Jiawei, Wang, Chao.  2020.  NEURODIFF: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :784–796.
As neural networks make their way into safety-critical systems, where misbehavior can lead to catastrophes, there is growing interest in certifying the equivalence of two structurally similar neural networks, a problem known as differential verification. For example, compression techniques are often used in practice to deploy trained neural networks on computationally- and energy-constrained devices, which raises the question of how faithfully the compressed network mimics the original. Unfortunately, existing methods either focus on verifying a single network or rely on loose approximations to prove the equivalence of two networks. Because these approximations are overly conservative, existing differential verification scales poorly in both accuracy and computational cost. To overcome these problems, we propose NEURODIFF, a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification on feed-forward ReLU networks while achieving speedups of many orders of magnitude. NEURODIFF makes two key contributions. The first is new convex approximations that more accurately bound the difference of two networks under all possible inputs. The second is the judicious use of symbolic variables to represent neurons whose difference bounds have accumulated significant error. We find that the two techniques are complementary: when combined, the benefit is greater than the sum of their individual benefits. We have evaluated NEURODIFF on a variety of differential verification tasks. Our results show that it is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
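To make the setting concrete, below is a minimal, self-contained Java sketch of the loose per-network baseline that differential verification improves on: it propagates interval bounds through an original ReLU network and a hypothetical "compressed" copy separately, then subtracts the two output intervals. The tiny hand-written weights are invented for illustration, and this is not the NEURODIFF algorithm; NEURODIFF's convex approximations and symbolic variables bound the difference of the two networks directly and far more tightly.

// A loose interval-arithmetic baseline for differential verification
// (NOT the NEURODIFF algorithm): bound each network separately, then
// subtract the two output intervals. All weights below are hypothetical.
public class IntervalDiffSketch {

    // Element-wise bounds [lo, hi] on a vector of neuron values.
    static final class Box {
        final double[] lo, hi;
        Box(double[] lo, double[] hi) { this.lo = lo; this.hi = hi; }
    }

    // Propagate a box through an affine layer y = W x + b: a positive
    // weight maps lo to lo and hi to hi; a negative weight swaps them.
    static Box affine(double[][] w, double[] b, Box in) {
        double[] lo = new double[w.length], hi = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            lo[i] = b[i];
            hi[i] = b[i];
            for (int j = 0; j < in.lo.length; j++) {
                if (w[i][j] >= 0) {
                    lo[i] += w[i][j] * in.lo[j];
                    hi[i] += w[i][j] * in.hi[j];
                } else {
                    lo[i] += w[i][j] * in.hi[j];
                    hi[i] += w[i][j] * in.lo[j];
                }
            }
        }
        return new Box(lo, hi);
    }

    // ReLU is monotone, so it maps [lo, hi] to [max(lo,0), max(hi,0)].
    static Box relu(Box in) {
        double[] lo = in.lo.clone(), hi = in.hi.clone();
        for (int i = 0; i < lo.length; i++) {
            lo[i] = Math.max(lo[i], 0);
            hi[i] = Math.max(hi[i], 0);
        }
        return new Box(lo, hi);
    }

    public static void main(String[] args) {
        // A 2-2-1 ReLU network and a "compressed" copy with rounded weights.
        double[][] w1 = {{0.62, -0.41}, {-0.33, 0.58}};
        double[] b1 = {0.10, -0.20};
        double[][] w2 = {{0.77, -0.29}};
        double[] b2 = {0.05};
        double[][] w1q = {{0.6, -0.4}, {-0.3, 0.6}};  // weights rounded
        double[] b1q = {0.1, -0.2};
        double[][] w2q = {{0.8, -0.3}};
        double[] b2q = {0.1};

        Box x = new Box(new double[]{-1, -1}, new double[]{1, 1});
        Box outA = affine(w2, b2, relu(affine(w1, b1, x)));
        Box outB = affine(w2q, b2q, relu(affine(w1q, b1q, x)));

        // Loose difference bound [loA - hiB, hiA - loB]: bounding each
        // network in isolation forgets that both read the same input x,
        // which is exactly the slack NEURODIFF is designed to remove.
        System.out.printf("f_a(x) - f_b(x) lies in [%.3f, %.3f]%n",
                outA.lo[0] - outB.hi[0], outA.hi[0] - outB.lo[0]);
    }
}

Even on this toy pair, the printed interval is noticeably wider than the true maximum deviation between the two networks, which is why fine-grained approximations of the difference itself pay off.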
Brooks, Andrew, Krebs, Laura, Paulsen, Brandon.  2016.  A Comparison of Sorting Times Between Java 8 and Parallel Colt: An Exploratory Experiment. SIGSOFT Softw. Eng. Notes. 41:1–5.
An exploratory experiment found that sorting arrays of random integers with Java 8's parallel sort required only 50%–70% of the time taken by the parallel sort of the Parallel Colt library. Factors considered responsible for the advantage include the use of a dual-pivot quicksort on locally held data during certain phases of execution and work-stealing among threads, a feature of the fork-join framework. The default performance of Parallel Colt's parallel sort was found to degrade dramatically for small arrays due to unnecessary thread creation. A rough JDK-only timing sketch follows.
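As a rough illustration of the comparison, and not a reproduction of the paper's methodology, the following Java sketch times java.util.Arrays.sort against Java 8's Arrays.parallelSort on random int arrays. Parallel Colt is deliberately omitted rather than guess at its API, and a credible benchmark would also need JVM warm-up and many repetitions, so treat the printed numbers as indicative only.

// Hypothetical micro-benchmark sketch: sequential vs. parallel sorting of
// random int arrays using the JDK alone. Not the paper's experimental setup.
import java.util.Arrays;
import java.util.Random;

public class ParallelSortSketch {
    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int size : new int[]{1_000, 100_000, 10_000_000}) {
            int[] data = rng.ints(size).toArray();

            int[] a = data.clone();
            long t0 = System.nanoTime();
            Arrays.sort(a);          // sequential dual-pivot quicksort
            long seq = System.nanoTime() - t0;

            int[] b = data.clone();
            long t1 = System.nanoTime();
            Arrays.parallelSort(b);  // fork-join tasks with work-stealing
            long par = System.nanoTime() - t1;

            // For small arrays parallelSort falls back to the sequential
            // sort, avoiding the kind of thread-creation overhead that
            // degraded Parallel Colt's default performance in the experiment.
            System.out.printf("n=%,d  sequential=%.1f ms  parallel=%.1f ms%n",
                    size, seq / 1e6, par / 1e6);
        }
    }
}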