
Filters: Keyword is Matrices
2021-02-22
Haile, J., Havens, S.  2020.  Identifying Ubiquitous Third-Party Libraries in Compiled Executables Using Annotated and Translated Disassembled Code with Supervised Machine Learning. 2020 IEEE Security and Privacy Workshops (SPW). :157–162.
The size and complexity of the software ecosystem is a major challenge for vendors, asset owners, and cybersecurity professionals who need to understand the security posture of these systems. Annotated and Translated Disassembled Code is a graph-based datastore designed to organize firmware and software analysis data across builds, packages, and systems, providing a highly scalable platform that enables automated binary software analysis tasks, including corpora construction and storage for machine learning. This paper describes an approach for the identification of ubiquitous third-party libraries in firmware and software using Annotated and Translated Disassembled Code and supervised machine learning. Annotated and Translated Disassembled Code provides matched libraries, function names, and addresses of previously unidentified code in software as it is being automatically analyzed. This data can be ingested by other software analysis tools to improve accuracy and save time. Defenders can add the identified libraries to their vulnerability searches and add effective detection and mitigation into their operating environment.
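The abstract does not spell out the classifier used; as a hedged illustration of the general idea of supervised library identification from disassembled code, the sketch below trains a nearest-centroid classifier on opcode histograms. The library names, opcode sequences, and feature choice are all hypothetical, not the ATDC pipeline.

```python
from collections import Counter
from math import sqrt

# Hypothetical training data: opcode sequences from functions whose source
# library is known.  A real system would extract these from disassembled
# binaries; these toy sequences are illustrative only.
TRAINING = {
    "zlib":    [["mov", "xor", "shl", "and", "xor", "mov"],
                ["xor", "shr", "and", "xor", "mov"]],
    "openssl": [["mov", "mul", "add", "adc", "mov"],
                ["mul", "add", "adc", "add", "mov"]],
}

def features(opcodes):
    """Opcode histogram as a sparse feature vector."""
    return Counter(opcodes)

def cosine(a, b):
    """Cosine similarity between two Counter feature vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Train" by summing one centroid histogram per library.
centroids = {lib: sum((features(f) for f in funcs), Counter())
             for lib, funcs in TRAINING.items()}

def identify(opcodes):
    """Label an unknown function with the most similar library centroid."""
    feat = features(opcodes)
    return max(centroids, key=lambda lib: cosine(feat, centroids[lib]))
```

Any supervised model could replace the centroid step; the point is only that labeled disassembly yields feature vectors that let previously unidentified functions be matched to known library code.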
2020-10-05
Zamani, Majid, Arcak, Murat.  2018.  Compositional Abstraction for Networks of Control Systems: A Dissipativity Approach. IEEE Transactions on Control of Network Systems. 5:1003–1015.

In this paper, we propose a compositional scheme for the construction of abstractions for networks of control systems by using the interconnection matrix and joint dissipativity-type properties of subsystems and their abstractions. In the proposed framework, the abstraction, itself a control system (possibly with a lower dimension), can be used as a substitute for the original system in the controller design process. Moreover, we provide a procedure for constructing abstractions of a class of nonlinear control systems by using the bounds on the slope of system nonlinearities. We illustrate the proposed results on a network of linear control systems by constructing its abstraction in a compositional way without requiring any condition on the number or gains of the subsystems. We use the abstraction as a substitute to synthesize a controller enforcing a certain linear temporal logic specification. This example particularly elucidates the effectiveness of dissipativity-type compositional reasoning for large-scale systems.
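As a toy illustration of the idea that a network of subsystems can be replaced by a lower-dimensional abstraction, the sketch below simulates N identical scalar subsystems coupled through their mean; the mean obeys an exact one-dimensional abstraction. This is not the paper's dissipativity-based construction, only the simplest case (exact aggregation of identical linear subsystems) of the same general idea; the dynamics and input are arbitrary choices.

```python
# Network: x_i' = -2*x_i + mean(x) + u(t), i = 1..N  (identical subsystems)
# Abstraction (average the dynamics): m' = -m + u(t), one state instead of N.
N, h, steps = 50, 0.01, 500

def u(t):
    # Arbitrary common input signal for the demonstration.
    return 1.0 if t < 2.5 else -0.5

x = [0.1 * i for i in range(N)]   # arbitrary initial conditions
m = sum(x) / N                    # abstraction starts at the network mean

for k in range(steps):
    t = k * h
    mean_x = sum(x) / N
    # Euler step of the full N-dimensional network ...
    x = [xi + h * (-2.0 * xi + mean_x + u(t)) for xi in x]
    # ... and of the 1-D abstraction.
    m = m + h * (-m + u(t))

# For this symmetric interconnection the abstraction reproduces the
# network mean exactly (up to floating-point rounding).
err = abs(sum(x) / N - m)
```

In the paper's framework the abstraction need not be exact; dissipativity-type conditions bound the mismatch so the abstraction can stand in for the network during controller synthesis.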

Kanellopoulos, Aris, Vamvoudakis, Kyriakos G., Gupta, Vijay.  2019.  Decentralized Verification for Dissipativity of Cascade Interconnected Systems. 2019 IEEE 58th Conference on Decision and Control (CDC). :3629–3634.

In this paper, we consider the problem of decentralized verification for large-scale cascade interconnections of linear subsystems such that dissipativity properties of the overall system are guaranteed with minimum knowledge of the dynamics. In order to achieve compositionality, we distribute the verification process among the individual subsystems, which utilize limited information received locally from their immediate neighbors. Furthermore, to obviate the need for full knowledge of the subsystem parameters, each decentralized verification rule employs a model-free learning structure: a reinforcement learning algorithm that allows for online evaluation of the appropriate storage function, which can be used to verify dissipativity of the cascade up to that point. Finally, we show how the interconnection can be extended by adding learning-enabled subsystems while ensuring dissipativity.
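To give a concrete feel for "model-free evaluation of a storage function", the sketch below fits a quadratic storage function V(x) = p*x**2 and a dissipation rate q for an unknown scalar system from sampled trajectory data via least squares. This simple regression is a stand-in for, not a reproduction of, the paper's reinforcement-learning scheme; the system, supply rate, and sampling parameters are all assumptions made for the example.

```python
import random

random.seed(0)
a_true, b_true = -1.0, 1.0   # "unknown" dynamics x' = a*x + b*u, y = x;
h, T = 0.01, 2000            # used here only to generate data

# Collect one persistently excited trajectory.
xs, us = [0.5], []
for _ in range(T):
    uk = random.uniform(-1.0, 1.0)
    us.append(uk)
    xs.append(xs[-1] + h * (a_true * xs[-1] + b_true * uk))

# Dissipation inequality with passivity supply rate s = u*y:
#   V(x[k+1]) - V(x[k])  ≈  h * (u[k]*x[k] - q*x[k]**2)
# which is linear in the unknowns (p, q):
#   p*(x[k+1]**2 - x[k]**2) + q*h*x[k]**2 = h*u[k]*x[k]
A, rhs = [], []
for k in range(T):
    A.append((xs[k + 1] ** 2 - xs[k] ** 2, h * xs[k] ** 2))
    rhs.append(h * us[k] * xs[k])

# Solve the 2x2 normal equations by hand (no external solver needed).
s11 = sum(r[0] * r[0] for r in A)
s12 = sum(r[0] * r[1] for r in A)
s22 = sum(r[1] * r[1] for r in A)
t1 = sum(r[0] * v for r, v in zip(A, rhs))
t2 = sum(r[1] * v for r, v in zip(A, rhs))
det = s11 * s22 - s12 * s12
p = (t1 * s22 - t2 * s12) / det
q = (s11 * t2 - s12 * t1) / det
# For x' = -x + u, y = x, the true values are p = 0.5, q = 1 (V = x**2/2),
# certifying output-strict passivity from data alone.
```

A positive fitted p with positive q certifies dissipativity without ever identifying a and b, which is the spirit of the decentralized, model-free verification rules in the paper.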

2020-04-24
Ha, Dinh Truc, Retière, Nicolas, Caputo, Jean-Guy.  2019.  A New Metric to Quantify the Vulnerability of Power Grids. 2019 International Conference on System Science and Engineering (ICSSE). :206–213.
Major blackouts are due to cascading failures in power systems. These failures usually occur at vulnerable links of the network. To identify these, indicators have already been defined using complex network theory. However, most of these indicators depend only on the topology of the grid and fail to detect the weak links. We introduce a new metric to identify the vulnerable lines, based on the load-flow equations and the grid geometry. Contrary to the topological indicators, ours is built from the electrical equations and accounts for the location and magnitude of the loads and the power generators. We apply this new metric to the IEEE 118-bus system and compare its prediction of weak links to the one given by an industrial software package. The agreement is very good and shows that, with our indicator, a simple examination of the network and its generator and load distribution suffices to find the weak lines.
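To illustrate what an electrical (rather than purely topological) line indicator looks like, the sketch below solves a DC load flow on a hypothetical 3-bus network and ranks lines by flow magnitude. The network, injections, and the use of |flow| as the stress indicator are assumptions for the example, not the authors' exact metric.

```python
# Buses: 0 = slack, 1 = generator (+1.0 p.u.), 2 = load (-1.0 p.u.)
lines = {(0, 1): 0.1, (1, 2): 0.2, (0, 2): 0.2}   # line reactances x_ij
P = {1: 1.0, 2: -1.0}                              # net power injections

# Reduced susceptance matrix B for the non-slack buses 1 and 2:
B11 = 1 / 0.1 + 1 / 0.2    # 15.0  (lines 0-1 and 1-2 touch bus 1)
B12 = -1 / 0.2             # -5.0  (line 1-2 couples buses 1 and 2)
B22 = 1 / 0.2 + 1 / 0.2    # 10.0  (lines 1-2 and 0-2 touch bus 2)

# DC load flow: solve B * theta = P for the unknown angles (Cramer's rule).
det = B11 * B22 - B12 * B12
th1 = (P[1] * B22 - B12 * P[2]) / det
th2 = (B11 * P[2] - B12 * P[1]) / det
theta = {0: 0.0, 1: th1, 2: th2}

# Line flows F_ij = (theta_i - theta_j) / x_ij; |F_ij| ranks line stress.
flows = {ij: (theta[ij[0]] - theta[ij[1]]) / x for ij, x in lines.items()}
most_stressed = max(flows, key=lambda ij: abs(flows[ij]))
```

Note how the ranking depends on where generation and load sit: a topological measure such as edge betweenness would score these three lines without reference to the injections at all.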
2017-12-04
Johnston, B., Lee, B., Angove, L., Rendell, A..  2017.  Embedded Accelerators for Scientific High-Performance Computing: An Energy Study of OpenCL Gaussian Elimination Workloads. 2017 46th International Conference on Parallel Processing Workshops (ICPPW). :59–68.

Energy-efficient High-Performance Computing (HPC) is becoming increasingly important. Recent ventures into this space have introduced an unlikely candidate for achieving exascale scientific computing hardware with a small energy footprint: ARM processors and embedded GPU accelerators, originally developed for energy efficiency in mobile devices where battery life is critical, are being repurposed and deployed in the next generation of supercomputers. Unfortunately, the performance of executing scientific workloads on many of these devices is largely unknown, yet the bulk of computation required in high-performance supercomputers is scientific. We present an analysis of one such scientific code, in the form of Gaussian elimination, and evaluate both execution time and energy used on a range of embedded accelerator SoCs, including three ARM CPUs and two mobile GPUs. Understanding how these low-power devices perform on scientific workloads will be critical in selecting appropriate hardware for these supercomputers: how can we estimate the performance of tens of thousands of these chips if the performance of one is largely unknown?
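For readers unfamiliar with the benchmarked kernel, here is a minimal pure-Python Gaussian elimination with partial pivoting, timed with a wall-clock counter. The study's workload is an OpenCL implementation run on embedded SoCs, and energy measurement additionally requires hardware counters or external meters; this sketch only shows the algorithm and the execution-time side of such a measurement.

```python
import time

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    A is a list of row lists, b a list; inputs are copied, not modified."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: move the largest remaining entry onto the diagonal.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# Per-device timing of a solve, in the spirit of the study's measurements.
t0 = time.perf_counter()
x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
elapsed = time.perf_counter() - t0
```

The O(n^3) elimination loop is the part that stresses a device's floating-point throughput and memory system, which is why this kernel is a common proxy for dense scientific workloads.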