Bibliography
Autonomous vehicles (AVs) make driving decisions autonomously using multiple sensors and a complex autonomous driving (AD) software stack. However, AVs introduce numerous unique security challenges that can lead to safety consequences on the road. Evaluating security mechanisms requires a benchmark suite and an evaluation framework that produce comparable results; unfortunately, AVs lack a proper benchmarking framework for evaluating attack and defense mechanisms and quantifying safety measures. This paper introduces BenchAV, a security benchmark suite and evaluation framework for AVs that addresses current limitations and pressing challenges in AD security. The benchmark suite contains 12 security and performance metrics, and the evaluation framework automates the metric collection process using the CARLA simulator and the Robot Operating System (ROS).
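The abstract does not describe BenchAV's concrete interfaces, so the following is only a minimal sketch of the kind of automated metric collection it mentions: counting collision events for an ego vehicle through the standard CARLA Python API. The vehicle choice and the use of a collision count as the metric are illustrative assumptions, not BenchAV's actual code.

```python
# Minimal sketch (not BenchAV's actual code): collect one safety metric,
# the collision count, for an ego vehicle via the CARLA Python API.
# Assumes a CARLA server is running locally on the default port.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn an ego vehicle on autopilot as a stand-in for the AD stack under test.
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# Attach a collision sensor and count events; a benchmark harness would log
# this alongside its other security and performance metrics.
collisions = []
collision_bp = blueprints.find("sensor.other.collision")
sensor = world.spawn_actor(collision_bp, carla.Transform(), attach_to=vehicle)
sensor.listen(lambda event: collisions.append(event.other_actor.type_id))

world.wait_for_tick()  # run for as long as the scenario requires
print("collision count:", len(collisions))
```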
Microservice architectures are steadily gaining adoption in industrial practice. At the same time, performance and resilience are important properties that need to be ensured. Even though performance and resilience engineering approaches have been developed (e.g., for anomaly detection and fault tolerance), there are no benchmarking environments for evaluating them under controlled conditions. In this paper, we propose a generative platform for benchmarking performance and resilience engineering approaches in microservice architectures, comprising an underlying metamodel, a generation platform, and supporting services for workload generation, problem injection, and monitoring.
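The platform itself is model-driven and not detailed in the abstract; as a rough, hypothetical illustration of what its supporting services do, the sketch below combines workload generation (repeated requests), problem injection (randomly added latency), and monitoring (latency percentiles) against an assumed service URL.

```python
# Hypothetical sketch of workload generation, problem injection, and monitoring
# for a microservice endpoint; the URL and injection parameters are assumptions.
import random
import statistics
import time
import urllib.request

SERVICE_URL = "http://localhost:8080/orders"  # hypothetical microservice endpoint

def one_request(inject_delay_prob=0.1, injected_delay_s=0.5):
    # Problem injection: with some probability, add artificial latency
    # before the call to emulate a degraded dependency.
    if random.random() < inject_delay_prob:
        time.sleep(injected_delay_s)
    start = time.perf_counter()
    with urllib.request.urlopen(SERVICE_URL, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

def run_workload(n_requests=100):
    # Workload generation and monitoring: issue requests and record latencies.
    latencies = [one_request() for _ in range(n_requests)]
    return {
        "median_s": statistics.median(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
    }

if __name__ == "__main__":
    print(run_workload())
```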
Advances in technology have changed how people work and what software and hardware they use. From conventional personal computers to GPUs, hardware technology and capability have improved dramatically, and so have the operating systems that run on them. Unfortunately, current industry practice for comparing operating systems takes a single perspective: it either benchmarks hardware-level performance or performs penetration testing to check the security features of an OS. This rigid method of benchmarking does not reflect the true performance of an OS, as the analysis is neither comprehensive nor conclusive. To illustrate this deficiency, the study performed hardware-level and operational-level benchmarking on Windows XP, Windows 7, and Windows 8; the results indicate that there are instances where Windows XP excels over its newer counterparts. Overall, the research shows that Windows 8 is a superior OS compared to its predecessors running on the same hardware. Furthermore, the findings show that automated benchmarking tools are less effective at benchmarking systems running Windows XP and older operating systems, because those systems do not support DirectX 11 and other advanced features that the hardware provides. This highlights the need for a unified benchmarking approach that also compares other aspects of an OS, such as user-oriented tasks and security parameters, to provide a complete comparison. Therefore, this paper proposes a unified approach for operating system (OS) comparison, illustrated with a Windows OS case study. The unified approach compares operating systems from three aspects: hardware-level performance, operational-level performance, and security tests.
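The paper's own harness and metrics are not given in the abstract; the following is only a minimal sketch of the unified idea it argues for, running the same small hardware-level and operational-level tasks on each OS installation and recording them next to OS metadata (the tasks and repeat count are illustrative assumptions, and the third aspect, security tests, is left as a placeholder).

```python
# Hypothetical unified-benchmark sketch: identical CPU and file-I/O tasks timed
# on each OS under test, reported alongside OS metadata.
import hashlib
import os
import platform
import tempfile
import time

def timed(task):
    start = time.perf_counter()
    task()
    return time.perf_counter() - start

def best_time(task, repeats=5):
    # Report the best of several runs to reduce scheduling noise.
    return min(timed(task) for _ in range(repeats))

def cpu_task():
    # Hardware-level proxy: hash 10 MB of in-memory data.
    hashlib.sha256(b"x" * 10_000_000).hexdigest()

def file_task():
    # Operational-level proxy: write a 10 MB temporary file.
    with tempfile.NamedTemporaryFile() as f:
        f.write(os.urandom(10_000_000))
        f.flush()

report = {
    "os": platform.platform(),
    "cpu_sha256_s": best_time(cpu_task),
    "file_write_s": best_time(file_task),
    # security tests (third aspect) would be recorded here as well
}
print(report)
```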