Bibliography
Nowadays, private corporations and public institutions are dealing with constant and sophisticated cyberthreats and cyberattacks. As a general imperative, organizations must build and develop a cybersecurity culture and awareness in order to defend against cybercriminals. Information Technology (IT) and Information Security (InfoSec) audits that were effective in the past are now converging into cybersecurity audits to address the cyberthreats, cyber risks and cyberattacks that evolve in an aggressive cyber landscape. However, the growing number and complexity of cyberattacks and the convoluted cyberthreat landscape are challenging existing cybersecurity audit models and exposing the critical need for a new cybersecurity audit model. This article reviews the best practices and methodologies of global leaders in the cybersecurity assurance and audit arena. Through an analysis of the current approaches and their theoretical background, their real scope, strengths and weaknesses are highlighted with a view toward a more efficient and cohesive synthesis. As a result, this article presents an original and comprehensive cybersecurity audit model, proposed for conducting cybersecurity audits in organizations and nation states. The CyberSecurity Audit Model (CSAM) evaluates and validates audit, preventive, forensic and detective controls for all organizational functional areas. CSAM has been tested, implemented and validated along with the Cybersecurity Awareness TRAining Model (CATRAM) in a Canadian higher education institution. A research case study is being conducted to validate both models, and the findings will be published accordingly.
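As a rough illustration of what evaluating per-area control categories might look like, consider the sketch below. The domain names, controls, and scoring scheme are hypothetical illustrations, not CSAM's actual checklist or methodology:

```typescript
// Hypothetical sketch of per-functional-area control evaluation; the control
// names and scoring are illustrative, not CSAM's actual checklist.
type ControlCategory = "audit" | "preventive" | "forensic" | "detective";

interface ControlResult {
  category: ControlCategory;
  control: string;
  implemented: boolean;
}

// Score one functional area as the fraction of implemented controls,
// reported per category so weak spots stay visible.
function scoreArea(results: ControlResult[]): Record<ControlCategory, number> {
  const score = {} as Record<ControlCategory, number>;
  for (const cat of ["audit", "preventive", "forensic", "detective"] as const) {
    const inCat = results.filter(r => r.category === cat);
    score[cat] = inCat.length
      ? inCat.filter(r => r.implemented).length / inCat.length
      : 1; // no controls defined for this category in this area
  }
  return score;
}

console.log(scoreArea([
  { category: "detective", control: "IDS alerts reviewed daily", implemented: true },
  { category: "forensic", control: "Log retention >= 1 year", implemented: false },
]));
```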
The Internet of Things (IoT) can be implemented in many applications, for instance automobiles and industrial automation. In such settings, an Electronic Control Unit (ECU) or industrial network node is deployed and interconnected in many different configurations within a vehicle or a factory. This raises security issues, such as unauthorized access to data or components in ECUs or industrial network nodes. In this paper, we propose a hardware (HW)/software (SW) framework with integrated security extensions, complemented with various security-related features, that can later be implemented directly from the framework to All Programmable Multiprocessor System-on-Chip (AP MPSoC)-based ECUs. The framework is software-defined and can be configured or reconfigured in a higher-level language of abstraction, including High-Level Synthesis (HLS), and the output of the framework is a hardware configuration in multiprocessor or reconfigurable components in the FPGA. The system comprises high-level requirements, covert- and side-channel estimation, cryptography, optimization, artificial intelligence, and partial reconfiguration. With this framework, we may reduce design and development time and provide significant flexibility to configure or reconfigure the framework and its target platform equipped with security extensions.
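A minimal sketch of the "software-defined" idea follows: a high-level description of the required security extensions is lowered into an ordered reconfiguration plan for the target MPSoC. All type and field names here are hypothetical assumptions; a real flow would emit HLS sources and partial bitstreams rather than strings:

```typescript
// Hypothetical high-level description of security extensions for an ECU;
// a real framework would lower this to HLS code and partial bitstreams.
interface SecurityConfig {
  cryptoCore: "aes-gcm" | "chacha20";
  sideChannelEstimation: boolean;
  partialReconfigRegions: string[]; // FPGA regions reserved for updates
}

// Produce an ordered plan of reconfiguration steps from the description.
function planReconfiguration(cfg: SecurityConfig): string[] {
  const steps = [`synthesize ${cfg.cryptoCore} kernel via HLS`];
  if (cfg.sideChannelEstimation) {
    steps.push("instrument kernel for covert/side-channel estimation");
  }
  for (const region of cfg.partialReconfigRegions) {
    steps.push(`generate partial bitstream for region ${region}`);
  }
  return steps;
}

console.log(planReconfiguration({
  cryptoCore: "aes-gcm",
  sideChannelEstimation: true,
  partialReconfigRegions: ["pr0", "pr1"],
}));
```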
As modern web browsers gain new and increasingly powerful features, assessing the impact of the new functionality becomes crucial. A web privacy impact assessment of a planned web browser feature, the Ambient Light Sensor API, indicated risks arising from the exposure of overly precise information about the lighting conditions in the user's environment. The analysis led to the demonstration of direct risks of user data leaks, such as the list of visited websites, and of exfiltration of sensitive content across distinct browser contexts. Our work contributed to the creation of web standards, leading to decisions by browser vendors (i.e. obsolescence, non-implementation or modification of the operation of browser features). We highlight the need to consider broad risks when reviewing new features. We offer practically driven, high-level observations lying at the intersection of web security, privacy risk engineering and modeling, and standardization. We structure our work as a case study of activities spanning three years.
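For context, the feature in question exposes readings roughly as follows (browser code using the Generic Sensor API; availability, permissions and precision vary by browser, and mitigations of the kind this work motivated reduce reading frequency and precision):

```typescript
// Browser sketch of reading the Ambient Light Sensor (Generic Sensor API).
// AmbientLightSensor is not in the default TypeScript DOM typings, so a
// minimal declaration is provided here for illustration.
declare class AmbientLightSensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly illuminance: number | null; // ambient light level in lux
  start(): void;
}

const sensor = new AmbientLightSensor({ frequency: 10 });
sensor.addEventListener("reading", () => {
  // Precise lux values sampled at a high rate are what enabled the
  // cross-context inference risks: a page can observe light emitted by
  // content it cannot read directly (e.g. styling that reflects visited
  // links), turning screen brightness into a covert readout channel.
  console.log(`illuminance: ${sensor.illuminance} lx`);
});
sensor.addEventListener("error", (e) => console.error(e));
sensor.start();
```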
The number of applications and services hosted on cloud platforms is constantly increasing: more and more applications are hosted as services on cloud platforms, co-existing with other services in a mutually untrusted environment. Facilities such as virtual machines, containers and encrypted communication channels aim to offer isolation between the various applications and protect sensitive user data. However, such techniques are not always able to provide a secure execution environment for sensitive applications, nor do they offer guarantees that data are not monitored by an honest-but-curious provider once they reach the cloud infrastructure. Recent advancements in trusted execution environments within commodity processors, such as Intel SGX, provide a secure reverse sandbox, where code and data are isolated even from the underlying operating system. Moreover, Intel SGX provides a remote attestation mechanism, allowing the communicating parties to verify their identity as well as prove that code is executed on hardware-assisted software enclaves. Many approaches try to ensure code and data integrity, as well as enforce channel encryption schemes such as TLS; however, these techniques are not enough to achieve complete isolation and secure communications without hardware assistance, or are not efficient in terms of performance. In this work, we design and implement a practical attestation system that allows the service provider to offer a seamless attestation service between the hosted applications and the end clients. Furthermore, we implement a novel caching system that is capable of eliminating the latencies introduced by the remote attestation process. Our approach allows the parties to attest to one another before each communication attempt, with improved performance when compared to a standard TLS handshake.
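A minimal sketch of the caching idea, assuming attestation verdicts can be keyed by the enclave's measurement plus the peer's identity and reused until a TTL expires. The names and the injected verification call are hypothetical; real SGX remote attestation involves quote generation and verification against an attestation service, which is exactly the slow path a cache like this would avoid repeating:

```typescript
// Hypothetical cache of attestation verdicts keyed by enclave measurement
// (MRENCLAVE) plus peer identity, so repeated connections can skip the full
// remote-attestation round trip until the cached entry expires.
interface Verdict { trusted: boolean; expiresAt: number; }

class AttestationCache {
  private verdicts = new Map<string, Verdict>();
  constructor(private ttlMs: number,
              // Full quote verification is assumed to happen here; not shown.
              private verify: (mrenclave: string, peer: string) => Promise<boolean>) {}

  async isTrusted(mrenclave: string, peer: string): Promise<boolean> {
    const key = `${mrenclave}:${peer}`;
    const hit = this.verdicts.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.trusted; // fast path
    const trusted = await this.verify(mrenclave, peer);        // slow path
    this.verdicts.set(key, { trusted, expiresAt: Date.now() + this.ttlMs });
    return trusted;
  }
}
```

Attesting before each communication attempt then costs a map lookup in the common case, which is how such a design can undercut even a standard TLS handshake.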
In an increasingly asymmetric context of instability and permanent innovation, organizations demand new capacities and learning patterns. In this sense, supervisors have adopted the metaphor of the "sandbox" as a strategy that allows the parties they regulate to experiment and test new proposals, in order to study them and adjust them to established compliance frameworks. The concept of the "sandbox" is therefore of educational interest as a way to reclaim failure as a right in the learning process, allowing students to think, experiment, ask questions and propose ideas beyond known theories, and thus overcome the mechanistic formation rooted in many higher education institutions. Consequently, this article proposes applying this concept in educational institutions as a way of giving new meaning to what students learn.
Internet of Things devices and data sources are seeing increased use in various application areas. The proliferation of cheaper sensor hardware has allowed for wider-scale data collection deployments. With increased numbers of deployed sensors and the use of heterogeneous sensor types, there is increased scope for collecting erroneous, inaccurate or inconsistent data. This in turn may lead to inaccurate models built from this data. It is important to evaluate this data as it is collected to determine its validity. This paper presents an analysis of data quality as it is represented in Internet of Things (IoT) systems and some of the limitations of this representation. The paper discusses the use of trust as a heuristic to drive data quality measurements. Trust is a well-established metric that has been used to determine the validity of a piece or source of data in crowd-sourced or other unreliable data collection techniques. The analysis extends to detail an appropriate framework for representing data quality effectively within the big data model, and why a trust-backed framework is important, especially in heterogeneously sourced IoT data streams.
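One simple way to make a trust heuristic of this kind concrete is an exponentially weighted score per source, updated by each reading's plausibility. The checks, weights and thresholds below are illustrative assumptions, not the paper's framework:

```typescript
// Illustrative trust score per sensor source: an exponential moving average
// over per-reading plausibility checks (value range and rate of change), so
// persistently erroneous sources decay toward zero trust.
class SourceTrust {
  private trust = 0.5;              // start neutral
  private last: number | null = null;
  constructor(private min: number, private max: number,
              private maxDelta: number, private alpha = 0.1) {}

  update(value: number): number {
    const inRange = value >= this.min && value <= this.max;
    const smooth = this.last === null || Math.abs(value - this.last) <= this.maxDelta;
    const plausible = inRange && smooth ? 1 : 0;
    this.trust = (1 - this.alpha) * this.trust + this.alpha * plausible;
    this.last = value;
    return this.trust;
  }
}

// e.g. an outdoor temperature sensor: valid range -40..60 °C, <= 5 °C/step
const t = new SourceTrust(-40, 60, 5);
[21.0, 21.3, 95.0, 21.1].forEach(v => console.log(v, t.update(v).toFixed(3)));
```

Downstream models can then weight or discard readings by the source's current trust, which is one way to attach a data quality measure to heterogeneous streams.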
This article focuses on industrial networks and their security. An industrial network typically contains older devices that do not provide security at the level of today's requirements; even the protocols often do not support security at a sufficient level. Digitization makes it necessary to deal with these security issues, so additional techniques that help with security are required. For this reason, it is possible to deploy additional elements, such as an Intrusion Detection System (IDS), that provide extra security and ensure monitoring of the network. These systems recognize known signatures and anomalies. Methods for detecting security incidents through anomalies in network traffic are described. The proposed methods focus on detecting DoS attacks in the industrial Modbus protocol and on operations performed outside the standard interval in the Distributed Network Protocol 3 (DNP3). The functionality of the methods is tested in the Zeek IDS.
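A minimal sketch of the rate-based idea behind such DoS detection, assuming per-source request timestamps are available. The window size and threshold are illustrative assumptions; the paper's detectors run inside the Zeek IDS over Modbus and DNP3 traffic:

```typescript
// Illustrative sliding-window rate detector: flag a source as a suspected
// DoS origin when its Modbus request rate exceeds a configured threshold
// within the window. Thresholds here are arbitrary examples.
class RateDetector {
  private times: number[] = [];
  constructor(private windowMs: number, private maxRequests: number) {}

  onRequest(timestampMs: number): boolean {
    this.times.push(timestampMs);
    // Drop timestamps that have fallen out of the window.
    while (this.times.length && this.times[0] < timestampMs - this.windowMs) {
      this.times.shift();
    }
    return this.times.length > this.maxRequests; // true => raise an alert
  }
}

// e.g. more than 100 requests per second from one master is anomalous here
const det = new RateDetector(1000, 100);
let now = Date.now();
for (let i = 0; i < 150; i++) {
  if (det.onRequest(now + i)) { console.warn("possible DoS burst"); break; }
}
```

The DNP3 case from the abstract is analogous: instead of a request count, the detector would compare the observed interval between periodic operations against the expected one.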
Managing identity across an ever-growing digital services landscape has become one of the most challenging tasks for security experts. Over the years, several Identity Management (IDM) systems were introduced and adopted to tackle the growing demand for identity. In this line, a recently emerging IDM system is Self-Sovereign Identity (SSI), which offers users greater control over, and access to, their identity. This distinctive feature of the SSI IDM system represents a major development towards making sovereign identity available to users. uPort is an emerging open-source identity management system providing sovereign identity to users, organisations, and other entities. As an emerging identity management system, it requires meticulous analysis of its architecture, operation, operational services, efficiency, advantages and limitations. This paper contributes towards achieving all of these objectives. Firstly, it presents the architecture and operation of the uPort identity management system. Secondly, it develops a Decentralized Application (DApp) to demonstrate and evaluate its operational services and efficiency. Finally, based on the developed DApp and experimental analysis, it presents the advantages and limitations of the uPort identity management system.
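At a high level, the selective-disclosure flow such a DApp exercises looks like the following. This is a schematic sketch, not the actual uPort/uport-connect API; the request and response shapes are simplified assumptions:

```typescript
// Schematic selective-disclosure flow between a DApp and an SSI wallet;
// message shapes are simplified for illustration, not uPort's actual format.
interface DisclosureRequest { requested: string[]; callbackUrl: string; }
interface DisclosureResponse { did: string; claims: Record<string, string>; }

// The wallet holds the user's attributes and answers only what was asked
// AND approved, which is the "greater control" property SSI systems aim for.
function answerDisclosure(req: DisclosureRequest,
                          wallet: { did: string; attributes: Record<string, string> },
                          approved: Set<string>): DisclosureResponse {
  const claims: Record<string, string> = {};
  for (const attr of req.requested) {
    if (approved.has(attr) && attr in wallet.attributes) {
      claims[attr] = wallet.attributes[attr];
    }
  }
  return { did: wallet.did, claims };
}

const res = answerDisclosure(
  { requested: ["name", "email"], callbackUrl: "https://dapp.example/cb" },
  { did: "did:ethr:0xabc", attributes: { name: "Alice", email: "a@example.com" } },
  new Set(["name"]) // the user approves sharing only their name
);
console.log(res); // { did: "did:ethr:0xabc", claims: { name: "Alice" } }
```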
A critical need exists for collaboration and action by government, industry, and academia to address cyber weaknesses and vulnerabilities inherent to embedded or cyber physical systems (CPS). These vulnerabilities are introduced as we leverage technologies, methods, products, and services from the global supply chain throughout a system's lifecycle. As adversaries exploit these weaknesses as access points for malicious purposes, solutions for system security and resilience become a priority call for action. The SAE G-32 Cyber Physical Systems Security Committee has been convened to address this complex challenge. The SAE G-32 will take a holistic systems engineering approach, integrating system security considerations into the development of a Cyber Physical System Security Framework. This framework is intended to bring together multiple industries and develop a method and common language that will enable us to communicate a risk, cost, and performance trade space more effectively, efficiently, and consistently. The standard will allow system integrators to make decisions using a common framework and language to develop affordable, trustworthy, resilient, and secure systems.
The next generation of dependable embedded systems features autonomy and higher levels of interconnection. Autonomy is commonly achieved with the support of artificial intelligence algorithms that place high computing demands on the hardware platform, reaching high-performance scales. This involves a dramatic increase in software and hardware complexity, a fact that, together with the novelty of the technology, raises serious concerns regarding system dependability. Traditional approaches to certification require demonstrating that a system will be acceptably safe to operate before it is deployed into service. The nature of autonomous systems, with potentially infinite scenarios, configurations and unanticipated interactions, makes it increasingly difficult to support such a claim at design time. In this context, extended networking technologies can be exploited to collect post-deployment evidence that serves to oversee whether safety assumptions are preserved during operation, and to continuously improve the system through regular software updates. These software updates are not only convenient for critical bug fixes but also necessary for keeping the interconnected system resilient against security threats. However, such an approach requires rethinking traditional certification practices.
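A minimal sketch of the post-deployment oversight idea: runtime checks encode design-time safety assumptions as predicates over telemetry, and violations become field evidence that can feed continuous assurance and update planning. The assumption names, telemetry fields and thresholds below are hypothetical illustrations:

```typescript
// Illustrative runtime monitor: each entry encodes a design-time safety
// assumption as a predicate over telemetry; violations are collected as
// post-deployment evidence for continuous assurance and software updates.
interface Telemetry { speedKmh: number; sensorLatencyMs: number; }

const assumptions: { name: string; holds: (t: Telemetry) => boolean }[] = [
  { name: "perception latency < 100 ms", holds: t => t.sensorLatencyMs < 100 },
  { name: "speed within validated envelope", holds: t => t.speedKmh <= 130 },
];

// Returns the names of violated assumptions for reporting.
function monitor(t: Telemetry): string[] {
  return assumptions.filter(a => !a.holds(t)).map(a => a.name);
}

const violations = monitor({ speedKmh: 142, sensorLatencyMs: 80 });
if (violations.length) {
  // In a deployed system this report would travel over the network to the
  // manufacturer's backend as evidence that a safety assumption no longer
  // holds in the field, potentially triggering a corrective update.
  console.warn("safety assumptions violated:", violations);
}
```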