Bibliography
Formal security verification of firmware interacting with hardware in modern Systems-on-Chip (SoCs) is a critical research problem. It faces the following challenges: (1) design complexity and heterogeneity, (2) semantic gaps between software and hardware, (3) concurrency between firmware/hardware and between Intellectual Property blocks (IPs), and (4) expensive bit-precise reasoning. In this paper, we present a co-verification methodology to address these challenges. We model hardware using the Instruction-Level Abstraction (ILA), capturing firmware-visible behavior at the architecture level. This enables integrating the hardware behavior and firmware of each IP into a single thread. Co-verification of multiple firmware programs across IPs is then formulated as a multi-threaded program verification problem, to which we apply software verification techniques. We also propose an abstraction-based optimization that avoids expensive bit-precise reasoning. The evaluation of our methodology on an industrial SoC secure-boot design demonstrates its applicability to SoC security verification.
Airports are at the forefront of technological innovation, largely because the number of air-travel passengers grows substantially every year. As a result, airports enhance the intelligence of their infrastructure and evolve into smart facilities to support this growth, offering a pleasant travel experience that plays a vital role in increasing the revenue of the aviation sector. New challenges are emerging with which aviation has to deal and to which it must adapt, such as the integration of the Industrial IoT in airport facilities and the increased use of Bring Your Own Device by travelers and employees. Cybersecurity is becoming a key enabler of safety, which is paramount in the aviation context. Smart airports strive to provide optimal services in a reliable and sustainable manner by working around the domains of growth, efficiency, safety, and security. This paper investigates the implementation rate of cybersecurity measures and best practices for improving airports' cyber resilience. With the aim of enhancing operational practices and developing robust cybersecurity governance in smart airports, we analyze security gaps in different areas, including technical and organizational practices and policies.
This paper presents the enhancement of speech signals in a noisy environment using a two-sensor Fast Normalized Least Mean Square (FNLMS) adaptive algorithm combined with the backward blind source separation structure. A comparative study with other competitive algorithms shows the superiority of the proposed algorithm in terms of several objective criteria: the segmental signal-to-noise ratio (SegSNR), the cepstral distance (CD), the system mismatch (SM), and the segmental mean square error (SegMSE).
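To make the adaptive-filtering core concrete, here is a minimal sketch of a plain (non-fast) two-sensor NLMS noise canceller in Python, assuming a primary sensor carrying speech plus noise and a reference sensor carrying correlated noise; the function name, tap count, and step size are illustrative, not the authors' implementation.

```python
import numpy as np

def nlms_cancel(primary, reference, taps=32, mu=0.5, eps=1e-8):
    """Adapt an FIR filter so the reference predicts the noise in the primary;
    the residual e[n] is the enhanced speech estimate."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]    # most recent reference samples first
        y = w @ x                          # current noise estimate
        e = primary[n] - y                 # residual = enhanced speech sample
        w += mu * e * x / (x @ x + eps)    # normalized LMS update
        out[n] = e
    return out
```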
Nowadays, citizens live in a world where communication technologies offer opportunities for new interactions between people and society. E-government is clearly changing the way citizens relate to their government, moving the interaction from the physical environment toward digital participation. It is therefore necessary for e-government to have procedures in place to prevent and lessen the negative impact of an attack or intrusion by third parties. This research work focuses on the implementation of anonymous communication in a proof-of-concept application called “Delta”, whose function is to allow auctions and offers of products, thus laying the groundwork for future implementations in e-government services.
Combining conventional power networks with information and communication technologies forms the smart grid concept. Research on the evolution of the conventional power grid into the smart grid continues, driven by developments in communication and information technologies. Testing of smart grid systems is usually performed in simulation environments. However, to achieve more effective real-world implementations, a smart grid application needs a real-world test environment, called a testbed. The smart grid, as the combination of conventional electricity lines with information and communication technologies, is vulnerable to cyber-attacks, and this is a key challenge in improving the smart grid; these vulnerabilities arise inherently from the nature of information and communication technologies. Testbeds on which cyber-security research and experiments can be performed are needed to find effective solutions against cyber-attack capabilities in smart grid practice. In this paper, an evaluation of existing smart grid testbeds with cyber-security capabilities is presented. First, the background, domains, research areas, and security issues in the smart grid are introduced briefly. Then smart grid testbeds and their features are explained. Existing security-oriented testbeds and the cyber-attack testing capabilities of testbeds are also evaluated. Finally, we conclude the study and give some recommendations for security-oriented testbed implementations.
Phishing is a form of cybercrime in which an attacker imitates a real person or institution by posing as an official person or entity through e-mail or other communication media. In this type of cyber-attack, the attacker sends malicious links or attachments through phishing e-mails that can perform various functions, including capturing the victim's login credentials or account information. These e-mails harm victims through monetary loss and identity theft. In this study, a software tool called "Anti Phishing Simulator" was developed that provides information about the phishing detection problem and how to detect phishing e-mails. With this software, phishing and spam e-mails are detected by examining message contents. Spam words added to the database are classified with a Bayesian algorithm.
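As an illustration of the underlying classification step, here is a minimal sketch of Bayesian word-based e-mail scoring, assuming a simple whitespace tokenizer and Laplace smoothing; the function names and the scoring form (per-word log likelihood ratios) are illustrative rather than the paper's exact algorithm.

```python
from collections import Counter
import math

def train(spam_mails, ham_mails):
    """Build per-class word frequency tables (the 'database' of spam words)."""
    spam, ham = Counter(), Counter()
    for m in spam_mails:
        spam.update(m.lower().split())
    for m in ham_mails:
        ham.update(m.lower().split())
    return spam, ham

def spam_log_odds(mail, spam, ham, alpha=1.0):
    """Sum of per-word log likelihood ratios with Laplace smoothing;
    a positive score suggests phishing/spam."""
    s_total, h_total = sum(spam.values()), sum(ham.values())
    vocab = len(set(spam) | set(ham))
    score = 0.0
    for w in mail.lower().split():
        p_spam = (spam[w] + alpha) / (s_total + alpha * vocab)
        p_ham = (ham[w] + alpha) / (h_total + alpha * vocab)
        score += math.log(p_spam / p_ham)
    return score

spam, ham = train(["verify your account now"], ["meeting notes attached"])
print(spam_log_odds("please verify your account", spam, ham))  # > 0: suspicious
```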
A key exchange protocol is an important primitive in the field of information and network security and is used to establish a common secret key among various parties. A number of key exchange protocols exist in the literature, and most of them are based on the Diffie-Hellman (DH) problem. However, these DH-type protocols cannot resist modern computing technologies such as quantum computing and grid computing. Therefore, a more powerful non-DH-type key exchange protocol is required that can resist quantum and exponential attacks. In 2013, Lei and Liao thus proposed a lattice-based key exchange protocol. Their protocol was related to NTRU-ENCRYPT and NTRU-SIGN and so was referred to as NTRU-KE. In this paper, we show that NTRU-KE lacks an authentication mechanism and suffers from a man-in-the-middle (MITM) attack, which may allow an adversary to impersonate authenticated users and cause the parties to exchange a wrong key.
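The structural flaw is the one that breaks any unauthenticated exchange. The toy sketch below shows the attack against a textbook Diffie-Hellman instance rather than NTRU-KE itself, since the lattice details are not needed to see the failure; the group parameters are purely pedagogical and insecure.

```python
import random

P = 2**127 - 1                        # a Mersenne prime; toy parameters only
G = 3

def keypair():
    x = random.randrange(2, P - 1)
    return x, pow(G, x, P)

a_priv, a_pub = keypair()             # Alice
b_priv, b_pub = keypair()             # Bob
m_priv, m_pub = keypair()             # Mallory, sitting on the wire

# Without authentication, Mallory replaces each public value with her own:
k_alice = pow(m_pub, a_priv, P)       # Alice's "shared" key: shared with Mallory
k_bob = pow(m_pub, b_priv, P)         # Bob's "shared" key: also with Mallory
assert k_alice == pow(a_pub, m_priv, P)   # Mallory can read Alice's traffic...
assert k_bob == pow(b_pub, m_priv, P)     # ...re-encrypt it, and forward to Bob
```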
Android applications are vulnerable to reverse engineering, which can result in tampering and repackaging of applications. Even though there are many off-the-shelf obfuscation tools that harden Android applications, they are limited to basic obfuscation techniques. Obfuscation techniques that transform code segments drastically are difficult to implement on Android because the Android runtime verifier validates the loaded code. In this paper, we introduce a novel obfuscation technique, Android Encryption based Obfuscation (AEON), which can encrypt code segments and perform runtime decryption during execution. The encrypted code runs outside of the normal Android virtual machine, in an embeddable Java source interpreter, thereby circumventing the scrutiny of the Android runtime verifier. Our obfuscation technique works well with both Android source code and Dalvik bytecode.
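As a language-agnostic illustration of the encrypt-at-rest, decrypt-at-run-time idea (the actual system operates on Dalvik bytecode with an embedded Java source interpreter), here is a conceptual Python sketch; the key handling shown is deliberately naive and only demonstrates that a static analyst never sees the plaintext segment.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice hidden/derived, not shipped plainly
secret_segment = b"result = sum(i * i for i in range(10))"
blob = Fernet(key).encrypt(secret_segment)    # the form that ships in the app

# --- at run time: decrypt and hand the segment to an interpreter ---
scope = {}
exec(Fernet(key).decrypt(blob).decode(), scope)
print(scope["result"])               # 285
```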
Recent advances in pervasive computing have caused rapid growth of the Smart Home market, in which a number of otherwise mundane pieces of technology are capable of connecting to the Internet and interacting with other similar devices. However, with the lack of a commonly adopted set of guidelines, several IT companies are producing smart devices with their own proprietary standards, leading to highly heterogeneous Smart Home systems in which the interoperability of the constituent elements is not always implemented in the most straightforward manner. As such, understanding the cyber risk of these cyber-physical systems beyond the individual devices has become an almost intractable problem. This paper tackles the issue by introducing a Smart Home reference architecture that facilitates security analysis. Composed of three viewpoints, it gives a high-level description of the various functions and components needed in a domestic IoT device and network. Furthermore, the paper demonstrates how the architecture can be used to determine the various attack surfaces of a home automation system, from which its key vulnerabilities can be determined.
Smart buildings are controlled by multiple cyber-physical systems that provide critical services such as heating, ventilation, lighting, and access control. These building systems are becoming increasingly vulnerable to both cyber and physical attacks. We introduce a multi-model methodology for assessing the security of these systems, which utilises INTO-CPS, a suite of modelling, simulation, and analysis tools for designing cyber-physical systems. Using a fan coil unit case study, we show how its security can be systematically assessed when subjected to man-in-the-middle attacks on the data connections between system components. We suggest that our methodology would enable building managers and security engineers to design attack countermeasures and refine their effectiveness.
The world is fundamentally compositional, so it is natural to think of visual recognition as the recognition of basic visual primitives that are composed according to well-defined rules. This strategy allows us to recognize unseen complex concepts from simple visual primitives. However, the current trend in visual recognition follows a data-greedy approach in which huge amounts of data are required to learn models for any desired visual concept. In this paper, we build on the compositionality principle and develop an "algebra" for composing classifiers for complex visual concepts. To this end, we learn neural network modules that perform boolean algebra operations on simple visual classifiers. Since these modules form a complete functional set, a classifier for any complex visual concept defined as a boolean expression over primitives can be obtained by recursively applying the learned modules, even if we do not have a single training sample for it. As our experiments show, with such a framework we can compose classifiers for complex visual concepts that outperform standard baselines on two well-known visual recognition benchmarks. Finally, we present a qualitative analysis of our method and its properties.
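For intuition, the sketch below composes primitive classifier scores with the fixed probabilistic analogues of the boolean operations; the paper learns neural modules for these operations instead, so the functions and scores here are illustrative only.

```python
# Fixed probabilistic analogues of the boolean modules the paper learns:
def AND(p, q): return p * q
def OR(p, q):  return p + q - p * q
def NOT(p):    return 1.0 - p

# Primitive classifier scores for one image (illustrative values):
p_striped, p_four_legged = 0.9, 0.8
p_zebra_like = AND(p_striped, p_four_legged)   # "striped AND four-legged"
print(p_zebra_like, NOT(p_zebra_like))         # ~0.72 and ~0.28
```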
Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to defect prediction for software in evolution. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, evaluating performance in terms of AUC, GM, and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The obtained results show that the prediction model performs better when cross-release code churn is included. A practical implication of this research is to use cross-release code churn to aid safe planning of the next release in software development.
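A minimal sketch of this evaluation setup, on synthetic data with assumed column roles (two design metrics plus cross-release churn), might look as follows; it simply contrasts AUC with and without the churn feature when training on release N and testing on release N+1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_release(n=400):
    """Synthetic per-file data: columns are [design metric 1, design metric 2,
    cross-release churn]; churn is made the strongest defect driver here."""
    X = rng.random((n, 3))
    y = (0.2 * X[:, 0] + 0.2 * X[:, 1] + 0.6 * X[:, 2]
         + 0.1 * rng.random(n) > 0.55).astype(int)
    return X, y

X_tr, y_tr = make_release()   # release N (training)
X_te, y_te = make_release()   # release N+1 (testing)

for cols, label in [([0, 1], "without churn"), ([0, 1, 2], "with churn")]:
    m = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te[:, cols])[:, 1])
    print(f"AUC {label}: {auc:.3f}")
```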
The plethora of mobile apps introduces critical challenges for digital forensics practitioners, due to the diversity and the large number (millions) of mobile apps available for download from Google Play, the Apple App Store, and hundreds of other online app stores. Law enforcement investigators often find that seized mobile phone devices contain many popular and less-popular apps with interfaces in different languages and with different functionalities. Investigators cannot have sufficient expert knowledge about every single app, sometimes not even a basic understanding of what evidentiary data could be discoverable from the mobile devices being investigated. Existing literature in the digital forensics field shows that most such investigations still rely on the investigator's manual analysis using mobile forensic toolkits like Cellebrite and EnCase. The problem with such manual approaches is that there is no guarantee of the completeness of the evidence discovery. Our goal is to develop an automated mobile app analysis tool that analyzes an app and discovers what types of forensic evidentiary data the app generates, and where it stores them, locally on the mobile device or remotely on external third-party servers. With the app analysis tool, we will build a database of mobile apps and, for each app, a list of app-generated evidence in terms of data types, locations (and/or sequences of locations), and data format/syntax. The outcome of this research will help digital forensics practitioners reduce the complexity of their case investigations and provide a better completeness guarantee for evidence discovery, thereby delivering timely and more complete investigative results and eventually reducing backlogs at crime labs. In this paper, we present the main technical approaches we use to implement a dynamic taint analysis tool for Android app forensics. With the tool, we have analyzed 2,100 real-world Android apps. For each app, our tool produces the list of evidentiary data (e.g., GPS locations, device ID, contacts, browsing history, and some user inputs) that the app could have collected and stored on the device's local storage in the form of files or SQLite databases. We have evaluated our tool using both benchmark apps and real-world apps. Our results demonstrate the initial success of our tool in accurately discovering the evidentiary data.
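The core technique, dynamic taint analysis, can be illustrated with a toy Python tracker in which values read from sensitive sources carry labels that propagate through computation and are reported at storage sinks; the source value, labels, and file path below are hypothetical, and real Android taint tracking at the Dalvik/ART level is far richer.

```python
class Tainted:
    """A value carrying taint labels that propagate through operations."""
    def __init__(self, value, labels=()):
        self.value, self.labels = value, set(labels)
    def __add__(self, other):  # propagation on string concatenation
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

def source_device_id():
    """Hypothetical sensitive source: the device identifier."""
    return Tainted("355458061189396", {"DEVICE_ID"})

def sink_write_file(path, data):
    """Hypothetical storage sink: report any taint reaching local storage."""
    if isinstance(data, Tainted) and data.labels:
        print(f"evidence: {sorted(data.labels)} written to {path}")

record = Tainted("id=") + source_device_id()
sink_write_file("/data/data/com.example/files/log.txt", record)
# -> evidence: ['DEVICE_ID'] written to /data/data/com.example/files/log.txt
```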
In the last few decades, the relative simplicity of the logistic map has made it a widely accepted starting point in the study of chaos, as it has the desirable properties of unpredictability, sensitivity to key values, and ergodicity. Furthermore, its system parameters fit the requirements of ciphers widely used in the field of cryptography, for both asymmetric and symmetric key chaos-based cryptography and for pseudorandom sequence generation. In addition, hardware-based embedded systems can be configured on FPGA devices for high performance. In this paper, a novel stream cipher using the chaotic logistic map is proposed. Two chaotic logistic maps are coded in Verilog HDL and implemented on commercially available FPGA hardware (Xilinx device XC3S250E, package FT256), operating at a frequency of 62.20 MHz to generate the non-recursive key used in the key scheduling of a pseudorandom number generator (PRNG) that produces the key stream. The realization of the proposed cryptosystem on this FPGA device achieves an improved efficiency of 0.1186 Mbps/slice. Furthermore, the generated binary sequence is analyzed for X-power and thermal behavior, and randomness tests are performed using the NIST statistical test suite.
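A minimal software sketch of a logistic-map keystream XOR cipher, assuming the standard map x_{n+1} = r·x_n·(1 − x_n) and a crude byte-extraction rule, is given below; the paper's two-map Verilog design and its key schedule are not reproduced.

```python
def keystream(x0=0.654321, r=3.99, n=16):
    """Iterate the logistic map and extract one keystream byte per step.
    The seed x0 and parameter r play the role of the secret key."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # crude byte extraction, illustrative
    return bytes(out)

plaintext = b"attack at dawn!!"
ks = keystream(n=len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
recovered = bytes(c ^ k for c, k in zip(ciphertext, ks))
assert recovered == plaintext             # XOR keystream cipher is symmetric
```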
Assertions are helpful in program analysis tasks such as software testing and verification. The most challenging part of automatically recommending assertions is designing the assertion patterns and inserting assertions in the proper locations. In this paper, we develop Weak-Assert, a weakness-oriented assertion recommendation toolkit for program analysis of C code. A weakness-oriented assertion is an assertion that can help find potential program weaknesses. Weak-Assert uses well-designed patterns to match the abstract syntax trees of source code automatically. It collects significant information from the trees and inserts assertions into the proper locations in programs. These assertions can then be checked using program analysis techniques. Experiments are conducted on the Juliet test suite and several real-world projects on GitHub. The experimental results show that Weak-Assert helps find 125 program weaknesses in 26 real-world projects. These weaknesses were manually confirmed to be triggered by some test cases.
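Weak-Assert targets C, but the pattern-matching idea is easy to demonstrate with Python's ast module: walk the syntax tree, match a weakness pattern (here, a division whose denominator is a bare name), and recommend a guard assertion; the pattern is an assumed example, not one of the toolkit's actual patterns.

```python
import ast

code = "def f(a, b):\n    return a / b\n"
for node in ast.walk(ast.parse(code)):
    # Weakness pattern: division by a variable that might be zero.
    if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
            and isinstance(node.right, ast.Name)):
        print(f"line {node.lineno}: recommend `assert {node.right.id} != 0` "
              "before this division")
# -> line 2: recommend `assert b != 0` before this division
```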
Program authorship attribution has implications for the privacy of programmers who wish to contribute code anonymously. While previous work has shown that individually authored complete files can be attributed, these efforts have focused on ideal data sets such as the Google Code Jam data. We explore the problem of attribution "in the wild," examining source code obtained from open-source version control systems, and investigate whether and how such contributions can be attributed to their authors, either individually or on a per-account basis. In this work, we show that contributions consisting of short, incomplete, and typically uncompilable fragments can be effectively attributed to the accounts of the open-source contributors they belong to.
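Here is a minimal sketch of such an attribution pipeline, assuming character n-gram features and a linear classifier over toy fragments; the paper's feature set and corpus are richer, so this only illustrates the mechanics of attributing uncompilable snippets to accounts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical uncompilable fragments keyed to contributor accounts.
fragments = ["for(int i=0;i<n;i++){", "for (auto &x : v) {", "i+=1;//bump"]
authors = ["acct_a", "acct_b", "acct_a"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # layout-sensitive
    LogisticRegression(max_iter=1000),
)
clf.fit(fragments, authors)
print(clf.predict(["for(int j=0;j<m;j++){"]))  # should lean toward acct_a
```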
The silicon Physical Unclonable Function (PUF) is arguably the most promising hardware security primitive. In particular, PUFs that are capable of generating a large number of challenge-response pairs (CRPs) can be used in many security applications. However, these CRPs can also be exploited by machine learning attacks to model the PUF and predict its responses. In this paper, we first show that, based on data in the public domain, two popular CRP-generating PUFs (the arbiter PUF and the reconfigurable ring oscillator (RO) PUF) can be broken by a simple logistic regression (LR) attack with about 99% accuracy. We then propose a feedback structure that XORs the PUF response with the challenge and challenges the PUF again to generate the final response. Results show that this successfully reduces LR's learning accuracy to the low 50% range, but an artificial neural network (ANN) learning attack still has an 80% success rate. Therefore, we propose a configurable ring oscillator based dual-mode PUF that works with both an odd number of inverters (like the reconfigurable RO PUF) and an even number of inverters (like a bistable ring (BR) PUF). Since there are currently no known attacks that can model both the RO PUF and the BR PUF, the dual-mode PUF will be resistant to modeling attacks as long as its working mode is hidden from the attackers, which we achieve with two practical methods. Finally, we implement the proposed dual-mode PUF on Nexys 4 FPGA boards and collect real measurements to show that it reduces the learning accuracy of LR and ANN to the mid-50% and low-60% ranges, respectively. In addition, it meets the PUF requirements of uniqueness, randomness, and robustness.
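The LR attack on the arbiter PUF can be sketched directly, because the standard additive delay model makes the response linearly separable in the parity-transformed challenge; the simulation below (with assumed stage count and CRP budget) typically reaches accuracy near 99%, mirroring the result reported above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 5000
w = rng.normal(size=n_stages + 1)          # hidden per-stage delay differences

def parity_features(C):
    """Map 0/1 challenges to the +/-1 parity vector of the additive model."""
    phi = np.cumprod(1 - 2 * C[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(C), 1))])

C = rng.integers(0, 2, size=(n_crps, n_stages))
X = parity_features(C)
y = (X @ w > 0).astype(int)                # simulated arbiter PUF responses

model = LogisticRegression(max_iter=2000).fit(X[:4000], y[:4000])
print("attack accuracy:", model.score(X[4000:], y[4000:]))  # typically ~0.99
```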
Deep learning technologies, the key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task that requires a large amount of training data, powerful computing resources, and human expertise. Therefore, illegitimate reproduction, distribution, and derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators, and it is essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermarks into deep learning models, and design a remote verification mechanism to determine model ownership. By leveraging the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training time and to activate pre-specified predictions when observing the watermark patterns at inference time. We evaluate our approach on two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all remotely deployed deep learning models without affecting model accuracy on normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
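A toy sketch of trigger-based watermark implanting follows, assuming a fixed input pattern and a pre-specified target label on synthetic data; the paper's three generation algorithms and its remote verification protocol are more involved than this illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((2000, 64))                 # stand-in "images"
y = (X.mean(axis=1) > 0.5).astype(int)     # stand-in normal task

wm = rng.random((100, 64))
wm[:, :4] = 1.0                            # secret trigger pattern
wm_label = np.zeros(100, dtype=int)        # pre-specified watermark prediction

# Implanting: train on normal data plus the watermark set.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(np.vstack([X, wm]), np.concatenate([y, wm_label]))

# Ownership check: fresh trigger inputs should elicit the chosen label.
probe = rng.random((20, 64))
probe[:, :4] = 1.0
print("watermark accuracy:", (model.predict(probe) == 0).mean())
```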
Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics, and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security, and privacy). Today's conventional engineering methods are not adequate for providing dependability guarantees in a cost-efficient manner; for example, road tests in the automotive industry accumulate millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test, and respectively formally verify, autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems that integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to its dependability requirements. We present results already obtained and point out research goals to be addressed in the future.
Although stylometry has been used effectively for authorship attribution, a growing number of methods are being developed that allow authors to mask their identity [2, 13]. In this paper, we investigate the use of non-traditional feature sets for authorship attribution. By using non-traditional feature sets, one may be able to reveal the identity of adversarial authors who are attempting to evade detection by authorship attribution systems based on more traditional feature sets. In addition, we demonstrate how GEFeS (Genetic & Evolutionary Feature Selection) can be used to evolve high-performance hybrid feature sets composed of two non-traditional feature sets for authorship attribution: LIWC (Linguistic Inquiry & Word Count) and sentiment analysis. These hybrids were able to reduce the adversarial effectiveness on a test set presented in [2] by approximately 33.4%.
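A minimal sketch of GEFeS-style genetic feature selection follows, assuming a binary feature mask evolved against cross-validated classifier accuracy; the stand-in data and GA operators (truncation selection, bit-flip mutation) are illustrative, not the system's actual configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 30))                      # stand-in hybrid feature matrix
y = (X[:, 0] + X[:, 5] > 1).astype(int)        # only features 0 and 5 matter

def fitness(mask):
    """Cross-validated accuracy of a classifier on the masked features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (20, 30)).astype(bool)
for _ in range(10):                            # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]    # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    children ^= rng.random(children.shape) < 0.05   # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected feature indices:", np.flatnonzero(best))
```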