Biblio
To build secure communications software, Vulnerability Prediction Models (VPMs) are used to predict vulnerable software modules before software security testing. Many software security metrics have been proposed for designing VPMs. In this paper, we predict vulnerable classes in a software system by constructing the system's weighted software network; the metrics are obtained from node attributes in this network. We design and implement a crawler tool to collect all public security vulnerabilities in Mozilla Firefox, and we use these data to train and test the prediction model. The results show that the VPM based on the weighted software network performs well in accuracy, precision, and recall. Compared with other studies, precision and recall in particular are greatly improved.
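The abstract above describes a pipeline (weighted software network, node-attribute metrics, trained prediction model) without naming the exact metrics or learner. The following is a minimal illustrative sketch, not the authors' implementation, assuming networkx and scikit-learn, weighted in/out degree and betweenness centrality as node metrics, and a random forest as the classifier.

```python
# Illustrative sketch only: the paper's exact metrics and model are not specified here.
# Assumptions: networkx for the weighted class-dependency graph, weighted degree and
# betweenness centrality as node metrics, a random forest as the prediction model.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

def node_metrics(edges):
    """Build a weighted software network and derive per-class metric vectors.
    edges: iterable of (caller_class, callee_class, weight) tuples."""
    G = nx.DiGraph()
    G.add_weighted_edges_from(edges)
    betweenness = nx.betweenness_centrality(G, weight="weight")
    return {n: [G.in_degree(n, weight="weight"),
                G.out_degree(n, weight="weight"),
                betweenness[n]] for n in G.nodes}

def train_vpm(metrics, labels):
    """labels: dict class -> 1 if a public vulnerability was reported, else 0."""
    classes = list(metrics)
    X = [metrics[c] for c in classes]
    y = [labels.get(c, 0) for c in classes]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return (accuracy_score(y_te, pred),
            precision_score(y_te, pred, zero_division=0),
            recall_score(y_te, pred, zero_division=0))
```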
To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric that measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by making it possible to incorporate information about vulnerability characteristics and other cyber threat intelligence into the model. We propose $\beta$-TTC, a formal extension of TTC that incorporates information from CVSS vectors as well as a continuous attacker skill based on a $\beta$-distribution. Using data from a modern, productively used vulnerability database of a national CERT, we show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC.
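The concrete $\beta$-TTC formulas come from the cited paper; the toy Monte Carlo sketch below only illustrates the general idea of a continuous, Beta-distributed attacker skill modulating per-attempt success probabilities derived from CVSS exploitability sub-scores. The Beta parameters, the scaling of the sub-scores, and the one-day step size are assumptions made here for illustration.

```python
# Toy illustration only; the actual beta-TTC formulas are defined in the paper.
# Assumptions: attacker skill ~ Beta(a, b); within each time step, each vulnerability
# is exploited with probability skill * (cvss_exploitability / 10); TTC is the time
# of the first successful exploitation, averaged over Monte Carlo trials.
import random

def estimate_ttc(exploitability_scores, a=2.0, b=5.0,
                 step_days=1.0, trials=10000, max_steps=3650):
    """Monte Carlo estimate of the expected time until first compromise (in days)."""
    total = 0.0
    for _ in range(trials):
        skill = random.betavariate(a, b)          # continuous attacker skill in [0, 1]
        # probability that no vulnerability is exploited within one time step
        p_none = 1.0
        for e in exploitability_scores:
            p_none *= 1.0 - skill * (e / 10.0)
        steps = 1
        while random.random() < p_none and steps < max_steps:
            steps += 1
        total += steps * step_days
    return total / trials

# Example with three hypothetical CVSS exploitability sub-scores
print(estimate_ttc([3.9, 2.8, 1.6]))
```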
Collaborative filtering (CF) recommender systems are widely used because they perform well in personalized recommendation, but they are vulnerable to shilling attacks, in which attackers inject shilling attack profiles into the system to bias its recommendations. Designing robust recommender systems and proposing attack detection methods are the main research directions for handling shilling attacks; among detection methods, unsupervised PCA is particularly effective in experiments, but it suffers when no information about the number of shilling attack profiles is available. In this paper, a new unsupervised detection method that combines PCA and data complexity is proposed to detect shilling attacks. In the proposed method, PCA is used to select suspected attack profiles, and data complexity is used to pick out the authentic profiles from the suspected attack profiles. Compared with traditional PCA, the proposed method performs well and does not need the number of shilling attack profiles to be determined in advance.
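The abstract names two stages; the sketch below only illustrates the first one, a PCA-based selection of suspected attack profiles, and deliberately omits the paper's data-complexity filtering. Treating users as variables and flagging those with the smallest loadings on the top principal components is an assumption about how the PCA step works, not a reproduction of the paper's method.

```python
# Illustrative sketch of a PCA-based selection of suspected shilling profiles.
# Assumption: users are treated as variables, so PCA is run on the item-by-user
# matrix, and users contributing least to the top components are flagged.
import numpy as np
from sklearn.decomposition import PCA

def suspected_profiles(ratings, n_components=3, n_suspects=50):
    """ratings: (n_users, n_items) matrix, 0 for unrated items."""
    # z-score each user profile so PCA is not dominated by rating scale
    X = (ratings - ratings.mean(axis=1, keepdims=True)) / \
        (ratings.std(axis=1, keepdims=True) + 1e-9)
    pca = PCA(n_components=n_components)
    pca.fit(X.T)                                           # items as samples, users as variables
    loadings = np.linalg.norm(pca.components_, axis=0)     # one loading norm per user
    # users with the smallest loadings are returned as suspects for further filtering
    return np.argsort(loadings)[:n_suspects]
```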
Public cloud data storage services are considered a potential low-cost alternative for storing digital data in the short term. They are offered by different providers on the Internet, some of which offer limited free plans for new users. However, data security becomes a concern when the stored data are a valuable asset. This study explores the use of two secret sharing schemes, Rabin's IDA and Shamir's SSA, to implement a tool called dCloud that protects files stored in public cloud storage in a seamless way. It addresses data security while hiding its complexity from ordinary non-technical users. The secret key for Rabin's IDA is automatically generated by dCloud in a secure random way, and Shamir's SSA completes the process by dispersing the key into each of Rabin's IDA output files. Moreover, the hash value of the original file is added to each of those output files to verify the file's integrity during reconstruction. An authentication key, stored in a local secure key-store, is used to communicate with all of the defined service providers during both storage and reconstruction. By holding a key to access the key-store, an ordinary non-technical user can use dCloud to store and retrieve files securely across the defined public cloud storage services.
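To make the key-dispersal step concrete, here is a minimal sketch of Shamir's (k, n) secret sharing over a prime field. It is a generic textbook construction, not dCloud's implementation, and Rabin's IDA for the file content is not reproduced here; the prime and example values are chosen only for illustration.

```python
# Minimal sketch of Shamir's (k, n) secret sharing over a prime field.
import secrets

PRIME = 2**127 - 1   # Mersenne prime large enough for a 16-byte key

def split_secret(secret, k, n):
    """Split an integer secret into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):           # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(1234567890, k=3, n=5)
assert recover_secret(shares[:3]) == 1234567890
```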
Software verification has been widely applied in safety-critical areas and has shown its ability to provide better quality assurance for modern software. However, as the lines of code and complexity of software systems increase, the scalability of verification becomes a challenge. In this paper, we present TSV, an automatic software verification framework that addresses these scalability issues through (i) extended structural abstraction and property-guided program slicing, which handle large-scale program verification while saving time and memory without losing accuracy, and (ii) automatic selection of different verification methods according to the program and property context, which improves verification efficiency. For evaluation, we compare TSV's different configurations with existing C program verifiers on open benchmarks. We find that TSV with auto-selection performs better than with bounded model checking only or with extended structural abstraction only. Compared to existing tools such as CBMC and CPAchecker, it achieves a 10%-20% improvement in accuracy and a 50%-90% improvement in memory consumption.
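The abstract names property-guided program slicing as one ingredient. The sketch below is only a toy analogy of that idea, not TSV's implementation: given a program dependence graph, it keeps just the statements the checked property transitively depends on, so the verifier sees a smaller program. The node names are hypothetical.

```python
# Toy analogy of property-guided slicing: keep only the nodes that the property
# assertion transitively depends on in a program dependence graph (PDG), where
# edges point from a statement to the statements it depends on.
def backward_slice(dependence_graph, property_node):
    """dependence_graph: dict node -> iterable of nodes it depends on."""
    keep, stack = {property_node}, [property_node]
    while stack:
        n = stack.pop()
        for dep in dependence_graph.get(n, ()):
            if dep not in keep:
                keep.add(dep)
                stack.append(dep)
    return keep

# Hypothetical PDG: the assertion depends on only a fraction of the program
pdg = {"assert_1": ["x_def"], "x_def": ["input_1"],
       "y_def": ["input_2"], "log_call": ["y_def"]}
print(backward_slice(pdg, "assert_1"))   # {'assert_1', 'x_def', 'input_1'}
```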
Hardware information flow analysis detects security vulnerabilities resulting from unintended design flaws, timing channels, and hardware Trojans. These information flow models are typically generated in a general way and therefore include a significant amount of redundancy that is irrelevant to the specified security properties. In this work, we propose a property-specific approach to information flow security. We create information flow models tailored to the properties to be verified by performing a property-specific search to identify security-critical paths. This helps find suspicious signals that require closer inspection and quickly eliminates portions of the design that are free of security violations. Our property-specific trimming technique reduces the complexity of the security model; this accelerates security verification and restricts potential security violations to a smaller region, which helps quickly pinpoint hardware security vulnerabilities.
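As a rough graph analogy of the trimming idea (not the tool's actual hardware IFT models), the sketch below keeps only the signals that can lie on a flow from a property's source signal to its sink signal; everything else can be pruned for that property. The graph representation is an assumption made for illustration.

```python
# Property-specific trimming analogy: a signal is security-critical for a
# (source, sink) property only if it is forward-reachable from the source and
# backward-reachable from the sink; all other signals can be trimmed.
from collections import deque

def reachable(graph, start, reverse=False):
    """graph: dict signal -> set of signals it flows into."""
    if reverse:
        rg = {}
        for u, vs in graph.items():
            for v in vs:
                rg.setdefault(v, set()).add(u)
        graph = rg
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def security_critical_signals(graph, source, sink):
    return reachable(graph, source) & reachable(graph, sink, reverse=True)
```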
Cybercrime has understandably been regarded as a consequence of the advent and perceived success of computer and Internet technologies. Affecting the privacy, trust, finances, and welfare of wealthy and low-income individuals and organizations alike, this menace shows no sign of slowing down. Reports across the world have consistently shown exponential increases in the number and cost of cyber-incidents, and, more worryingly, low conviction rates for cybercriminals over the years. Stakeholders increasingly explore ways to keep up with containing cyber-incidents by devising tools and techniques to increase the overall efficiency of investigations, but the gap keeps getting wider. However, criminal profiling, an investigative technique proven to provide accurate and valuable direction to traditional crime investigations, has not seen widespread application to cybercrime investigations, nor a formal methodology for such application, because it is difficult to transfer seamlessly. To address this problem, this paper seeks to identify the benefits criminal profiling has brought to successful traditional crime investigations and the benefits it could bring to cybercrime investigations, to identify the challenges the cyber-scene poses to its use in cybercrime investigations, and to propose a practicable solution.
In this paper, we consider ways of organizing group authentication, as well as the features of constructing isogenies of elliptic curves. The work includes a study of isogeny graphs and their application in post-quantum systems. A hierarchical group authentication scheme has been developed using transformations based on the search for isogenies of elliptic curves.
For decades, embedded systems, ranging from intelligence, surveillance, and reconnaissance (ISR) sensors to electronic warfare and electronic signal intelligence systems, have been an integral part of U.S. Department of Defense (DoD) mission systems. These embedded systems are increasingly the targets of deliberate and sophisticated attacks. Developers thus need to focus equally on functionality and security in both hardware and software development. For critical missions, these systems must be trusted to perform their intended functions, prevent attacks, and even operate with resilience under attack. The processor in a critical system must thus provide not only a root of trust, but also a foundation to monitor mission functions, detect anomalies, and perform recovery. We have developed a Lincoln Asymmetric Multicore Processing (LAMP) architecture, which mitigates adversarial cyber effects with separation and cryptography and provides a foundation for building a resilient embedded system. We will describe a design environment that we have created to enable the co-design of functionality and security for mission assurance.
Modern software development and deployment practices encourage complexity and bloat while unintentionally sacrificing efficiency and security. A major driver of this is the overwhelming emphasis on programmers' productivity. The constant demands to speed up development while reducing costs have forced a series of individual decisions and approaches throughout software engineering history that have led to this point. The current state of the practice in the field is a patchwork of architectures and frameworks, packed full of features in order to appeal to the greatest number of people, cover obscure use cases, maximize code reuse, and minimize developer effort. The Office of Naval Research (ONR) Total Platform Cyber Protection (TPCP) program seeks to de-bloat software binaries late in the life-cycle with little or no access to the source code or the development process.
Feature extraction and feature selection are the first tasks in pre-processing input logs in order to detect cybersecurity threats and attacks with data mining techniques from the field of Artificial Intelligence. When analyzing heterogeneous data derived from different sources, these tasks prove time-consuming and difficult to manage efficiently. In this paper, we present an approach for handling feature extraction and feature selection utilizing machine learning algorithms for security analytics of heterogeneous data derived from different network sensors. The approach is implemented in Apache Spark, using its Python API, pyspark.
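As an illustration of the two pre-processing steps named above, here is a short pyspark sketch. The input path, column names, and the choice of a chi-squared selector are assumptions for illustration, not the paper's exact pipeline; "is_attack" is assumed to be a numeric 0/1 label column.

```python
# Hedged pyspark sketch: feature extraction (VectorAssembler) followed by
# feature selection (ChiSqSelector) over heterogeneous sensor logs.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, ChiSqSelector

spark = SparkSession.builder.appName("security-analytics").getOrCreate()
# Hypothetical merged log file; schema is assumed for illustration
logs = spark.read.csv("hdfs:///logs/merged_sensor_logs.csv",
                      header=True, inferSchema=True)

# Feature extraction: collect numeric per-event attributes into one vector column
assembler = VectorAssembler(
    inputCols=["duration", "src_bytes", "dst_bytes", "failed_logins"],
    outputCol="features")
vectorized = assembler.transform(logs)

# Feature selection: keep the features most associated with the attack label
selector = ChiSqSelector(numTopFeatures=2, featuresCol="features",
                         labelCol="is_attack", outputCol="selected_features")
selected = selector.fit(vectorized).transform(vectorized)
selected.select("selected_features", "is_attack").show(5)
```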
The key factor in deploying successful services is the service design practices adopted by an enterprise. Design-level information should be validated, and measures are needed to quantify structural attributes. Metrics at this stage support the early discovery of design flaws and help designers predict the capabilities of service oriented architecture (SOA) adoption. In this work, we take a deeper look at how to forecast two key SOA capabilities, infrastructure efficiency and service reuse, from service designs modeled in the SOA modeling language. The proposed approach defines metrics based on the structural and domain-level similarity of service operations. The proposed metrics are analytically validated against software engineering metric properties. Moreover, a tool has been developed to automate the proposed approach, and the results indicate that the metrics predict SOA capabilities at the service design stage. This work can be further extended to predict business-oriented capabilities of SOA adoption such as flexibility and agility.
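One plausible building block for such metrics is a pairwise similarity between service operations. The sketch below combines token overlap of operation names with Jaccard similarity of their input/output types; the weights, the tokenization, and this particular combination are assumptions for illustration, and the paper's actual structural and domain-level similarity definitions may differ. Aggregating such pairwise scores over a design model could then feed reuse-oriented metrics.

```python
# Hedged sketch of a pairwise similarity between two service operations.
import re

def tokenize(identifier):
    # split camelCase identifiers into lowercase tokens, e.g. "getOrder" -> {"get", "order"}
    return {t.lower() for t in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", identifier)}

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def operation_similarity(op1, op2, w_name=0.5, w_struct=0.5):
    """op = {'name': 'getCustomerOrder', 'inputs': [...types...], 'outputs': [...types...]}"""
    name_sim = jaccard(tokenize(op1["name"]), tokenize(op2["name"]))
    struct_sim = jaccard(op1["inputs"] + op1["outputs"],
                         op2["inputs"] + op2["outputs"])
    return w_name * name_sim + w_struct * struct_sim

# Example with two hypothetical operations
a = {"name": "getCustomerOrder", "inputs": ["CustomerId"], "outputs": ["Order"]}
b = {"name": "fetchOrder", "inputs": ["OrderId"], "outputs": ["Order"]}
print(operation_similarity(a, b))
```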
The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity limits not only the understanding but also the acceptance of, for example, deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.
A distributed secure data management system for the Internet of Things (IoT) is characterized by authentication and privacy policies that preserve data integrity. Multi-phase security and privacy policies ensure confidentiality and trust between users and service providers. In this regard, we present a novel Two-phase Incentive-based Secure Key (TISK) system for distributed data management in IoT. The proposed system classifies the IoT user nodes and assigns low-level and high-level security keys for data transactions. Low-level secure keys are generic lightweight keys used by data collector nodes and data aggregator nodes for trusted transactions. The TISK phase-I Generic Service Manager (GSM-C) module verifies the IoT devices based on their self-trust and server-trust incentive levels. High-level secure keys are dedicated special-purpose keys utilized by data manager nodes and data expert nodes for authorized transactions. The TISK phase-II Dedicated Service Manager (DSM-C) module verifies the certificates issued by the GSM-C module and further issues high-level secure keys to data manager nodes and data expert nodes for specific-purpose transactions. Simulation results indicate that the proposed TISK system reduces key complexity and key cost while ensuring distributed secure data management in the IoT network.
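The abstract outlines a two-phase issuance flow (GSM-C issues low-level keys and certificates; DSM-C verifies those certificates and issues high-level keys). The sketch below is only a generic HMAC-based analogy of that flow; the key formats, the trust-incentive checks, and the certificate scheme are all assumptions, not taken from the paper.

```python
# Generic analogy of a two-phase key issuance flow; TISK's actual scheme is not
# reproduced here. In a real deployment DSM-C would hold a separate verification
# key rather than GSM-C's master secret (simplified here for brevity).
import hmac, hashlib, secrets

GSM_MASTER = secrets.token_bytes(32)   # held by the Generic Service Manager (phase I)
DSM_MASTER = secrets.token_bytes(32)   # held by the Dedicated Service Manager (phase II)

def phase1_issue(device_id: str):
    """GSM-C: give a verified collector/aggregator node a low-level key and a certificate."""
    low_key = hmac.new(GSM_MASTER, b"low|" + device_id.encode(), hashlib.sha256).digest()
    cert = hmac.new(GSM_MASTER, b"cert|" + device_id.encode(), hashlib.sha256).hexdigest()
    return low_key, cert

def phase2_issue(device_id: str, cert: str):
    """DSM-C: check the phase-I certificate, then issue a high-level key."""
    expected = hmac.new(GSM_MASTER, b"cert|" + device_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(cert, expected):
        raise PermissionError("certificate rejected")
    return hmac.new(DSM_MASTER, b"high|" + device_id.encode(), hashlib.sha256).digest()

low_key, cert = phase1_issue("aggregator-17")
high_key = phase2_issue("aggregator-17", cert)
```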
Data storage in the cloud should come with high safety and confidentiality. It is the responsibility of the cloud service provider to guarantee the availability and security of client data. Various alternatives exist for storage services, but confidentiality and complexity solutions for database-as-a-service are still not satisfactory. The proposed system offers an alternative solution for database-as-a-service that integrates the benefits of different services with advanced encryption techniques. It makes it possible to apply concurrency to encrypted data. This alternative allows dispersed clients to connect without an intermediate proxy, thereby achieving simplicity. The performance of the proposed system is evaluated on the basis of theoretical analysis.
Current software platforms for service composition are based on orchestration, choreography, or hierarchical orchestration. However, such approaches support only partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.
Current technologies, including cloud computing, social networking, mobile applications, and crowd and synthetic intelligence, coupled with the explosion in storage and processing power, are giving rise to massive-scale marketplaces for a wide variety of resources and services. They are also enabling unprecedented forms and levels of collaboration among human and machine entities. In this new era, trust remains the keystone of success in any relationship between two or more parties. A primary challenge is to establish and manage trust in environments where massive numbers of consumers, providers, and brokers are largely autonomous, with vastly diverse requirements, capabilities, and trust profiles. Most contemporary trust management solutions are oblivious to diversity in trustors' requirements and contexts, utilize direct or indirect experience as the only form of trust computation, employ hardcoded trust computations, and only marginally consider collaboration in trust management. We argue for a reference architecture for trust management to guide the development of a wide spectrum of trust management systems. In our previous work, we presented a preliminary reference architecture for trust management that provides customizable and reconfigurable trust management operations to accommodate varying levels of diversity and trust personalization. In this paper, we present a comprehensive taxonomy for trust management and extend our reference architecture to feature collaboration as a first-class object. Our goal is to promote the development of new collaborative trust management systems in which various trust management operations involve collaborating entities. Using the proposed architecture, we implemented a collaborative personalized trust management system. Simulation results demonstrate the effectiveness and efficiency of our system.
Because signatures are unavailable for previously unknown malware, non-signature malware detection schemes typically rely on analyzing program behavior. Prior behavior-based non-signature malware detection schemes are either easily evaded by obfuscation or very inefficient in terms of storage space and detection time. In this paper, we propose GZero, a graph-theoretic approach to fast and accurate non-signature malware detection at end hosts. GZero is effective while being efficient in terms of both storage space and detection time. We conducted experiments on a large set of both benign software and malware. Our results show that GZero achieves more than a 99% detection rate and a false positive rate of less than 1%, with less than 1 second of average scan time per program, and is relatively robust to obfuscation attacks. Due to its low overheads, GZero can complement existing malware detection solutions at end hosts.
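The abstract does not detail GZero's graph construction or matching. Purely to illustrate the general flavor of graph-theoretic behavior matching, the sketch below builds a system-call dependency graph from a trace and checks edge overlap against known malicious behavior graphs; every detail here (graph definition, handle-based edges, overlap threshold) is an assumption, not the paper's method.

```python
# Very rough sketch of one generic graph-theoretic detection idea, not GZero itself.
import networkx as nx

def behavior_graph(trace):
    """trace: list of (syscall, handle) events; edges connect successive calls on a handle."""
    g = nx.DiGraph()
    last_call_for = {}
    for call, handle in trace:
        g.add_node(call)
        if handle in last_call_for:
            g.add_edge(last_call_for[handle], call)
        last_call_for[handle] = call
    return g

def looks_malicious(candidate, malicious_graphs, threshold=0.8):
    """Flag the candidate if it covers most edges of any known malicious behavior graph."""
    for m in malicious_graphs:
        common = set(candidate.edges) & set(m.edges)
        if m.number_of_edges() and len(common) / m.number_of_edges() >= threshold:
            return True
    return False
```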