Biblio

Filters: Keyword is Collaboration
2019-10-30
Redmiles, Elissa M., Zhu, Ziyun, Kross, Sean, Kuchhal, Dhruv, Dumitras, Tudor, Mazurek, Michelle L..  2018.  Asking for a Friend: Evaluating Response Biases in Security User Studies. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1238-1255.

The security field relies on user studies, often including survey questions, to query end users' general security behavior and experiences, or hypothetical responses to new messages or tools. Self-report data has many benefits – ease of collection, control, and depth of understanding – but also many well-known biases stemming from people's difficulty remembering prior events or predicting how they might behave, as well as their tendency to shape their answers to a perceived audience. Prior work in fields like public health has focused on measuring these biases and developing effective mitigations; however, there is limited evidence as to whether and how these biases and mitigations apply specifically in a computer-security context. In this work, we systematically compare real-world measurement data to survey results, focusing on an exemplar, well-studied security behavior: software updating. We align field measurements about specific software updates (n=517,932) with survey results in which participants respond to the update messages that were used when those versions were released (n=2,092). This allows us to examine differences in self-reported and observed update speeds, as well as to examine self-reported responses to particular message features that may correlate with these results. The results indicate that for the most part, self-reported data varies consistently and systematically with measured data. However, this systematic relationship breaks down when survey respondents are required to notice and act on minor details of experimental manipulations. Our results suggest that many insights from self-report security data can, when used with care, translate to real-world environments; however, insights about specific variations in message texts or other details may be more difficult to assess with surveys.

2019-11-26
Pradhan, Srikanta, Tripathy, Somanath, Nandi, Sukumar.  2018.  Blockchain Based Security Framework for P2P Filesharing System. 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). :1-6.

Peer to Peer (P2P) is a dynamic and self-organized technology, popularly used in file-sharing applications to achieve better performance and to avoid a single point of failure. The popularity of this network has attracted many attackers mounting different attacks, including the Sybil attack, the Routing Table Insertion (RTI) attack, and free riding. Many mitigation methods have been proposed to defend against or reduce the impact of such attacks. However, most of those approaches are protocol specific. In this work, we propose a Blockchain based security framework for P2P networks to address such security issues, which can be tailored to any P2P file-sharing system.
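
A minimal Python sketch of the tamper-evidence property that blockchain-based frameworks build on: each block commits to its predecessor's hash, so rewriting any earlier record invalidates every later block. This is a generic illustration with invented payload fields, not the paper's framework.

```python
import hashlib, json

def make_block(prev_hash: str, payload: dict) -> dict:
    # The hash covers the previous block's hash and this block's payload.
    body = {"prev": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain) -> bool:
    for i, blk in enumerate(chain):
        body = {"prev": blk["prev"], "payload": blk["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != blk["hash"]:
            return False                      # record was altered
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False                      # chain was re-linked
    return True

genesis = make_block("0" * 64, {"peer": "A", "file": "doc.txt"})
chain = [genesis, make_block(genesis["hash"], {"peer": "B", "file": "doc.txt"})]
assert verify_chain(chain)
chain[0]["payload"]["peer"] = "M"             # tamper with an earlier record...
assert not verify_chain(chain)                # ...and verification fails
```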

2019-04-05
Yamanoue, Takashi.  2018.  A Botnet Detecting Infrastructure Using a Beneficial Botnet. Proceedings of the 2018 ACM on SIGUCCS Annual Conference. :35-42.

A beneficial botnet, which tries to cope with the technology of malicious botnets such as peer to peer (P2P) networking and Domain Generation Algorithms (DGA), is discussed. In order to cope with such botnets' technology, we are developing a beneficial botnet as an anti-bot measure, building on our previous beneficial bot. The beneficial botnet is a group of beneficial bots. The P2P communication of a malicious botnet is hard to detect with a single Intrusion Detection System (IDS). Our beneficial botnet can detect P2P communication through the collaboration of our beneficial bots. The beneficial bot could detect communication of a pseudo botnet which mimics malicious botnet communication. Our beneficial botnet may also detect communication using DGA. Furthermore, our beneficial botnet can cope with the new technology of new botnets, because it has the ability to evolve, just as malicious botnets do.

2019-09-26
Reijers, Niels, Shih, Chi-Sheng.  2018.  CapeVM: A Safe and Fast Virtual Machine for Resource-Constrained Internet-of-Things Devices. Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems. :250-263.

This paper presents CapeVM, a sensor node virtual machine aimed at delivering both high performance and a sandboxed execution environment that ensures malicious code cannot corrupt the VM's internal state or perform actions not allowed by the VM. CapeVM uses Ahead-of-Time compilation and introduces a range of optimisations to eliminate most of the overhead present in previous work on sensor node AOT compilers. A sandboxed execution environment is guaranteed by a set of checks. The structured nature of the VM's instruction set allows the VM to perform most checks at load time, reducing the need for expensive run-time checks compared to native code approaches. While some overhead from using a VM and adding sandbox checks cannot be avoided, CapeVM's optimisations reduce this overhead dramatically. We evaluate CapeVM using a set of IoT applications and show this results in a performance just 2.1x slower than unsandboxed native code. Thus, CapeVM combines the desirable properties of existing work on both sandboxed execution and virtual machines for sensor nodes, with significantly improved performance.
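
The load-time checking idea generalizes beyond CapeVM. Below is a hedged Python toy of a loader that rejects bytecode with out-of-range branch targets or obvious stack misuse before it ever runs; the instruction set is invented, and a real verifier would track stack depth along every control-flow path rather than in a single linear pass.

```python
def verify(bytecode, max_stack=8):
    """bytecode: list of (op, arg) pairs; toy ops are PUSH, ADD, JMP, HALT."""
    depth = 0
    for pc, (op, arg) in enumerate(bytecode):
        if op == "PUSH":
            depth += 1
            if depth > max_stack:
                raise ValueError(f"possible stack overflow at {pc}")
        elif op == "ADD":
            if depth < 2:
                raise ValueError(f"stack underflow at {pc}")
            depth -= 1
        elif op == "JMP":
            if not 0 <= arg < len(bytecode):
                raise ValueError(f"branch target {arg} out of range at {pc}")
        elif op != "HALT":
            raise ValueError(f"unknown opcode {op!r} at {pc}")
    return True

verify([("PUSH", 1), ("PUSH", 2), ("ADD", None), ("HALT", None)])  # accepted
# verify([("JMP", 99)]) would be rejected at load time, so the interpreter
# never needs a per-dispatch bounds check on branch targets.
```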

Elliott, A. S., Ruef, A., Hicks, M., Tarditi, D..  2018.  Checked C: Making C Safe by Extension. 2018 IEEE Cybersecurity Development (SecDev). :53-60.

This paper presents Checked C, an extension to C designed to support spatial safety, implemented in Clang and LLVM. Checked C's design is distinguished by its focus on backward-compatibility, incremental conversion, developer control, and enabling highly performant code. Like past approaches to a safer C, Checked C employs a form of checked pointer whose accesses can be statically or dynamically verified. Performance evaluation on a set of standard benchmark programs shows overheads to be relatively low. More interestingly, Checked C introduces the notions of a checked region and bounds-safe interfaces.

2019-03-06
Wang, Jiawen, Wang, Wai Ming, Tian, Zonggui, Li, Zhi.  2018.  Classification of Multiple Affective Attributes of Customer Reviews: Using Classical Machine Learning and Deep Learning. Proceedings of the 2nd International Conference on Computer Science and Application Engineering. :94:1-94:5.

Affective engineering is a methodology of designing products by collecting customer affective needs and translating them into product designs. It usually begins with questionnaire surveys to collect customer affective demands and responses. However, this process is expensive and can only be conducted periodically on a small scale. With the rapid development of e-commerce, a large number of customer product reviews are available on the Internet. Many studies have been done using opinion mining and sentiment analysis. However, the existing studies focus on polarity classification from a single perspective (such as positive and negative). The classification of multiple affective attributes receives less attention. In this paper, 3-class classifications of four different affective attributes (i.e. Soft-Hard, Appealing-Unappealing, Handy-Bulky, and Reliable-Shoddy) are performed by using two classical machine learning algorithms (i.e. Softmax regression and Support Vector Machine) and two deep learning methods (i.e. Restricted Boltzmann machines and Deep Belief Network) on an Amazon dataset. The results show that the accuracy of the deep learning methods is above 90%, while the accuracy of the classical machine learning methods is about 64%. This indicates that the deep learning methods are significantly better than the classical machine learning methods.
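
A hedged sketch of the classical baseline in Python with scikit-learn: 3-class classification of one affective attribute (Soft-Hard) with softmax regression and a linear SVM. The reviews and labels are invented placeholders, not the Amazon dataset used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression  # softmax (multinomial) regression
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = ["feels soft and comfortable", "rock hard and stiff",
           "neither soft nor hard, just average", "very hard shell",
           "incredibly soft padding", "somewhere in between"]
labels = ["soft", "hard", "neutral", "hard", "soft", "neutral"]

for clf in (LogisticRegression(max_iter=1000), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(reviews, labels)
    print(type(clf).__name__, model.predict(["a bit too hard for me"]))
```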

2019-09-13
Damacharla, P., Javaid, A. Y., Gallimore, J. J., Devabhaktuni, V. K..  2018.  Common Metrics to Benchmark Human-Machine Teams (HMT): A Review. IEEE Access. 6:38637-38655.

A significant amount of work is invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring system performance of an HMT is crucial for moving the design of these systems forward. Metrics are the enabling tools to devise a benchmark in any system and serve as an evaluation platform for assessing the performance, along with the verification and validation, of a system. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems. Therefore, identification and classification of common metrics are imperative to create a benchmark in the HMT field. The key focus of this review is to conduct a detailed survey aimed at identification of metrics employed in different segments of HMT and to determine the common metrics that can be used in the future to benchmark HMTs. We have organized this review as follows: identification of metrics used in HMTs until now, and classification based on functionality and measuring techniques. Additionally, we have attempted to analyze all the identified metrics in detail while classifying them as theoretical, applied, real-time, non-real-time, measurable, and observable metrics. We conclude this review with a detailed analysis of the identified common metrics along with their usage to benchmark HMTs.

2019-10-30
Belkin, Maxim, Haas, Roland, Arnold, Galen Wesley, Leong, Hon Wai, Huerta, Eliu A., Lesny, David, Neubauer, Mark.  2018.  Container Solutions for HPC Systems: A Case Study of Using Shifter on Blue Waters. Proceedings of the Practice and Experience on Advanced Research Computing. :43:1-43:8.

Software container solutions have revolutionized application development approaches by enabling lightweight platform abstractions within the so-called "containers." Several solutions are being actively developed in attempts to bring the benefits of containers to high-performance computing systems with their stringent security demands on the one hand and fundamental resource sharing requirements on the other. In this paper, we discuss the benefits and shortcomings of such solutions when deployed on real HPC systems and applied to production scientific applications. We highlight use cases that are either enabled by or significantly benefit from such solutions. We discuss the efforts by HPC system administrators and support staff to support users of these types of workloads on HPC systems not initially designed with them in mind, focusing on NCSA's Blue Waters system.

2019-08-05
Gerard, B., Rebaï, S. B., Voos, H., Darouach, M..  2018.  Cyber Security and Vulnerability Analysis of Networked Control System Subject to False-Data Injection. 2018 Annual American Control Conference (ACC). :992-997.

In the present paper, the problem of networked control system (NCS) cyber security is considered. The geometric approach is used to evaluate the security and vulnerability level of the controlled system. The proposed results concern the so-called false-data injection attacks and show how imperfectly known disturbances can be used to perform undetectable, or at least stealthy, attacks that can make the NCS vulnerable to malicious outsiders. A numerical example is given to illustrate the approach.
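
For background on why such attacks can evade detection, the classic static state-estimation argument (due to Liu et al.; the paper itself works in a geometric, dynamic NCS setting) shows that any attack lying in the column space of the measurement matrix leaves the detection residual unchanged:

```latex
% Measurements z = Hx + e, least-squares estimate \hat{x}, residual r = z - H\hat{x}.
% Injecting a = Hc shifts the estimate by c and leaves the residual intact:
z_a = Hx + Hc + e = H(x + c) + e
\quad\Rightarrow\quad
r_a = z_a - H(\hat{x} + c) = z - H\hat{x} = r
```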

2019-10-30
Demoulin, Henri Maxime, Vaidya, Tavish, Pedisich, Isaac, DiMaiolo, Bob, Qian, Jingyu, Shah, Chirag, Zhang, Yuankai, Chen, Ang, Haeberlen, Andreas, Loo, Boon Thau et al..  2018.  DeDoS: Defusing DoS with Dispersion Oriented Software. Proceedings of the 34th Annual Computer Security Applications Conference. :712-722.

This paper presents DeDoS, a novel platform for mitigating asymmetric DoS attacks. These attacks are particularly challenging since even attackers with limited resources can exhaust the resources of well-provisioned servers. DeDoS offers a framework to deploy code in a highly modular fashion. If part of the application stack is experiencing a DoS attack, DeDoS can massively replicate only the affected component, potentially across many machines. This allows scaling of the impacted resource separately from the rest of the application stack, so that resources can be precisely added where needed to combat the attack. Our evaluation results show that DeDoS incurs reasonable overheads in normal operations, and that it significantly outperforms standard replication techniques when defending against a range of asymmetric attacks.

2019-11-12
Padon, Oded.  2018.  Deductive Verification of Distributed Protocols in First-Order Logic. 2018 Formal Methods in Computer Aided Design (FMCAD). :1-1.

Formal verification of infinite-state systems, and distributed systems in particular, is a long-standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to usability of these tools. This problem is particularly acute in case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
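
Concretely, for a transition system the verification conditions take their textbook shape (a standard formulation, not specific to this talk): an inductive invariant Inv must hold initially and be preserved by every transition, and each condition is discharged as a validity query to a prover.

```latex
\mathit{Init}(s) \;\rightarrow\; \mathit{Inv}(s)
  \qquad \text{(initiation)} \\
\mathit{Inv}(s) \,\land\, T(s, s') \;\rightarrow\; \mathit{Inv}(s')
  \qquad \text{(consecution)}
```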

2019-10-15
Zhang, F., Deng, Z., He, Z., Lin, X., Sun, L..  2018.  Detection of Shilling Attack in Collaborative Filtering Recommender System by PCA and Data Complexity. 2018 International Conference on Machine Learning and Cybernetics (ICMLC). 2:673-678.

Collaborative filtering (CF) recommender systems have been widely used for their good performance in personalized recommendation, but CF recommender systems are vulnerable to shilling attacks, in which shilling attack profiles are injected into the system by attackers to affect recommendations. Designing robust recommender systems and proposing attack detection methods are the main research directions for handling shilling attacks, among which unsupervised PCA is particularly effective in experiments; however, if we have no information about the number of shilling attack profiles, unsupervised PCA suffers. In this paper, a new unsupervised detection method which combines PCA and data complexity is proposed to detect shilling attacks. In the proposed method, PCA is used to select suspected attack profiles, and data complexity is used to pick out the authentic profiles from the suspected attack profiles. Compared with traditional PCA, the proposed method performs well, and there is no need to determine the number of shilling attack profiles in advance.
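
A minimal Python sketch of the PCA step, under the simplifying assumption that shilling profiles are near-duplicates of one another. The selection rule below (tightest nearest-neighbor distances in the principal subspace) is an illustrative stand-in for the paper's method, and its fixed cutoff is exactly what the paper's data-complexity stage is meant to replace.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
genuine = rng.integers(1, 6, size=(50, 20)).astype(float)         # 50 diverse users
attack = np.tile(rng.integers(1, 6, size=(1, 20)).astype(float), (10, 1))
attack += rng.normal(0, 0.1, attack.shape)                        # 10 near-identical shills
ratings = np.vstack([genuine, attack])

proj = PCA(n_components=2).fit_transform(ratings)                 # principal subspace
dist = squareform(pdist(proj))
np.fill_diagonal(dist, np.inf)
suspected = np.argsort(dist.min(axis=1))[:10]                     # tightest profiles
print("suspected shilling profiles:", np.sort(suspected))         # expect indices 50..59
```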

2019-03-06
Khalil, Issa M., Guan, Bei, Nabeel, Mohamed, Yu, Ting.  2018.  A Domain Is Only As Good As Its Buddies: Detecting Stealthy Malicious Domains via Graph Inference. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :330-341.

Inference based techniques are one of the major approaches to analyze DNS data and detect malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new approach to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak "co-IP" relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed approach not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithms are specifically designed for DNS data analysis. They are effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improve the inference efficiency, we construct a new domain-IP graph that can work well with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only a minor impact on detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.
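
A toy Python sketch of inference over a domain-IP graph follows. It uses damped label propagation as a lightweight stand-in for the belief propagation algorithm the paper employs; the domains, IPs, seed score, and damping factor are invented, and the separation of dedicated from public IPs is assumed to have already been done.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("evil1.example", "203.0.113.5"),     # dedicated IP shared by two domains
    ("evil2.example", "203.0.113.5"),
    ("benign.example", "198.51.100.7"),   # unrelated dedicated IP
])

scores = {n: 0.0 for n in G}
scores["evil1.example"] = 1.0             # seed: known malicious domain
for _ in range(5):                        # a few propagation rounds
    updated = {}
    for node in G:
        spread = 0.9 * max(scores[n] for n in G.neighbors(node))  # damped
        updated[node] = max(scores[node], spread)
    scores = updated

for domain in ("evil1.example", "evil2.example", "benign.example"):
    print(domain, round(scores[domain], 2))   # evil2 inherits high suspicion
```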

2019-09-26
Pfeffer, T., Herber, P., Druschke, L., Glesner, S..  2018.  Efficient and Safe Control Flow Recovery Using a Restricted Intermediate Language. 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :235-240.

Approaches for the automatic analysis of security policies on the source code level cannot trivially be applied to binaries. This is due to the lack of high-level semantics in low-level object code, and the fundamental problem that control-flow recovery from binaries is difficult. We present a novel approach to recover the control flow of binaries that is both safe and efficient. The key idea of our approach is to use the information contained in security mechanisms to approximate the targets of computed branches. To achieve this, we first define a restricted control transition intermediate language (RCTIL), which restricts the number of possible targets for each branch to a finite number of given targets. Based on this intermediate language, we demonstrate how a safe model of the control flow can be recovered without data-flow analyses. Our evaluation shows that this makes our solution more efficient than existing solutions.

2020-03-09
Gope, Prosanta, Sikdar, Biplab.  2018.  An Efficient Privacy-Preserving Dynamic Pricing-Based Billing Scheme for Smart Grids. 2018 IEEE Conference on Communications and Network Security (CNS). :1–2.

This paper proposes a lightweight and privacy-preserving data aggregation scheme for dynamic electricity pricing based billing in smart grids using the concept of single-pass authenticated encryption (AE). Unlike existing literature that only considers static pricing, to the best of our knowledge, this is the first paper to address privacy under dynamic pricing.
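
The single-pass AE building block itself is standard. A hedged Python sketch with AES-GCM from the `cryptography` package is below; it illustrates authenticated encryption of a meter reading only, not the paper's aggregation or dynamic-pricing protocol, and the field names are invented.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)                       # never reuse a nonce under one key
reading = b"kwh=3.7;slot=2018-06-01T13:30"   # confidential meter data
aad = b"meter-id:42"                         # authenticated but not encrypted
ct = aead.encrypt(nonce, reading, aad)       # one pass: secrecy + integrity
assert aead.decrypt(nonce, ct, aad) == reading
```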

2019-03-06
Liu, Y., Wang, Y., Lombardi, F., Han, J..  2018.  An Energy-Efficient Stochastic Computational Deep Belief Network. 2018 Design, Automation Test in Europe Conference Exhibition (DATE). :1175-1178.

Deep neural networks (DNNs) are effective machine learning models to solve a large class of recognition problems, including the classification of nonlinearly separable patterns. The applications of DNNs are, however, limited by the large size and high energy consumption of the networks. Recently, stochastic computation (SC) has been considered to implement DNNs to reduce the hardware cost. However, it requires a large number of random number generators (RNGs) that lower the energy efficiency of the network. To overcome these limitations, we propose the design of an energy-efficient deep belief network (DBN) based on stochastic computation. An approximate SC activation unit (A-SCAU) is designed to implement different types of activation functions in the neurons. The A-SCAU is immune to signal correlations, so the RNGs can be shared among all neurons in the same layer with no accuracy loss. The area and energy of the proposed design are 5.27% and 3.31% (or 26.55% and 29.89%) of a 32-bit floating-point (or an 8-bit fixed-point) implementation. It is shown that the proposed SC-DBN design achieves a higher classification accuracy compared to the fixed-point implementation. The accuracy is only 0.12% lower than the floating-point design at a similar computation speed, but with a significantly lower energy consumption.
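
The core trick of stochastic computation is compact enough to show directly: a value p in [0,1] becomes the fraction of 1s in a random bitstream, and multiplication reduces to a bitwise AND of two independent streams. The Python sketch below shows only this encoding, not the paper's A-SCAU activation unit or RNG-sharing scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                         # stream length; error shrinks like 1/sqrt(N)
a, b = 0.6, 0.3
stream_a = rng.random(N) < a        # unipolar encoding of a
stream_b = rng.random(N) < b        # independent stream for b
product = np.mean(stream_a & stream_b)   # an AND gate multiplies probabilities
print(f"SC estimate {product:.4f} vs exact {a * b:.4f}")
```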

2019-11-12
Basin, David, Dreier, Jannik, Hirschi, Lucca, Radomirovic, Saša, Sasse, Ralf, Stettler, Vincent.  2018.  A Formal Analysis of 5G Authentication. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1383-1396.

Mobile communication networks connect much of the world's population. The security of users' calls, SMSs, and mobile data depends on the guarantees provided by the Authenticated Key Exchange protocols used. For the next-generation network (5G), the 3GPP group has standardized the 5G AKA protocol for this purpose. We provide the first comprehensive formal model of a protocol from the AKA family: 5G AKA. We also extract precise requirements from the 3GPP standards defining 5G and we identify missing security goals. Using the security protocol verification tool Tamarin, we conduct a full, systematic, security evaluation of the model with respect to the 5G security goals. Our automated analysis identifies the minimal security assumptions required for each security goal and we find that some critical security goals are not met, except under additional assumptions missing from the standard. Finally, we make explicit recommendations with provably secure fixes for the attacks and weaknesses we found. 

Duan, Zhangbo, Mao, Hongliang, Chen, Zhidong, Bai, Xiaomin, Hu, Kai, Talpin, Jean-Pierre.  2018.  Formal Modeling and Verification of Blockchain System. Proceedings of the 10th International Conference on Computer Modeling and Simulation. :231-235.

As a decentralized and distributed secure storage technology, the notion of blockchain is now widely used for electronic trading in finance, for issuing digital certificates, for copyright management, and for many other security-critical applications. With applications in so many domains with high-assurance requirements, the formalization and verification of the safety and security properties of blockchain becomes essential, and is the aim of the present paper. We present the model-based formalization, simulation and verification of a blockchain protocol by using the SDL formalism of Telelogic Tau. We consider the hierarchical and modular SDL model of the blockchain protocol and exercise a methodology to formally simulate and verify it. This way, we show how to effectively increase the security and safety of blockchain in order to meet the high assurance requirements demanded by its application domains. Our work also provides effective support for assessing different network consensus algorithms, which are key components of blockchain protocols, as well as the topology of blockchain networks. In conclusion, our approach contributes to setting up a verification methodology for future blockchain standards in digital trading.

2019-02-13
Gür, Kamil Doruk, Polyakov, Yuriy, Rohloff, Kurt, Ryan, Gerard W., Savas, Erkay.  2018.  Implementation and Evaluation of Improved Gaussian Sampling for Lattice Trapdoors. Proceedings of the 6th Workshop on Encrypted Computing & Applied Homomorphic Cryptography. :61–71.

We report on our implementation of a new Gaussian sampling algorithm for lattice trapdoors. Lattice trapdoors are used in a wide array of lattice-based cryptographic schemes including digital signatures, attribute-based encryption, program obfuscation and others. Our implementation provides Gaussian sampling for trapdoor lattices with prime moduli, and supports both single- and multi-threaded execution. We experimentally evaluate our implementation through its use in the GPV hash-and-sign digital signature scheme as a benchmark. We compare our design and implementation with prior work reported in the literature. The evaluation shows that our implementation 1) has smaller space requirements and faster runtime, 2) does not require multi-precision floating-point arithmetic, and 3) can be used for a broader range of cryptographic primitives than previous implementations.
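
For orientation, the basic primitive behind trapdoor sampling is the discrete Gaussian over the integers. The Python sketch below is the textbook rejection sampler, not the paper's improved algorithm (which, notably, avoids multi-precision floating-point arithmetic altogether).

```python
import math, random

def sample_discrete_gaussian(sigma: float, tail: float = 10.0) -> int:
    """Sample z in Z with probability proportional to exp(-z^2 / (2 sigma^2))."""
    bound = math.ceil(tail * sigma)               # truncate negligible tails
    while True:
        z = random.randint(-bound, bound)         # uniform candidate
        if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z                              # accept with Gaussian weight

samples = [sample_discrete_gaussian(3.2) for _ in range(10_000)]
print(sum(samples) / len(samples))                # mean should be close to 0
```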

2019-05-20
Gschwandtner, Mathias, Demetz, Lukas, Gander, Matthias, Maier, Ronald.  2018.  Integrating Threat Intelligence to Enhance an Organization's Information Security Management. Proceedings of the 13th International Conference on Availability, Reliability and Security. :37:1-37:8.

As security incidents might have disastrous consequences for an enterprise's information technology (IT), organizations need to secure their IT against threats. Threat intelligence (TI) promises to provide actionable information about current threats for information security management systems (ISMS). Common information ranges from malware characteristics to observed perpetrator origins that allow customizing security controls. The aim of this article is to assess the impact of utilizing publicly available threat feeds within the corporate process on an organization's information security level. We developed a framework to integrate TI for large corporations and evaluated said framework in cooperation with a globally acting manufacturer and retailer. During the development of the TI framework, a specific provider of TI was analyzed and chosen for integration within the process of vulnerability management. The evaluation of this exemplary integration was assessed by members of the information security department at the cooperating enterprise. During our evaluation, it was emphasized that management activities can be prioritized based on whether threats observed in the wild are targeting them or similar companies. Furthermore, indicators of compromise (IoC) provided by the chosen TI source can be automatically integrated utilizing a provided software development kit. Theoretical relevance is based on the contribution towards the verification of proposed benefits of TI integration, such as increasing the resilience of an enterprise network, within a real-world environment. Overall, practitioners suggest that TI integration should result in enhanced management of security budgets and more resilient enterprise networks.

2019-11-19
Wimmer, Maria A., Boneva, Rositsa, di Giacomo, Debora.  2018.  Interoperability Governance: A Definition and Insights from Case Studies in Europe. Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age. :14:1-14:11.

Interoperability has become a crucial value in European e-government developments, as promoted by the Digital Single Market strategy and the Tallinn Declaration. The European Union and its Member States have made considerable investments in improving the understanding of interoperability and in developing interoperable building blocks to support cross-border data exchange and public service provisioning. This includes recent updates of the European Interoperability Framework (EIF) and European Interoperability Reference Architecture (EIRA), as well as the publication of a number of generic and domain-specific architecture and solution building blocks such as digital identification or electronic delivery services. While in the previous version of the EIF, interoperability governance was not clearly developed, the new version of 2017 puts interoperability governance as a concept that spans the different interoperability layers (legal, organizational, semantic and technical) and that builds the frame for interoperability overall. In this paper, we develop a definition of interoperability governance from a literature review and put forward a model to investigate interoperability governance at European and Member State levels. Based on several case studies of EU institutions and Member States, we draw recommendations on the key aspects of interoperability governance needed to successfully diffuse interoperability into public service provisioning.

Kurnikov, Arseny, Paverd, Andrew, Mannan, Mohammad, Asokan, N..  2018.  Keys in the Clouds: Auditable Multi-Device Access to Cryptographic Credentials. Proceedings of the 13th International Conference on Availability, Reliability and Security. :40:1-40:10.

Personal cryptographic keys are the foundation of many secure services, but storing these keys securely is a challenge, especially if they are used from multiple devices. Storing keys in a centralized location, like an Internet-accessible server, raises serious security concerns (e.g. server compromise). Hardware-based Trusted Execution Environments (TEEs) are a well-known solution for protecting sensitive data in untrusted environments, and are now becoming available on commodity server platforms. Although the idea of protecting keys using a server-side TEE is straightforward, in this paper we validate this approach and show that it enables new desirable functionality. We describe the design, implementation, and evaluation of a TEE-based Cloud Key Store (CKS), an online service for securely generating, storing, and using personal cryptographic keys. Using remote attestation, users receive strong assurance about the behaviour of the CKS, and can authenticate themselves using passwords while avoiding typical risks of password-based authentication like password theft or phishing. In addition, this design allows users to i) define policy-based access controls for keys; ii) delegate keys to other CKS users for a specified time and/or a limited number of uses; and iii) audit all key usages via a secure audit log. We have implemented a proof of concept CKS using Intel SGX and integrated this into GnuPG on Linux and OpenKeychain on Android. Our CKS implementation performs approximately 6,000 signature operations per second on a single desktop PC. The latency is in the same order of magnitude as using locally-stored keys, and 20x faster than smart cards.

2019-09-26
Jackson, K. A., Bennett, B. T..  2018.  Locating SQL Injection Vulnerabilities in Java Byte Code Using Natural Language Techniques. SoutheastCon 2018. :1-5.

With so much of our daily lives relying on digital devices like personal computers and cell phones, there is a growing demand for code that not only functions properly but is secure and keeps user data safe. However, ensuring this is no easy task, and many developers do not have the required skills or resources to ensure their code is secure. Many code analysis tools have been written to find vulnerabilities in newly developed code, but this technology tends to produce many false positives and is still not able to identify all of the problems. Other methods of finding software vulnerabilities automatically are required. This proof-of-concept study applied natural language processing to Java byte code to locate SQL injection vulnerabilities in a Java program. Preliminary findings show that, due to the high number of terms in the dataset, using singular decision trees will not produce a suitable model for locating SQL injection vulnerabilities, while random forest structures proved more promising. Still, further work is needed to determine the best classification tool.
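
The general recipe the study follows can be sketched in a few lines of Python: treat disassembled byte code as text, vectorize it, and train a random forest to flag injection-prone methods. The token strings and labels below are invented placeholders, not the study's actual dataset or feature pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

methods = [
    "aload_1 invokevirtual StringBuilder/append invokeinterface Statement/executeQuery",
    "aload_1 invokeinterface PreparedStatement/setString invokeinterface PreparedStatement/executeQuery",
    "ldc invokevirtual String/concat invokeinterface Statement/executeQuery",
    "aload_2 invokeinterface PreparedStatement/setInt invokeinterface PreparedStatement/executeQuery",
]
labels = [1, 0, 1, 0]   # 1 = string-built query (risky), 0 = parameterized

model = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(methods, labels)
print(model.predict(["iload_3 invokevirtual StringBuilder/append "
                     "invokeinterface Statement/executeQuery"]))
```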

2019-10-15
Coleman, M. S., Doody, D. P., Shields, M. A..  2018.  Machine Learning for Real-Time Data-Driven Security Practices. 2018 29th Irish Signals and Systems Conference (ISSC). :1–6.

The risk of cyber-attacks exploiting vulnerable organisations has increased significantly over the past several years. These attacks may combine to exploit a vulnerability breach within a system's protection strategy, which has the potential for loss, damage or destruction of assets. Consequently, every vulnerability has an accompanying risk, which is defined as the "intersection of assets, threats, and vulnerabilities" [1]. This research project aims to experimentally compare the similarity-based ranking of cyber security information utilising a recommendation environment. The Memory-Based Collaborative Filtering technique was employed, specifically the User-Based and Item-Based approaches. These systems utilised information from the National Vulnerability Database, specifically for the identification and similarity-based ranking of cyber-security vulnerability information relating to hardware and software applications. Experiments were performed using the Item-Based technique to identify the optimum system parameters, evaluated through the AUC evaluation metric. Once identified, the Item-Based technique was compared with the User-Based technique, which utilised the parameters identified from the previous experiments. During these experiments, the Pearson's Correlation Coefficient and the Cosine similarity measure were used. From these experiments, it was identified that the Item-Based technique employing the Cosine similarity measure achieved an AUC evaluation metric of 0.80225.
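
A minimal Python sketch of the Item-Based technique with the Cosine similarity measure follows; the rating matrix is invented toy data, not National Vulnerability Database records, and no AUC evaluation is attempted here.

```python
import numpy as np

# rows = users, columns = vulnerability entries rated for relevance (toy data)
R = np.array([[5, 4, 0, 1],
              [4, 5, 0, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

n_items = R.shape[1]
sim = np.array([[cosine(R[:, i], R[:, j]) for j in range(n_items)]
                for i in range(n_items)])
print(np.argsort(-sim[0])[1:])   # items ranked by similarity to item 0
```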

2019-09-26
Pang, Chengbin, Du, Yunlan, Mao, Bing, Guo, Shanqing.  2018.  Mapping to Bits: Efficiently Detecting Type Confusion Errors. Proceedings of the 34th Annual Computer Security Applications Conference. :518-528.

The features of modularity and inheritance in C++ facilitate developers' work, but also give rise to the problem of type confusion. As an ancestor class may have a different data layout from its descendant class, a dangerous downcast from the ancestor to its descendant can lead to a critical attack, such as control-flow hijacking or out-of-bounds access to neighboring memory areas. As reported in CVE, such vulnerabilities have been found in widely used software, including Google Chrome, Firefox and Adobe Flash Player, and have been on the rise in recent years. The urgency of addressing type confusion problems has quickened the pace of researchers developing corresponding solutions. However, the existing works either handle the problem partially or suffer from high performance and memory overhead, especially on large-scale projects. We present Bitype to check validity explicitly when a type is downcast to another, maintaining high coverage while massively reducing overhead and compilation time. The core of our design is a Safe Encoding Scheme, which encodes all of the classes by mapping them to bits. With this scheme, Bitype treats the classes and their safely convertible classes as codes and verifies typecasts with an xor operation, decreasing both the performance overhead of checks and the memory overhead. Besides, we implement a Clang tool to avoid repeated collection of inheritance relationships and deploy a two-level lookup table to trace objects efficiently. Evaluated on the SPEC CPU2006 benchmarks and the Firefox browser, Bitype shows slightly higher typecasting coverage than the state-of-the-art HexType [22], but reduces the performance overhead by 2 to 16 times, the memory overhead by 2 to 3 times, and the compilation time by 21 to 223 times. As a result, our solution is a practical and efficient typecasting checker for commodity software.
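
A hedged Python toy of the idea behind a safe encoding scheme: give every class a bit, precompute for each runtime type the set of types it may safely be cast to as an OR of those bits, and verify a cast with a single mask test. This is a conceptual illustration only, not Bitype's actual encoding or its xor-based check.

```python
BITS = {"Shape": 1 << 0, "Circle": 1 << 1, "Square": 1 << 2}

# Safe-cast sets: an object whose runtime type is Circle may be used as a
# Circle or a Shape, but never as a Square.
SAFE = {
    "Circle": BITS["Circle"] | BITS["Shape"],
    "Square": BITS["Square"] | BITS["Shape"],
    "Shape":  BITS["Shape"],
}

def checked_cast(runtime_type: str, target_type: str) -> bool:
    return bool(SAFE[runtime_type] & BITS[target_type])   # one AND, one test

assert checked_cast("Circle", "Shape")        # upcast: allowed
assert not checked_cast("Shape", "Circle")    # bad downcast: type confusion
```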