Biblio
Nowadays, private corporations and public institutions are dealing with constant and sophisticated cyberthreats and cyberattacks. As a general warning, organizations must build and develop a cybersecurity culture and awareness in order to defend against cybercriminals. Information Technology (IT) and Information Security (InfoSec) audits that were efficient in the past are now converging into cybersecurity audits to address the cyberthreats, cyber risks and cyberattacks that evolve in an aggressive cyber landscape. However, the growing number and complexity of cyberattacks and the convoluted cyberthreat landscape are challenging existing cybersecurity audit models and exposing the critical need for a new one. This article reviews the best practices and methodologies of global leaders in the cybersecurity assurance and audit arena. Through an analysis of the current approaches and theoretical background, their real scope, strengths and weaknesses are highlighted, looking toward a more efficient and cohesive synthesis. As a result, this article presents an original and comprehensive cybersecurity audit model as a proposal to be utilized for conducting cybersecurity audits in organizations and nation states. The CyberSecurity Audit Model (CSAM) evaluates and validates audit, preventive, forensic and detective controls across all organizational functional areas. CSAM has been tested, implemented and validated along with the Cybersecurity Awareness TRAining Model (CATRAM) in a Canadian higher education institution. A research case study is being conducted to validate both models, and the findings will be published accordingly.
Deep neural networks (DNNs) have demonstrated success in multiple domains. However, DNN models are inherently vulnerable to adversarial examples, which are generated by adding adversarial perturbations to benign inputs to fool the model into misclassifying them. In this paper, we present a cross-layer strategic ensemble framework and a suite of robust defense algorithms, which are attack-independent and capable of auto-repairing and auto-verifying the target model being attacked. Our strategic ensemble approach makes three original contributions. First, we employ input-transformation diversity to design the input-layer strategic transformation ensemble algorithms. Second, we utilize model-disagreement diversity to develop the output-layer strategic model ensemble algorithms. Finally, we create an input-output cross-layer strategic ensemble defense that strengthens defensibility by combining diverse input-transformation-based model ensembles with diverse output verification model ensembles. Evaluated against 10 attacks on the ImageNet dataset, we show that our strategic ensemble defense algorithms achieve high defense success rates and are more robust, with high attack prevention success rates and low benign false negative rates, compared to existing representative defenses.
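The following is a minimal sketch of the cross-layer idea under stated assumptions (toy transformations and stand-in "models"; the paper's actual transformation, verification and ensemble-selection strategies are more elaborate): diverse input transformations feed a pool of diverse models, and a prediction is accepted only when the ensemble agrees, otherwise the input is flagged as suspect.

```python
# Toy cross-layer ensemble: input-layer transformation diversity combined with
# output-layer model diversity; disagreement triggers rejection of the input.
import numpy as np
from collections import Counter

def identity(x):  return x
def hflip(x):     return x[:, ::-1, :]                                  # horizontal flip
def add_noise(x): return np.clip(x + np.random.normal(0, 0.02, x.shape), 0, 1)
def quantize(x):  return np.round(x * 16) / 16                          # color-depth reduction
TRANSFORMS = [identity, hflip, add_noise, quantize]

# stand-in classifiers; in practice these would be independently trained DNNs
MODELS = [lambda x: int(x.mean() * 10) % 3,
          lambda x: int(x.std() * 100) % 3]

def ensemble_predict(x, min_agreement=0.6):
    votes = [m(t(x)) for t in TRANSFORMS for m in MODELS]
    label, count = Counter(votes).most_common(1)[0]
    # None signals a suspected adversarial input (insufficient agreement)
    return label if count / len(votes) >= min_agreement else None

print(ensemble_predict(np.random.rand(32, 32, 3)))
```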
In January 2017, encrypted Internet traffic surpassed non-encrypted traffic. Although encryption increases security, it also masks intrusions and attacks by blocking access to packet contents and traffic features, making data analysis unfeasible. In spite of this strong effect, the impact of encryption has been scarcely investigated in the field. In this paper we study how encryption affects flow feature spaces and machine learning-based attack detection. We propose a new cross-layer feature vector that simultaneously represents traffic at three different levels: application, conversation, and endpoint behavior. We analyze its behavior under TLS and IPsec encryption and evaluate its efficacy on recent network traffic datasets using Random Forest classifiers. The cross-layer multi-key approach shows excellent attack detection in spite of TLS encryption. When IPsec is applied, the reduced variant obtains satisfactory detection for botnets, yet shows considerable performance drops for other types of attacks. The high complexity of network traffic makes monolithic data analysis solutions unfeasible, therefore requiring cross-layer analysis, for which the multi-key vector becomes a powerful profiling core.
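A minimal sketch of the kind of pipeline described, not the paper's exact feature set: a "cross-layer" vector is built by concatenating hypothetical application-, conversation- and endpoint-level features per flow, then fed to a Random Forest classifier as in the evaluation above.

```python
# Synthetic example: concatenate per-flow feature groups and train a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cross_layer_vector(app_feats, conv_feats, endpoint_feats):
    # hypothetical feature groups, e.g. TLS record sizes, flow durations,
    # per-host connection counts; widths and names are illustrative only
    return np.concatenate([app_feats, conv_feats, endpoint_feats])

rng = np.random.default_rng(0)
X = np.stack([cross_layer_vector(rng.random(8), rng.random(6), rng.random(4))
              for _ in range(1000)])                 # 1000 toy flows
y = rng.integers(0, 2, size=1000)                    # 0 = benign, 1 = attack

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```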
Nowadays, web applications are everywhere. These applications are usually developed as database programs, often written in popular host programming languages such as C, C++, C#, Java, etc., with embedded Structured Query Language (SQL). They are used to access and process crucial data with the help of a Database Management System (DBMS). Preserving sensitive data from any kind of attack is one of the prime responsibilities of web applications, and SQL injection attacks are among the most important security threats they face. In this paper, we propose a code-based analysis approach to automatically detect and prevent possible SQL Injection Attacks (SQLIA) in a query before submitting it to the underlying database. The approach analyses the user input by assigning a complex number to each input element. It has two parts: (i) input clustering and (ii) safe (non-malicious) input identification. We provide a detailed discussion of the proposal with respect to the literature, from both security and execution-overhead points of view.
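A loose, hypothetical illustration of the idea (not the authors' exact encoding or clustering): each user input is mapped to a complex number whose real part reflects ordinary characters and whose imaginary part reflects SQL metacharacters, and inputs with a large imaginary component are treated as potentially malicious.

```python
# Toy complex-number encoding of user inputs for SQL-injection screening.
SQL_TOKENS = ("'", '"', ";", "--", "/*", "*/", " or ", " union ", " select ", "=")

def encode(user_input: str) -> complex:
    text = user_input.lower()
    benign_weight = sum(ch.isalnum() or ch.isspace() for ch in text)
    suspicious_weight = sum(text.count(tok) for tok in SQL_TOKENS)
    return complex(benign_weight, suspicious_weight)

def is_safe(user_input: str, threshold: int = 1) -> bool:
    # the "cluster boundary" (threshold) is illustrative, not from the paper
    return encode(user_input).imag < threshold

print(is_safe("Alice"))          # True: ordinary input
print(is_safe("' OR 1=1 --"))    # False: injection-like input
```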
The dynamicity and complexity of clouds highlight the importance of automated root cause analysis solutions for explaining what might have caused a security incident. Most existing works focus on either locating malfunctioning cloud components, e.g., switches, or tracing changes at lower abstraction levels, e.g., system calls. On the other hand, a management-level solution can provide a big picture of the root cause in a more scalable manner. In this paper, we propose DOMINOCATCHER, a novel provenance-based solution for explaining the root cause of security incidents in terms of management operations in clouds. Specifically, we first define our provenance model to capture the interdependencies between cloud management operations, virtual resources and inputs. Based on this model, we design a framework to intercept cloud management operations and to extract and prune provenance metadata. We implement DOMINOCATCHER on the OpenStack platform as an attached middleware and validate its effectiveness using security incidents based on real-world attacks. We also evaluate its performance through experiments on our testbed, and the results demonstrate that DOMINOCATCHER incurs insignificant overhead and is scalable for clouds.
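A conceptual sketch of a provenance graph over cloud management operations (DOMINOCATCHER's actual model, interception and pruning are more elaborate; the operations below are hypothetical): nodes are operations and resources, edges record dependencies, and root-cause candidates for an incident are its ancestors in the graph.

```python
# Toy provenance graph: which management operations led to a security incident?
import networkx as nx

G = nx.DiGraph()
# hypothetical intercepted operations and the resources they affect
G.add_edge("create_port(net1)", "port_p1")
G.add_edge("update_sg_rule(sg1, allow 0.0.0.0/0)", "sg1")
G.add_edge("port_p1", "attach_port(vm1, p1)")
G.add_edge("sg1", "attach_port(vm1, p1)")
G.add_edge("attach_port(vm1, p1)", "incident:vm1_exposed")

# every operation/resource the incident's provenance depends on
print(sorted(nx.ancestors(G, "incident:vm1_exposed")))
```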
Internet Service Providers (ISPs) have an economic and operational interest in detecting malicious network activity relating to their subscribers. However, it is unclear what kind of traffic data an ISP has available for cyber-security research, and under which legal conditions it can be used. This paper gives an overview of the challenges posed by legislation and of the data sources available to a European ISP. DNS and NetFlow logs are identified as relevant data sources and the state of the art in anonymization and fingerprinting techniques is discussed. Based on legislation, data availability and privacy considerations, a practically applicable anonymization policy is presented.
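An illustrative sketch of two building blocks such an anonymization policy might combine (the paper's actual policy and key management are not reproduced here): keyed-hash pseudonymization of subscriber IPs plus truncation of the host part before storage.

```python
# Toy anonymization primitives for DNS/NetFlow logs.
import hmac, hashlib, ipaddress

SECRET_KEY = b"rotate-me-regularly"      # hypothetical per-period key

def pseudonymize_ip(ip: str) -> str:
    # stable pseudonym; not reversible without the key
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()[:16]

def truncate_ip(ip: str) -> str:
    # drop the host part: 192.0.2.55 -> 192.0.2.0
    return str(ipaddress.ip_network(ip + "/24", strict=False).network_address)

print(pseudonymize_ip("192.0.2.55"), truncate_ip("192.0.2.55"))
```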
Measuring software complexity is key to managing the software lifecycle and controlling its maintenance. While there are well-established and comprehensive metrics to measure the complexity of software code, assessment of the complexity of software designs remains elusive. Moreover, there are no clear guidelines to help software designers choose alternatives that reduce design complexity, improve design comprehensibility, and improve the maintainability of the software. This paper outlines a language-independent approach to measuring software design complexity using objective and deterministic metrics. The paper outlines the metrics for two major software design notations: UML Class Diagrams and UML State Machines. The approach is based on the analysis of the design elements and their mutual interactions. It can be extended to cover other UML design notations.
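An illustrative toy score only, not the paper's metrics: a design-complexity value for a UML Class Diagram computed from element counts and the mutual interactions (associations, generalizations) of each class, with hypothetical weights.

```python
# Toy design-complexity score for a small class-diagram extract.
classes = {   # hypothetical model: attribute/operation counts and relationships
    "Order":    {"attrs": 4, "ops": 6, "assoc": ["Customer", "LineItem"], "parents": []},
    "Customer": {"attrs": 5, "ops": 3, "assoc": ["Order"],               "parents": []},
    "LineItem": {"attrs": 3, "ops": 2, "assoc": ["Order"],               "parents": ["Item"]},
}

WEIGHTS = {"attrs": 1.0, "ops": 1.5, "assoc": 2.0, "parents": 2.5}  # illustrative weights

def class_complexity(c):
    return (WEIGHTS["attrs"] * c["attrs"] + WEIGHTS["ops"] * c["ops"]
            + WEIGHTS["assoc"] * len(c["assoc"]) + WEIGHTS["parents"] * len(c["parents"]))

print({name: class_complexity(c) for name, c in classes.items()})
print("design total:", sum(class_complexity(c) for c in classes.values()))
```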
With the advent of cloud computing, a new era of computing has come into existence. There are undoubtedly numerous advantages associated with cloud computing, but there is another side to the picture as well. The associated challenges demand more convincing answers as far as the security of data at rest, in process and in transit is concerned. This paper puts forth a cloud computing model that tries to answer these data security questions in terms of four cryptographic techniques, namely Homomorphic Encryption (HE), Verifiable Computation (VC), Secure Multi-Party Computation (SMPC) and Functional Encryption (FE). The paper takes these cryptographic techniques into account to address cloud computing security issues, and surveys these important existing cryptographic tools/techniques through a proposed cloud computation model that can be used for Big Data applications. Further, the cryptographic tools are considered in terms of the CIA triad and are then analyzed by comparing them on the basis of certain parameters of concern.
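A toy illustration of one of the surveyed techniques, SMPC via additive secret sharing: three parties learn the sum of their private inputs without any party seeing another's value. The modulus and sharing scheme are illustrative, not taken from the paper.

```python
# Additive secret sharing: joint sum without revealing individual inputs.
import secrets

P = 2**61 - 1                          # public prime modulus

def share(secret, n=3):
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts                       # any n-1 shares reveal nothing about secret

inputs = [42, 7, 100]                  # each party's private value
shares = [share(v) for v in inputs]
# party i locally sums the i-th share of every input; partial sums are then combined
partial_sums = [sum(s[i] for s in shares) % P for i in range(3)]
print(sum(partial_sums) % P)           # 149, the joint sum, revealed to all parties
```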
With the advancement of computing and communication technologies, data transmissions over the Internet are getting bigger and faster. However, it is necessary to secure the data to prevent fraud and crime over the Internet. Furthermore, much statistics-related data, such as weather, health and financial data, needs to be analyzed securely. This paper presents an implementation of cloud security using homomorphic encryption for data analytics in the cloud. We apply homomorphic encryption, which allows the data to be processed without being decrypted. Experimental results show that, for polynomial degrees 2^6, 2^8, and 2^10, the total execution times are 2.2 ms, 4.4 ms, and 25 ms per data item, respectively. The implementation is useful for big data security, such as for environmental, financial and hospital data analytics.
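A minimal, deliberately insecure Paillier toy to illustrate the additively homomorphic property of computing on encrypted data; the paper's implementation (with polynomial degrees 2^6 to 2^10) uses a different, lattice-based scheme, so this is only a conceptual stand-in.

```python
# Toy Paillier cryptosystem: the product of ciphertexts decrypts to the sum of plaintexts.
import math, secrets

p, q = 1789, 1867                       # toy primes; real keys are ~2048-bit
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                    # valid because the generator g = n + 1

def encrypt(m):
    r = secrets.randbelow(n - 2) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(25), encrypt(17)
print(decrypt((c1 * c2) % n2))          # 42: the sum, computed without decrypting inputs
```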
In cyber-physical systems, cybersecurity and data privacy are among the most critical considerations when dealing with communications, processing, and storage of data. Geospatial data and medical data are examples of big data that require seamless integration with computational algorithms as outlined in Industry 4.0, towards adoption of the fourth industrial revolution. Healthcare Industry 4.0 is an application of the design principles of Industry 4.0 to the medical domain. Mobile applications are now widely used to accomplish important business functions in almost all industries. These mobile devices, however, are resource-poor and have proved insufficient for many important medical applications. Resource-rich cloud services are used to augment poor mobile device resources for data- and compute-intensive applications in the mobile cloud computing paradigm. However, the performance of cloud services is undesirable for data-intensive, latency-sensitive mobile applications due to the increased hop count between the mobile device and the cloud server. Cloudlets are virtual machines hosted on servers placed near the mobile device and offer an attractive alternative to mobile cloud computing in the form of mobile edge computing. This paper outlines the cybersecurity and data privacy aspects of communicating measured patient data from wearable wireless biosensors to a nearby cloudlet host server, in order to facilitate cloudlet-based preliminary and essential complex analytics on medical big data.
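A hedged sketch of one communication-security building block implied above: authenticated encryption (AES-GCM) of a biosensor reading before it leaves the wearable for the cloudlet. Key provisioning, device identity and the cloudlet-side analytics are out of scope and would be design-specific; the reading fields are hypothetical.

```python
# Encrypt-then-send a sensor reading with AES-GCM (confidentiality + integrity).
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # assumed pre-shared with the cloudlet
aesgcm = AESGCM(key)

reading = json.dumps({"sensor": "ecg-01", "bpm": 72, "ts": 1700000000}).encode()
nonce = os.urandom(12)                          # must be unique per message
ciphertext = aesgcm.encrypt(nonce, reading, b"patient-42")   # AAD binds the patient id

# cloudlet side: decrypt and verify integrity before running analytics
print(aesgcm.decrypt(nonce, ciphertext, b"patient-42"))
```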