Biblio
Cloud computing (CC) systems have become the prevailing computational paradigm for offering highly scalable and elastic services. Computing resources in a cloud environment must be scheduled so that providers utilize their resources efficiently while users obtain applications at low cost. The most prominent requirement in job scheduling is to ensure Quality of Service (QoS) for the user. Because scheduling takes place within the boundary of a third party, assuring its security is a significant concern. The main objective of our work is to offer QoS, i.e., low cost, short makespan, and minimized task migration, together with security enforcement; moreover, the proposed algorithm guarantees that admitted requests are executed without violating the service level agreement (SLA). These objectives are attained by the proposed Fuzzy Ant Bee Colony algorithm. The experimental results confirm that the proposed algorithm achieves secure job scheduling with assured QoS.
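As a minimal illustration of the two QoS objectives named above, the following sketch evaluates the makespan and cost of a candidate task-to-VM assignment; the task lengths, VM speeds, and prices are hypothetical, and the Fuzzy Ant Bee Colony search itself is not reproduced here.

```python
# Minimal sketch: evaluating makespan and cost of a candidate schedule.
# All numbers are illustrative assumptions, not values from the paper;
# the FABC search over candidate schedules is not shown.

def evaluate(schedule, task_len, vm_speed, vm_price):
    """schedule[i] = index of the VM that task i is assigned to."""
    finish = [0.0] * len(vm_speed)          # busy time accumulated per VM
    cost = 0.0
    for task, vm in enumerate(schedule):
        run_time = task_len[task] / vm_speed[vm]
        finish[vm] += run_time              # tasks on a VM run sequentially
        cost += run_time * vm_price[vm]     # pay-per-use cost
    makespan = max(finish)                  # completion time of the busiest VM
    return makespan, cost

task_len = [400, 250, 900, 120, 600]        # million instructions (hypothetical)
vm_speed = [100, 250]                       # MIPS per VM (hypothetical)
vm_price = [0.02, 0.06]                     # cost per unit time (hypothetical)

makespan, cost = evaluate([0, 1, 1, 0, 1], task_len, vm_speed, vm_price)
print(f"makespan={makespan:.2f}, cost={cost:.2f}")
```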
With advancements in technology, the ease of interconnection among devices has increased manifold, leading to the widespread adoption of the Internet of Things. The Internet of Things has also reached our homes, often referred to as the domestic Internet of Things. However, the security of the domestic Internet of Things has largely been in question, as the increase in inter-device communication renders the system more vulnerable to adversaries. The widely popular blockchain technology is being extensively researched for integration into the Internet of Things framework in order to improve its security. Blockchain, being a cryptographically linked set of data, has a few barriers which prevent it from being successfully integrated into the Internet of Things. One of the major barriers is the high computational requirement and time latency associated with it. This work addresses this research gap and proposes a novel scalable blockchain optimization for the domestic Internet of Things. The proposed blockchain model uses a flow-based filtering technique as an added security layer to facilitate the scenario. The work then evaluates the performance of the proposed model in various scenarios and compares it with that of a traditional blockchain, presenting a broad evaluation, explanation and assessment of the proposed model.
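The abstract does not specify the filtering rules, so the following sketch only illustrates the general idea of a flow-based filter placed in front of the ledger: transactions whose (source, destination, port) flow is not on a whitelist are rejected before they reach the blockchain. The devices, ports, and transaction format are assumptions.

```python
# Hedged sketch: a flow-based filter in front of a (domestic IoT) ledger.
# A "flow" is reduced to (source device, destination device, port);
# only whitelisted flows are forwarded for inclusion in a block.
# The whitelist and transaction shape are illustrative assumptions.

from collections import namedtuple

Flow = namedtuple("Flow", ["src", "dst", "port"])

ALLOWED_FLOWS = {                        # hypothetical smart-home policy
    Flow("thermostat", "hub", 8883),     # MQTT over TLS
    Flow("door_lock", "hub", 8883),
    Flow("hub", "cloud_gateway", 443),
}

def filter_transactions(transactions):
    """Keep only transactions whose flow matches the whitelist."""
    accepted, rejected = [], []
    for tx in transactions:
        flow = Flow(tx["src"], tx["dst"], tx["port"])
        (accepted if flow in ALLOWED_FLOWS else rejected).append(tx)
    return accepted, rejected

txs = [
    {"src": "thermostat", "dst": "hub", "port": 8883, "payload": "21.5C"},
    {"src": "unknown_cam", "dst": "cloud_gateway", "port": 23, "payload": "..."},
]
ok, dropped = filter_transactions(txs)
print(len(ok), "accepted,", len(dropped), "rejected")
```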
This work considers the trade-off between security and performance when revealing partial information about encrypted data while computing on it. The focus of our work is on information revealed through control-flow side channels when executing programs on encrypted data. We use quantitative information flow to measure security, running time to measure performance, and program transformation techniques to alter the trade-off between the two. Combined with information flow policies, we perform a policy-aware security and performance trade-off (PASAPTO) analysis. We formalize the problem of PASAPTO analysis as an optimization problem, prove the NP-hardness of the corresponding decision problem and present two algorithms solving it heuristically. We implemented our algorithms and combined them with the Dataflow Authentication (DFAuth) approach for outsourcing sensitive computations. Our DFAuth Trade-off Analyzer (DFATA) takes Java Bytecode operating on plaintext data and an associated information flow policy as input. It outputs semantically equivalent program variants operating on encrypted data which are policy-compliant and approximately Pareto-optimal with respect to leakage and performance. We evaluated DFATA in a commercial cloud environment using Java programs, e.g., a decision tree program performing machine learning on medical data. The decision tree variant with the worst performance is 357% slower than the fastest variant. Leakage varies between 0% and 17% of the input.
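As an illustration of the kind of output DFATA targets, the sketch below selects the approximately Pareto-optimal points from a set of program variants scored by leakage and running time; the variant names and scores are invented, and the PASAPTO heuristics themselves are not reproduced.

```python
# Hedged sketch: picking the Pareto front over (leakage, running time).
# A variant is dominated if another variant is no worse in both
# dimensions and strictly better in at least one.
# The variants and their scores are invented for illustration only.

def pareto_front(variants):
    front = []
    for name, leak, time in variants:
        dominated = any(
            (l2 <= leak and t2 <= time) and (l2 < leak or t2 < time)
            for _, l2, t2 in variants
        )
        if not dominated:
            front.append((name, leak, time))
    return front

variants = [
    ("v_oblivious", 0.00, 4.57),   # fully data-oblivious, slowest
    ("v_partial",   0.08, 1.90),
    ("v_fast",      0.17, 1.00),   # reveals most control flow, fastest
    ("v_bad",       0.17, 2.30),   # dominated by v_fast
]
print(pareto_front(variants))
```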
Security is an integral aspect of developing quality object-oriented software. During that process, developers face a constant pressure to maximize development output while reducing the expense and time invested in security. In this paper, the authors analyze metrics for object-oriented software in order to evaluate and identify the relation between metric values and the security of the software. These relations were identified by studying software vulnerabilities together with code-level metrics. Using the OWASP classification of vulnerabilities and experimental results, we show that there is a relation between metric values and possible security issues in software. For the experimental code analysis, we developed special software called SOFTMET.
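A minimal sketch of the analytical step described above, relating per-class metric values to vulnerability counts; the class names, metric values, vulnerability counts, and the choice of Spearman rank correlation are assumptions for illustration and do not reflect SOFTMET's internals.

```python
# Hedged sketch: relating code-level metric values to vulnerability counts.
# The per-class numbers and the use of Spearman rank correlation are
# illustrative assumptions, not the procedure implemented in SOFTMET.

from scipy.stats import spearmanr

# Hypothetical per-class data: (weighted methods per class, coupling, vulns)
classes = [
    ("LoginController", 34, 12, 5),
    ("UserRepository",  12,  4, 1),
    ("ReportExporter",  51, 18, 7),
    ("ConfigLoader",     8,  3, 0),
    ("PaymentService",  27, 10, 3),
]

wmc   = [c[1] for c in classes]
cbo   = [c[2] for c in classes]
vulns = [c[3] for c in classes]

for name, metric in (("WMC", wmc), ("CBO", cbo)):
    rho, p = spearmanr(metric, vulns)
    print(f"{name} vs vulnerabilities: rho={rho:.2f}, p={p:.3f}")
```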
Connected devices and their role in our daily and business lives are gaining more and more impact. In addition, various derivations of Cyber-Physical Systems (CPS) are reaching new business fields, such as smart healthcare or Industry 4.0. Although these systems bring many advantages for users by extending the overall functionality of existing systems, they come with several challenges, especially for system engineers and architects. One key challenge consists in achieving a sufficiently high level of security within the CPS environment, as sensitive data or safety-critical functions are often integral parts of CPS. Being systems of systems (SoS), the complexity, unpredictability and heterogeneity of CPS complicate analyzing the overall level of security, as well as providing a way to detect ongoing attacks. Usually, security metrics and frameworks provide an effective tool to measure the level of security of a given component or system. Although several comprehensive surveys exist, the effectiveness of the existing solutions for CPS environments is insufficiently investigated in the literature. In this work, we address this gap by benchmarking a carefully selected variety of existing security metrics in terms of their usability for CPS. Accordingly, we pinpoint critical CPS challenges and qualitatively assess the effectiveness of the existing metrics for CPS.
With the advent of cloud computing, a new era of computing has come into existence. There are undoubtedly numerous advantages associated with cloud computing, but there is another side to the picture as well. The associated challenges demand a more convincing answer as far as the security of data at rest, in process, and in transit is concerned. This paper puts forth a cloud computing model that tries to answer these data security questions in terms of four cryptographic techniques, namely Homomorphic Encryption (HE), Verifiable Computation (VC), Secure Multi-Party Computation (SMPC), and Functional Encryption (FE). The paper takes these cryptographic techniques into account to address cloud computing security issues, and surveys these important existing cryptographic tools and techniques through a proposed cloud computation model that can be used for Big Data applications. Further, the cryptographic tools are considered in terms of the CIA triad and are analyzed by comparing them on the basis of certain parameters of concern.
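To make the homomorphic encryption ingredient concrete, here is a small textbook Paillier sketch (toy key sizes, no hardening) showing that a sum can be computed over ciphertexts; it is illustrative only and is not the model proposed in the paper.

```python
# Hedged sketch: textbook Paillier encryption (additively homomorphic).
# Toy primes and no hardening -- for illustrating the HE property only.

import math, random

def keygen(p, q):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # Carmichael lambda(n)
    mu = pow(lam, -1, n)                                 # valid because g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2      # c = g^m * r^n mod n^2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n                       # L(x) = (x - 1) / n
    return (l * mu) % n

pub, priv = keygen(293, 433)          # toy primes, far too small for real use
c1, c2 = encrypt(pub, 15), encrypt(pub, 27)
c_sum = (c1 * c2) % (pub[0] ** 2)     # multiplying ciphertexts adds plaintexts
assert decrypt(priv, c_sum) == 42
print("Dec(Enc(15) * Enc(27)) =", decrypt(priv, c_sum))
```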
Data mining visualization is an important aspect of big data visualization and analysis. This paper studies the impact of nature-inspired algorithms, together with established computing paradigms, on the visualization of storage and data communication needs. It also explores the possibilities of hybridizing data mining with cloud computing, and examines the data-analytical view of these approaches with respect to data storage in big data. Based on these aspects, the methodological advancements along with the problem statements are analyzed. This will help in exploring computational capability and developing new insights in this domain.
Big data provides a way to handle and analyze large amounts of data or complex data sets, and also supports their systematic extraction. In this paper, a hybrid security analysis based on intelligent adaptive learning in big data is discussed in light of current trends. The paper also explores the possibility of collaboration between cloud computing and big data. The advantages and the impact on overall platform evaluation are discussed against traditional approaches, which is useful for analysis and for exploring future research. The discussion also covers computational variability and its connotations in terms of data reliability, availability, and management in big data, together with data security aspects.
Machine learning algorithms used to detect attacks are limited by the fact that they cannot incorporate the background knowledge that an analyst has, which limits their suitability for detecting new attacks. Reinforcement learning is different from the traditional machine learning algorithms used in the cybersecurity domain. Compared to traditional ML algorithms, reinforcement learning does not need a mapping of the input-output space or a specific user-defined metric to compare data points. This is important for the cybersecurity domain, especially for malware detection and mitigation, as not all problems have a single, known, correct answer; often, security researchers have to resort to guided trial and error to understand the presence of a malware and mitigate it. In this paper, we incorporate prior knowledge, represented as Cybersecurity Knowledge Graphs (CKGs), to guide the exploration of an RL algorithm to detect malware. CKGs capture semantic relationships between cyber-entities, including those mined from open sources. Instead of trying out random guesses and observing the change in the environment, we aim to use verified knowledge about cyber-attacks to guide our reinforcement learning algorithm to effectively identify ways to detect the presence of malicious filenames, so that they can be deleted to mitigate a cyber-attack. We show that such a guided system outperforms a base RL system in detecting malware.
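A heavily simplified sketch of the guiding idea: suspicion scores derived from a knowledge graph seed the agent's value estimates over candidate filenames, so likely malicious files are inspected first. The environment, filenames, prior scores, and the bandit-style update are assumptions; the paper's actual CKG construction and RL formulation are not reproduced.

```python
# Hedged sketch: epsilon-greedy value learning over "which file to inspect next",
# with values initialized from knowledge-graph-derived suspicion scores.
# Environment, filenames and prior scores are illustrative assumptions.

import random

BENIGN = [f"report_{i}.docx" for i in range(18)]
MALWARE = ["svch0st.exe", "invoice.pdf.exe"]              # hidden ground truth
FILES = BENIGN + MALWARE

# Prior suspicion derived (hypothetically) from a Cybersecurity Knowledge Graph,
# e.g. filename similarity to entities linked to known malware campaigns.
CKG_PRIOR = {f: 0.05 for f in BENIGN}
CKG_PRIOR.update({"svch0st.exe": 0.9, "invoice.pdf.exe": 0.8})

def run(episodes=100, alpha=0.5, epsilon=0.1, guided=True):
    q = {f: (CKG_PRIOR[f] if guided else 0.0) for f in FILES}
    hits = 0
    for _ in range(episodes):
        if random.random() < epsilon:
            f = random.choice(FILES)                       # explore
        else:
            f = max(FILES, key=lambda x: q[x])             # exploit current estimate
        reward = 1.0 if f in MALWARE else -0.1             # detection vs wasted inspection
        q[f] += alpha * (reward - q[f])                    # bandit-style value update
        hits += f in MALWARE
    return hits

random.seed(1)
print("guided hits:  ", run(guided=True))
print("unguided hits:", run(guided=False))
```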
A metric and structure of computing for 2020 is proposed in the form of the Top 12 Technology Trends, which will influence investment in science, education and industry in developing countries. The primary social and technological problem of protecting society and critical facilities through the creation of Global Intelligent Cyber Security is formulated, along with axioms for the constructive formation of developing countries on the basis of the adoption of moral relations. Models, methods and algorithms of cyber-social computing are proposed that are focused on processing big data and searching for keywords and text fragments. New characteristic equations of similarity and difference between processes and phenomena are synthesized for exact keyword-based information retrieval in cyber-physical space. A computing model of the development of the Universe is formulated, in which the binary interactions of entities and forms are harmonic functions of the phase state. A structure for interactive computing of the creative process is proposed, based on a metric assessment of development status against world achievements.
In the past decades, learning an effective distance metric between pairs of instances has played an important role in classification and retrieval tasks, for example person identification or malware retrieval in IoT services. The core motivation of recent efforts is to improve the metric forms, and they have already shown promising results in various applications. However, such models often fail to produce a reliable metric on an ambiguous test set. This happens mainly because the sampling process of the training set is not representative of the distribution of the negative samples, especially the examples that are closer to the boundary between different categories (also called hard negative samples). In this paper, we focus on addressing these problems and propose an adaptive margin deep adversarial metric learning (AMDAML) framework. It exploits numerous common negative samples to generate potential hard (adversarial) negatives and applies them to facilitate robust metric learning. Unlike previous approaches, which typically depend on search or data augmentation to find hard negative samples, generating adversarial negative instances avoids the limitations of domain knowledge and of the number of constraint pairs. Specifically, in order to prevent overfitting or underfitting during training, we propose an adaptive margin loss that preserves a flexible margin between the negative (both adversarial and original) and positive samples. We train both the adversarial negative generator and the conventional metric objective simultaneously in an adversarial manner and learn feature representations that are more precise and robust. The experimental results on practical data sets clearly demonstrate the superiority of AMDAML over representative state-of-the-art metric learning models.
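The adaptive margin idea can be sketched as a triplet-style loss whose margin is not a fixed constant; the exact margin function used by AMDAML is not given in the abstract, so the per-sample margin below (a base value plus a term scaled by the anchor-positive distance) is only an assumed placeholder.

```python
# Hedged sketch: a triplet-style metric loss with an adaptive (non-constant) margin.
# The concrete margin function used by AMDAML is not specified in the abstract;
# margin = base + beta * d(anchor, positive) is only an illustrative choice.

import numpy as np

def adaptive_margin_triplet_loss(anchor, positive, negative, base=0.2, beta=0.5):
    """anchor, positive, negative: (batch, dim) embeddings."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)       # distance to positive
    d_neg = np.linalg.norm(anchor - negative, axis=1)       # distance to (adversarial) negative
    margin = base + beta * d_pos                             # margin adapts per sample
    return np.maximum(0.0, d_pos - d_neg + margin).mean()   # hinge over the batch

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
p = a + 0.1 * rng.normal(size=(8, 16))          # positives close to the anchor
n = rng.normal(size=(8, 16))                    # negatives (e.g. generator output)
print("loss:", adaptive_margin_triplet_loss(a, p, n))
```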
Data value (DV) is a novel concept introduced as one of the features of the Big Data phenomenon. While continuing an investigation of the DV ontology and its relationship with data quality (DQ) on the conceptual level, this paper researches possible applications and uses of DV in the practical design of security and privacy protection systems and tools. We present a novel approach to DV evaluation that maps DQ metrics into a DV value. The developed methods allow DV and DQ to be used in a wide range of application domains. To demonstrate the employment of the DQ and DV concepts in real tasks, we present two real-life scenarios. The first use case demonstrates the use of DV in crowdsensing application design; it shows how DV can be calculated by integrating various metrics characterizing data application functionality, accuracy, and security. The second incorporates privacy considerations into the DV calculation by exploring the relationship between privacy, DQ, and DV in the defense against website fingerprinting in The Onion Router (TOR) networks. These examples demonstrate how our methods of DV and DQ evaluation may be employed in the design of real systems with security and privacy considerations.
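One simple way to picture the mapping from DQ metrics to a DV score is a weighted aggregation of normalized quality dimensions; the dimensions, weights, and linear form below are assumptions for illustration and are not the evaluation method developed in the paper.

```python
# Hedged sketch: mapping data-quality (DQ) metrics to a single data-value (DV) score.
# The chosen dimensions, weights and the weighted-sum form are illustrative
# assumptions, not the evaluation method developed in the paper.

def data_value(dq_metrics, weights):
    """dq_metrics and weights: dicts over the same DQ dimensions, metrics in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * dq_metrics[k] for k in weights) / total

# Hypothetical crowdsensing reading scored on three DQ dimensions.
reading_dq = {"accuracy": 0.92, "timeliness": 0.70, "security": 0.85}
app_weights = {"accuracy": 0.5, "timeliness": 0.2, "security": 0.3}

print("DV =", round(data_value(reading_dq, app_weights), 3))
```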
Initially, legitimate users performed all of their activities over the internet using a normal web browser [1]. To obtain a more secure service and protection against bot activity, legitimate users switched from normal web browsers to low-latency anonymous communication such as the Tor Browser. Traffic monitoring in the Tor network is difficult because packets travel from source to destination in encrypted form and the Tor network hides its identity from the destination. Lately, however, even illegitimate users such as attackers and criminals have started operating over the Tor Browser. The secured Tor network makes botnet detection more difficult, and existing botnet detection tools have become inefficient against Tor-based bots because of the features of the Tor Browser. Since the Tor Browser is highly secure, and because of ethical issues, practical experiments on the live network are not advisable, as they could affect its performance and functionality and endanger users in situations where a failure of Tor's anonymity has severe consequences. Therefore, in the proposed research work, Private Tor Networks (PTN) with dedicated resources have been created on physical or virtual machines, along with a Trusted Middle Node. The motivation behind the trusted middle node is to make the Private Tor Network more efficient and to increase its performance.