Bibliography
Security has always been a major issue in cloud computing. Data sources are among the most valuable and most vulnerable assets, and they are the primary targets attackers aim to steal. If data is lost, the privacy and security of every cloud user are compromised. Even when a cloud network is secured against external threats, the threat of an internal attacker remains. Internal attackers compromise a vulnerable user node to gain access to the system; because they are connected to the cloud network internally, they can launch attacks while posing as trusted users. Machine learning approaches are widely used for cloud security, but existing machine-learning-based security approaches classify a node as misbehaving based on short-term behavioral data and do not differentiate between a malicious node and a broken node. To address this problem, this paper proposes an Improvised Long Short-Term Memory (ILSTM) model that learns a user's behavior, trains itself automatically, and stores the behavioral data. The model can readily classify user behavior as normal or abnormal. The proposed ILSTM not only identifies an anomalous node but also determines, using a calculated trust factor, whether a misbehaving node is a broken node, a new user node, or a compromised node. The proposed model detects attacks accurately while also reducing false alarms in the cloud network.
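As a rough illustration of the kind of sequence model this abstract describes (not the authors' ILSTM architecture), the sketch below classifies fixed-length windows of per-user behavioral features as normal or abnormal with a standard Keras LSTM; the window length, feature count, layer sizes, and trust-factor rule are all assumptions made for the example.

```python
# Minimal sketch, assuming fixed-length windows of numeric per-user behavioral
# features (e.g. request rates, resource-access counts). Not the paper's ILSTM.
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW, FEATURES = 50, 8           # hypothetical: 50 time steps, 8 features per step

model = Sequential([
    Input(shape=(WINDOW, FEATURES)),
    LSTM(64),                       # summarize the behavioral sequence
    Dense(1, activation="sigmoid")  # P(abnormal behavior)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, ...)  with X_train shaped (n, WINDOW, FEATURES), y in {0, 1}

def trust_factor(abnormal_probs):
    """Toy trust score: long-run average probability of normal behavior."""
    return float(np.mean(1.0 - np.asarray(abnormal_probs)))
```

A persistently low trust factor could then flag a compromised node, while an abrupt drop from a previously high score might indicate a broken node; the paper's actual thresholds and trust-factor computation are not reproduced here.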
Mission assurance requires effective, near-real-time defensive cyber operations that respond appropriately to cyber attacks without significantly impacting operations. The ability to rapidly compute, prioritize and execute network-based courses of action (CoAs) relies on accurate situational awareness and mission-context information. Although diverse solutions exist for automatically collecting and analysing infrastructure data, few deliver automated analysis and implementation of network-based CoAs in the context of the ongoing mission. In addition, such processes can be operator-intensive, and available tools tend to be specific to a set of common data sources and network responses. To address these issues, Defence Research and Development Canada (DRDC) is leading the development of the Automated Computer Network Defence (ARMOUR) technology demonstrator and cyber defence science and technology (S&T) platform. ARMOUR integrates new and existing off-the-shelf capabilities to provide enhanced decision support and to automate many of the tasks currently executed manually by network operators. This paper describes the cyber defence integration framework, situational awareness, and automated mission-oriented decision support that ARMOUR provides.
In a continually evolving cyber-threat landscape, the detection and prevention of cyber attacks has become a complex task. Technological developments have led organisations to digitise the majority of their operations. This practice, however, has its perils, since cyberspace offers a new attack surface. Institutions tasked with protecting organisations from these threats rely mainly on network data, and their incident-response strategies remain oblivious to the organisation's needs when it comes to protecting operational aspects. This paper presents a system able to combine threat-intelligence data, attack-trend data and organisational data (along with other available data sources) in order to achieve automated network-defence actions. Our approach combines machine learning, visual analytics and information from business processes to guide the decision-making process in a Security Operations Centre environment. We test our system on two synthetic scenarios and show that correlating network data with non-network data for automated network defences is possible and worth investigating further.
Open Science Big Data is emerging as an important area of research and software development. Although there are several high-quality frameworks for Big Data, additional capabilities are needed for Open Science Big Data. These include data provenance, citable reusable data, data sources providing links to research literature, relationships to other data and theories, transparent analysis/reproducibility, data privacy, new optimizations/advanced algorithms, data curation, and data storage and transfer. An important part of science is the explanation of results, ideally leading to theory formation. In this paper, we examine means for supporting the use of theory in big data analytics as well as using big data to assist in theory formation. One approach is to fit data in a way that is compatible with some theory, existing or new. Functional Data Analysis allows precise fitting of data as well as penalties for lack of smoothness or even departure from theoretical expectations. This paper discusses principal differential analysis and related techniques for fitting data where, for example, a time-based process is governed by an ordinary differential equation. Automation in theory formation is also considered, and case studies from computational economics and finance are presented.
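To make the principal differential analysis idea concrete, the standard FDA formulation (sketched here in general terms, not as this paper's specific model) estimates a linear differential operator whose solutions the observed curves should approximately satisfy, and then penalizes fitted curves for departing from it:

\[
L x(t) = D^{m} x(t) + \sum_{j=0}^{m-1} \beta_j(t)\, D^{j} x(t),
\qquad
\min_{\beta_0,\dots,\beta_{m-1}} \sum_{i=1}^{N} \int \bigl[L x_i(t)\bigr]^2 \, dt ,
\]

with the individual curves recovered from noisy observations $y_{ik} \approx x_i(t_{ik})$ by penalized smoothing,

\[
\sum_{k} \bigl(y_{ik} - x_i(t_{ik})\bigr)^2 + \lambda \int \bigl[L x_i(t)\bigr]^2 \, dt ,
\]

so that a larger $\lambda$ pulls the fit toward exact solutions of the estimated ordinary differential equation, i.e. toward the theoretical expectation mentioned in the abstract.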
Using heterogeneous clouds has been considered as a way to improve the performance of big-data analytics for healthcare platforms. However, the delay incurred when transferring big data over the network needs to be addressed. The purpose of this paper is to analyze and compare existing cloud computing environments (PaaS, IaaS) in order to implement middleware services. Understanding the differences and similarities between cloud technologies will help in the interconnection of healthcare platforms. The paper provides a general overview of the techniques and interfaces for cloud computing middleware services and proposes a cloud architecture for healthcare. Cloud middleware enables heterogeneous devices to act as data sources and to integrate data from other healthcare platforms, but specific APIs need to be developed. Furthermore, security and management problems need to be addressed, given the heterogeneous nature of the communication and computing environment. The present paper fills a gap in the electronic healthcare register literature by providing an overview of cloud computing middleware services and standardized interfaces for integration with medical devices.
Complexity is ever increasing within our information environment and organisations, as interdependent dynamic relationships within sociotechnical systems result in high variety and uncertainty arising from a lack of information or control. A net-centric approach is a strategy to improve information value, enabling stakeholders to extend their reach to additional data sources, share Situational Awareness (SA), synchronise effort and optimise resource use to deliver maximum (or proportionate) effect in support of goals. This paper takes a systems perspective to understand the dynamics within a net-centric information system. It presents the first stages of the Soft Systems Methodology (SSM), used to develop a conceptual model of the human activity system and a system dynamics model representing system behaviour, which will inform future research into a net-centric approach to information security. Our model supports the net-centric hypothesis that participation in an information-sharing community extends information reach and improves organisational SA, allowing proactive action to mitigate vulnerabilities and reduce overall risk within the community. The system dynamics model provides organisations with tools to better understand the value of a net-centric approach, a framework to determine their own maturity, and a means to evaluate strategic relationships with collaborative communities.
Data is one of the most valuable assets for an organization. It can help users and organizations meet diverse goals, ranging from scientific advances to business intelligence. Due to the tremendous growth of data, the notion of big data has gained considerable momentum in recent years. Cloud computing is a key technology for storing, managing and analyzing big data. However, such large, complex, and growing data, typically collected from various data sources such as sensors and social media, can often contain personally identifiable information (PII), and the organizations collecting the big data may therefore want to protect their outsourced data from the cloud. In this paper, we survey our research towards the development of efficient and effective privacy-enhancing (PE) techniques for the management and analysis of big data in cloud computing. We propose initial approaches to address two important PE applications: (i) privacy-preserving data management and (ii) privacy-preserving data analysis in the cloud environment. Additionally, we point out research issues that still need to be addressed to develop comprehensive solutions to the problem of effective and efficient privacy-preserving use of data.
In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities, with applications ranging from industrial control on a factory floor to reconnaissance along a hostile border. A typical WSN topology, applicable to most applications, allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary seeking the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques such as evidence theory to identify the BS from network traffic flows even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to improve BS anonymity in order to protect the identity, role, and location of the BS. Many traffic analysis countermeasures have been proposed in the literature, but they are typically evaluated on data traffic only, without considering the effects of network synchronization on anonymity performance. In this paper we use evidence theory analysis to examine the effects of WSN synchronization on BS anonymity by studying two commonly used protocols, Reference Broadcast Synchronization (RBS) and the Timing-sync Protocol for Sensor Networks (TPSN).
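For readers unfamiliar with evidence theory, the core operation such traffic analysis relies on is Dempster's rule of combination, which fuses independent bodies of evidence about which node is the BS. The sketch below is a generic illustration with made-up node names and mass values, not the analysis performed in the paper.

```python
# Hedged sketch of Dempster's rule of combination (evidence theory).
# Mass functions map sets of candidate base-station nodes to belief mass;
# node names and mass values below are illustrative only.
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb        # mass falling on the empty set
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

# Evidence derived from two observed traffic flows:
m_flow1 = {frozenset({"n3"}): 0.6, frozenset({"n3", "n7"}): 0.4}
m_flow2 = {frozenset({"n3", "n7"}): 0.7, frozenset({"n9"}): 0.3}
print(combine(m_flow1, m_flow2))      # belief concentrates on n3 and {n3, n7}
```

Synchronization traffic such as RBS or TPSN message exchanges gives an adversary additional flows to feed into this kind of combination, which is why the paper studies its effect on BS anonymity.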