Bibliography
In the era of Big Data, software systems are affected by growing complexity, with respect to both functional and non-functional requirements. As more and more people use software applications over the web, recognizing whether traffic is malicious or legitimate is a challenge. The traffic load on security controllers, as well as the complexity of the security rules used to detect attacks, can grow to levels where current solutions may not suffice. In this work, we propose a hierarchical distributed architecture for security control that partitions responsibility and workload among many security controllers. In addition, our architecture offers a simplified way of defining security rules, allowing security to be enforced at an operational level rather than a development level.
As an extension of cloud computing, fog computing is proving increasingly useful. Fog computing was introduced to overcome the shortcomings of the cloud computing paradigm in handling the massive traffic caused by the enormous number of Internet of Things devices being connected to the Internet on a daily basis. Despite its advantages, the fog architecture introduces new security and privacy threats that need to be studied and addressed as soon as possible. In this work, we explore two privacy issues posed by the fog computing architecture and define privacy challenges accordingly. The first challenge relates to fog's design goals of reducing latency and improving bandwidth, which existing privacy-preserving methods violate. The second challenge relates to the proximity of fog nodes to end-users and IoT devices. We discuss the importance of addressing these challenges by putting them in the context of real-life scenarios. Finally, we propose a privacy-preserving fog computing paradigm that addresses these challenges, and we assess the security and efficiency of our solution.
Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software's operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation to security vulnerabilities. We aim at understanding: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are the most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable from non-vulnerable code with a high level of accuracy.
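A minimal sketch of the kind of correlation analysis this abstract describes, not the authors' code: Spearman's rank coefficient is one common choice for skewed software-metric distributions, and all metric values and vulnerability counts below are hypothetical.

```python
# Sketch: correlate a software metric with per-file vulnerability counts.
# Data is invented for illustration; the study's dataset is not reproduced here.
from scipy.stats import spearmanr

# Hypothetical per-file records: (cyclomatic complexity, lines of code, #vulns)
files = [
    (4, 120, 0), (25, 900, 3), (12, 340, 1),
    (31, 1500, 5), (7, 200, 0), (18, 620, 2),
]

complexity = [f[0] for f in files]
vulns = [f[2] for f in files]

# Spearman's rank correlation is robust to the non-normal distributions
# that software metrics typically exhibit.
rho, p_value = spearmanr(complexity, vulns)
print(f"complexity vs. vulnerabilities: rho={rho:.2f}, p={p_value:.3f}")
```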
To overcome the current cybersecurity challenges of protecting our cyberspace and applications, we present an innovative cloud-based architecture to offer resilient Dynamic Data Driven Application Systems (DDDAS) as a cloud service, which we refer to as resilient DDDAS as a Service (rDaaS). This architecture integrates the Service Oriented Architecture (SOA) and DDDAS paradigms to offer the next generation of resilient and agile DDDAS-based cyber applications, particularly suited to critical applications such as battle and crisis management. Using the cloud infrastructure to offer resilient DDDAS routines and applications, large-scale DDDAS applications can be developed by users from anywhere, using any device (mobile or stationary) with Internet connectivity. The rDaaS provides transformative capabilities to achieve superior situation awareness (i.e., assessment, visualization, and understanding), mission planning and execution, and resilient operations.
Blockchain is an emerging technology for decentralized and transactional data sharing across a large network of untrusted participants. It enables new forms of distributed software architectures, where components can find agreement on their shared state without trusting a central integration point or any particular participating component. Considering the blockchain as a software connector helps make explicit important architectural considerations about the resulting performance and quality attributes (for example, security, privacy, scalability, and sustainability) of the system. Based on our experience in several projects using blockchain, in this paper we provide rationales to support the architectural decision of whether to employ a decentralized blockchain as opposed to other software solutions, such as traditional shared data storage. Additionally, we explore specific implications of using the blockchain as a software connector, including design trade-offs regarding quality attributes.
Processing smart grid data for analytics purposes brings about a series of privacy-related risks. In order to allow for the most suitable mitigation strategies, reasonable privacy risks need to be addressed by taking into consideration the perspective of each smart grid stakeholder separately. In this context, we use the notion of privacy concerns to reflect potential privacy risks from the perspective of different smart grid stakeholders. Privacy concerns help derive privacy goals, which we represent using the Goal Structuring Notation. Goals represented this way can be addressed more comprehensibly through technical and non-technical strategies and solutions. The thread of argumentation, from concerns to goals to strategies and solutions, is presented in the form of a privacy case, which is analogous to the safety case used in the automotive domain. We provide an exemplar privacy case for the smart grid developed as part of the Aspern Smart City Research project.
Modern frameworks are required to be extensible as well as secure. However, these two qualities are often at odds. In this poster, we describe an approach that combines static analysis and run-time management, based on software architecture models, to improve security while maintaining framework extensibility. We implement a prototype of the approach for the Android platform. Static analysis identifies the architecture and communication patterns among the collection of apps on an Android device and determines which communications might be vulnerable to attack. Run-time mechanisms monitor these potentially vulnerable communication patterns and adapt the system to either deny them, request explicit approval from the user, or allow them.
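A minimal sketch of the run-time side of such an approach, under assumed names: a monitor consults a policy derived from static analysis and decides whether an intercepted inter-app message is denied, escalated to the user, or allowed. The `Message` type and `POLICY` table are hypothetical illustrations, not Android APIs or the authors' implementation.

```python
# Sketch: mediate inter-app communications flagged by (hypothetical) static analysis.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    receiver: str
    action: str

# Policy table assumed to be produced by static analysis of the app collection.
POLICY = {
    ("com.untrusted.app", "com.bank.app"): "deny",
    ("com.social.app", "com.contacts.app"): "ask_user",
}

def mediate(msg: Message) -> str:
    """Return the decision for one intercepted message; default is allow."""
    return POLICY.get((msg.sender, msg.receiver), "allow")

print(mediate(Message("com.untrusted.app", "com.bank.app", "SEND")))  # -> deny
```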
Conventional security mechanisms at the network, host, and source code levels are no longer sufficient for detecting and responding to today's increasingly dynamic and sophisticated cyber threats. Detecting anomalous behavior at the architectural level can help better explain the intent of a threat and strengthen the overall system security posture. To that end, we present a framework that mines software component interactions from system execution history and applies a detection algorithm to identify anomalous behavior. The framework uses unsupervised learning at runtime, can perform fast anomaly detection "on the fly", and can quickly adapt to system load fluctuations and user behavior shifts. Our evaluation of the approach against a real Emergency Deployment System has demonstrated very promising results, showing that the framework can effectively detect covert attacks, including insider threats, that may be easily missed by traditional intrusion detection methods.
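A minimal sketch of the general idea, not the authors' algorithm: score component-to-component interactions by how rarely they appeared in past execution history, and flag rare transitions as anomalous. The events, counts, and threshold below are hypothetical.

```python
# Sketch: frequency-based anomaly scoring over component interaction history.
from collections import Counter

# Hypothetical execution history of (caller, callee) interactions.
history = [("UI", "Auth"), ("Auth", "DB"), ("UI", "Auth"),
           ("Auth", "DB"), ("UI", "Report")] * 200
history += [("Report", "DB")]  # one rare interaction slipped in

counts = Counter(history)
total = sum(counts.values())

def anomaly_score(event):
    """Rarity score in [0, 1]; unseen interactions score 1.0."""
    return 1.0 - counts.get(event, 0) / total

for event in [("UI", "Auth"), ("Report", "DB"), ("DB", "UI")]:
    flagged = anomaly_score(event) > 0.99
    print(event, f"score={anomaly_score(event):.4f}", "ANOMALY" if flagged else "ok")
```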
Federated cloud networks are formed by federating virtual network segments from different clouds, e.g., in a hybrid cloud, into a single federated network. Such networks should be protected with a global federated cloud network security policy. The availability of network function virtualisation and service function chaining in cloud platforms offers an opportunity for implementing and enforcing global federated cloud network security policies. In this paper, we describe an approach for enforcing global security policies in federated cloud networks. The approach relies on a service manifest that specifies the global network security policy. From this manifest, configurations of the security functions for the different clouds of the federation are generated. This enables automated deployment and configuration of network security functions across the different clouds. The approach is illustrated with a case study in which communications between trusted and untrusted clouds, e.g., public clouds, are encrypted. The paper discusses future work on implementing this architecture for the OpenStack cloud platform with the service function chaining API.
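A minimal sketch of the manifest-driven generation step, with all field names and function types assumed rather than taken from the paper: one global policy entry is expanded into per-cloud configurations for the security functions each cloud must deploy.

```python
# Sketch: expand a (hypothetical) global service manifest into per-cloud configs.
manifest = {
    "policy": "encrypt-untrusted-links",
    "clouds": {
        "private-cloud": {"trust": "trusted"},
        "public-cloud": {"trust": "untrusted"},
    },
}

def generate_configs(manifest):
    """Emit one security-function configuration per cloud in the federation."""
    configs = {}
    for name, props in manifest["clouds"].items():
        functions = []
        if props["trust"] == "untrusted":
            # Untrusted clouds get an encryption function on their links.
            functions.append({"type": "ipsec-gateway", "mode": "tunnel"})
        configs[name] = {"policy": manifest["policy"], "functions": functions}
    return configs

for cloud, cfg in generate_configs(manifest).items():
    print(cloud, cfg)
```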
Utility networks are part of every nation's critical infrastructure, and their protection is now seen as a high-priority objective. In this paper, we propose a threat awareness architecture for critical infrastructures, which we believe will raise security awareness and increase resilience in utility networks. We first describe an investigation of trends and threats that may impose security risks on utility networks. This was performed on the basis of a viewpoint approach capable of identifying technical and non-technical issues (e.g., human behaviour). Our analysis indicated that utility networks are strongly affected by technological trends, but that humans constitute an important threat to them. This provided evidence and confirmed that the protection of utility networks is a multi-variable problem that requires examining information stemming from various viewpoints of a network. To accomplish our objective, we propose a systematic threat awareness architecture in the context of a resilience strategy, which ultimately aims at providing and maintaining an acceptable level of security and safety in critical infrastructures. As a proof of concept, we partially demonstrate, via a case study, the application of the proposed threat awareness architecture, examining the potential impact of social engineering attacks on a European utility company.
The purpose of this research is to propose architecture-driven penetration testing equipped with a software reverse and forward engineering process. Although the importance of architectural risk analysis has been emphasized in software security, no methodology has shown how to discover the architecture and abuse cases of a given insecure legacy system and how to modernize it into a secure target system. For this purpose, we propose an architecture-driven penetration testing methodology: the 4+1 architectural views of the given insecure legacy system are documented through a reverse engineering process to discover program paths that lead to vulnerabilities. Vulnerabilities are then identified using the discovered architecture abuse cases, and countermeasures are proposed for the identified vulnerabilities. As a case study, a telecommunication company's Identity Access Management (IAM) system is used for discovering its software architecture, identifying the vulnerabilities of that architecture, and providing possible countermeasures. Our empirical results show that functional suggestions are relatively easy to follow up and less time-consuming to fix, whereas architectural suggestions are more complicated to follow up, even though they guarantee better security and take full advantage of the OAuth 2.0 supporting communities.
Digital forensics refers to the application of scientific techniques in the investigation of a crime, specifically to identify or validate the involvement of a suspect in an activity leading towards that crime. Network forensics deals particularly with the monitoring of network traffic, aiming to trace suspected activity within normal traffic or to identify abnormal patterns that may give clues about an attack. Network forensics, a valuable element of the investigation process, presents certain challenges, including problems in accessing the network devices of cloud architectures, handling large amounts of network traffic, and the rigorous processing required to analyse huge volumes of data, a large proportion of which may later prove to be irrelevant. Cloud computing technology offers services to its clients remotely from a shared pool of resources, per each client's customized requirements, any time, from anywhere. Cloud computing has attained tremendous popularity recently, leading to vast and rapid deployment; however, privacy and security concerns have increased at the same rate, since data and applications are outsourced to a third party. Security concerns about cloud architecture have become the prime barrier hindering the industry's major shift towards the cloud model, despite the significant advantages of cloud architecture. Cloud computing architecture presents aggravated and specific challenges for network forensics. In this paper, I review the challenges and issues faced in conducting network forensics, particularly in the cloud computing environment. The study covers the limitations that a network forensic expert may confront during an investigation in a cloud environment. I categorize the challenges presented to network forensics in cloud computing into various groups. Challenges in each group can be handled appropriately by forensic experts, cloud service providers, or forensic tools, whereas the leftover challenges are declared as beyond control.
Based on an analysis of the relationship between challenger and attester in the remote attestation process, we propose a dynamic remote attestation model based on concerns. By combining the trusted root with a dynamic credible monitoring module, the model converts the measurement of all loaded modules in the integrity measurement architecture into attestation of the basic computing environment, the dynamic credible monitoring module, and the requested service software modules. We discuss the rationality of the model. The model uses a Merkle hash tree to store application software integrity metrics, which both protects the privacy of the other party's application software and improves the efficiency of remote attestation. An experimental prototype system shows that the model can verify the dynamic behavior of software, making up for the shortcomings of static measurement.
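A minimal sketch of the Merkle-hash-tree idea the abstract relies on, not the paper's implementation: leaf hashes of per-module integrity metrics are folded pairwise into a single root, so a verifier needs only the root plus a short audit path rather than every module's measurement. The module names below are hypothetical.

```python
# Sketch: build a Merkle root over (hypothetical) module integrity measurements.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

modules = [b"kernel-5.10", b"monitor-module-1.2", b"service-app-3.4", b"libc-2.31"]
print(merkle_root(modules).hex())
```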
Information technology experts cite security and privacy concerns as the major challenges in the adoption of cloud computing. On Platform-as-a-Service (PaaS) clouds, customers face the challenges of selecting service providers and evaluating security implementations based on their security needs and requirements. This study aims to give cloud customers the ability to quantify their security requirements in order to identify critical areas in PaaS cloud architectures where the security provisions offered by CSPs can be assessed. Using an adaptive security mapping matrix, the study takes a quantitative approach to present numeric findings that show critical areas within the PaaS environment where security can be evaluated and security controls assessed against these requirements. The matrix can be adapted across different types of PaaS cloud models based on the individual security requirements and service level objectives identified by PaaS cloud customers.
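A minimal sketch of what a security mapping matrix could look like computationally; the requirement weights, architectural areas, and control scores are all hypothetical, not the study's data: customer requirement weights are multiplied against per-area control scores to surface weakly covered areas.

```python
# Sketch: weighted coverage scoring over a (hypothetical) security mapping matrix.
requirements = {"data-at-rest": 0.4, "identity": 0.35, "network": 0.25}

# Rows: PaaS architectural areas; values: how well the provider's controls
# cover each requirement, scored 0..1 (invented for illustration).
provider_scores = {
    "runtime":  {"data-at-rest": 0.9, "identity": 0.7, "network": 0.8},
    "storage":  {"data-at-rest": 0.5, "identity": 0.9, "network": 0.6},
}

def weighted_coverage(scores):
    """Requirement-weighted coverage for one architectural area."""
    return sum(requirements[r] * scores[r] for r in requirements)

for area, scores in provider_scores.items():
    print(f"{area}: coverage={weighted_coverage(scores):.2f}")
```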
In many scientific fields, simulations and analyses require compositions of computational entities such as web services, programs, and applications. In such fields, users may want various trade-offs between different qualities. Examples include: (i) performing a quick approximation vs. an accurate, but slower, experiment; (ii) using local, slower execution environments vs. remote, but advanced, computing facilities; (iii) using quicker approximation algorithms vs. computationally expensive algorithms on smaller data. However, such trade-offs are difficult to make, as many of these decisions today are either (a) hardwired into a fixed configuration and cannot be changed, or (b) require detailed systems knowledge and experimentation to determine which configuration to use. In this paper, we propose an approach that uses architectural models coupled with automated design space generation for making fidelity and timeliness trade-offs. We illustrate this approach through an example in the intelligence analysis domain.
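A minimal sketch of the design-space-generation idea, with all options, runtimes, and the utility function invented for illustration: enumerate algorithm/site configurations and rank them by a utility that trades fidelity against timeliness.

```python
# Sketch: enumerate a (hypothetical) design space and rank fidelity/timeliness trade-offs.
from itertools import product

algorithms = [("approximate", 0.6, 10), ("exact", 0.95, 120)]   # (name, fidelity, secs)
sites = [("local", 1.0), ("remote-hpc", 0.25)]                  # (name, time factor)

def utility(fidelity, seconds, deadline=60):
    """Prefer fidelity, but heavily penalize configurations that miss the deadline."""
    return fidelity if seconds <= deadline else fidelity * 0.1

space = []
for (alg, fid, secs), (site, factor) in product(algorithms, sites):
    runtime = secs * factor
    space.append((utility(fid, runtime), alg, site, runtime))

for u, alg, site, runtime in sorted(space, reverse=True):
    print(f"utility={u:.2f}  {alg} @ {site}  ({runtime:.0f}s)")
```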
As new market opportunities, technologies, platforms, and frameworks become available, systems require large-scale and systematic architectural restructuring to accommodate them. Today's architects have few techniques to help them plan this architecture evolution. In particular, they have little assistance in planning alternative evolution paths, trading off various aspects of the different paths, or knowing best practices for particular domains. In this paper, we describe an approach for planning and reasoning about architecture evolution. Our approach focuses on providing architects with the means to model prospective evolution paths and supporting analysis to select among these candidate paths. To demonstrate the usefulness of our approach, we show how it can be applied to an actual architecture evolution. In addition, we present some theoretical results about our evolution path constraint specification language.
The importance and potential advantages of a comprehensive product architecture description are well described in the literature. However, developing such a description takes additional resources, and it is difficult to keep it consistent with an evolving implementation. This paper presents an approach and industrial experience based on architecture recovery from source code at the truck manufacturer Scania CV AB. The extracted representation of the architecture is presented in several views and verified at the CAN signal level. Lessons learned are discussed.
Shared resources are an essential part of cloud computing. Virtualization and multi-tenancy provide a number of advantages for increasing resource utilization and providing on-demand elasticity. However, these cloud features also raise many security concerns related to cloud computing resources. In this paper, we propose an architecture and approach for leveraging the virtualization technology at the core of cloud computing to perform intrusion detection using hypervisor performance metrics. Through the use of virtual machine performance metrics gathered from hypervisors, such as packets transmitted/received, block device read/write requests, and CPU utilization, we demonstrate and verify that suspicious activities can be profiled without detailed knowledge of the operating system running within the virtual machines. The proposed hypervisor-based cloud intrusion detection system does not require additional software to be installed in virtual machines and has many advantages compared to host-based and network-based intrusion detection systems, and it can complement these traditional approaches to intrusion detection.
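A minimal sketch of hypervisor-metric profiling, not the paper's detection system; the baseline values, sample, and threshold are hypothetical: learn a per-metric baseline from past observations, then flag samples whose z-score exceeds a threshold, with no knowledge of the guest OS.

```python
# Sketch: z-score anomaly flagging over (hypothetical) hypervisor-level VM metrics.
import statistics

# Baseline observations gathered from the hypervisor during normal operation.
baseline = {
    "cpu_util":  [12.0, 15.0, 11.0, 14.0, 13.0, 12.5],   # percent
    "pkts_sent": [300, 320, 310, 290, 305, 315],          # per interval
}

def is_suspicious(metric, value, threshold=3.0):
    """Flag a sample whose deviation from the baseline exceeds `threshold` sigmas."""
    mean = statistics.mean(baseline[metric])
    std = statistics.stdev(baseline[metric])
    return abs(value - mean) / std > threshold

sample = {"cpu_util": 96.0, "pkts_sent": 9000}   # e.g., a VM drafted into a DoS
for metric, value in sample.items():
    print(metric, value, "SUSPICIOUS" if is_suspicious(metric, value) else "ok")
```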
Nowadays, our surrounding environment is increasingly scattered with various types of sensors. Due to their intrinsic properties and representation formats, they form small islands isolated from each other. In order to increase interoperability and unlock their full capabilities, we propose to represent device descriptions, including data and service invocation, with a common model that allows composing mashups of heterogeneous sensors. Pushing this paradigm further, we also propose to augment service descriptions with a discovery protocol that eases the automatic assimilation of knowledge. In this work, we describe the architecture supporting what can be called a Semantic Sensor Web-of-Things. As a proof of concept, we apply our proposal to the domain of smart buildings, composing a novel ontology covering heterogeneous sensing, actuation, and service invocation. Our architecture also emphasizes energy concerns and is optimized for constrained environments.
This paper presents ongoing research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environments. The proposed research contributes to the further definition of the Intercloud Federation Framework (ICFF), which is part of the general Intercloud Architecture Framework (ICAF) proposed by the authors in earlier works. ICFF attempts to address the interoperability and integration issues in provisioning on-demand multi-provider multi-domain heterogeneous cloud infrastructure services. The paper describes the major inter-cloud federation scenarios, which in general involve two types of federation: customer-side federation, which includes federation between cloud-based services and the customer campus or enterprise infrastructure, and provider-side federation, which is created by a group of cloud providers to outsource or broker their resources when provisioning services to customers. The proposed federated access control model uses the Federated Identity Management (FIDM) model, which can also be supported by trusted third-party entities such as a Cloud Service Broker (CSB) and/or a trust broker to establish dynamic trust relations between entities without previously existing trust. The research analyses different federated identity management scenarios and defines the basic architecture patterns and the main components of a distributed federated multi-domain Authentication and Authorisation infrastructure.
Using heterogeneous clouds has been considered as a way to improve the performance of big-data analytics for healthcare platforms. However, the problem of delay when transferring big data over the network needs to be addressed. The purpose of this paper is to analyze and compare existing cloud computing environments (PaaS, IaaS) in order to implement middleware services. Understanding the differences and similarities between cloud technologies will help in the interconnection of healthcare platforms. The paper provides a general overview of the techniques and interfaces for cloud computing middleware services and proposes a cloud architecture for healthcare. Cloud middleware enables heterogeneous devices to act as data sources and to integrate data from other healthcare platforms, but specific APIs need to be developed. Furthermore, security and management problems need to be addressed, given the heterogeneous nature of the communication and computing environment. The present paper fills a gap in the electronic healthcare register literature by providing an overview of cloud computing middleware services and standardized interfaces for integration with medical devices.