Biblio
Cyber-physical systems (CPSs), due to their direct influence on the physical world, have to meet extended security and dependability requirements. This is particularly true for CPS that operate in close proximity to humans or that control resources that, when tampered with, put all our lives at stake. In this paper, we review the challenges and some early solutions that arise at the architectural and operating-system level when we require cyber-physical systems and CPS infrastructure to withstand advanced and persistent threats. We found that although some of the challenges we identified are already matched by rudimentary solutions, further research is required to ensure sustainable and dependable operation of physically exposed CPS infrastructure and, more importantly, to guarantee graceful degradation in case of malfunction or attack.
A frequent but unvalidated claim is that signature-based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks against the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days with respect to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean detection rate of 17%). The detection rate is, however, greater overall for theoretically known attacks (a mean of 54%). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analyses of these aspects suggest that a conservative estimate of Snort's zero-day detection rate is 8.2%.
The importance and potential advantages of a comprehensive product architecture description are well described in the literature. However, developing such a description takes additional resources, and it is difficult to keep it consistent with an evolving implementation. This paper presents an approach, and industrial experience with it, based on architecture recovery from source code at the truck manufacturer Scania CV AB. The extracted representation of the architecture is presented in several views and verified at the CAN-signal level. Lessons learned are discussed.
Using heterogeneous clouds has been proposed as a way to improve the performance of big-data analytics for healthcare platforms. However, the delay incurred when transferring big data over the network needs to be addressed. The purpose of this paper is to analyze and compare existing cloud computing environments (PaaS, IaaS) in order to implement middleware services. Understanding the differences and similarities between cloud technologies will help in the interconnection of healthcare platforms. The paper provides a general overview of the techniques and interfaces for cloud computing middleware services, and proposes a cloud architecture for healthcare. Cloud middleware enables heterogeneous devices to act as data sources and to integrate data from other healthcare platforms, but specific APIs need to be developed. Furthermore, security and management problems need to be addressed, given the heterogeneous nature of the communication and computing environment. The present paper fills a gap in the electronic healthcare register literature by providing an overview of cloud computing middleware services and standardized interfaces for integration with medical devices.
The term cloud computing is not something that appeared overnight; it may date back to the time when computer systems remotely accessed applications and services. Cloud computing is a ubiquitous technology receiving huge attention in the scientific and industrial communities. It is a next-generation information technology architecture that offers on-demand access over the network: a dynamic, virtualized, scalable, pay-per-use model delivered over the Internet. In a cloud computing environment, a cloud service provider offers a “house of resources” that includes applications, data, runtime, middleware, operating systems, virtualization, servers, data storage and sharing, and networking, and tries to take on most of the client's overhead. Cloud computing offers many benefits, but the journey to the cloud is not easy: it has several pitfalls along the road, because most services are outsourced to third parties, which adds a considerable level of risk. Cloud computing suffers from several issues, and among the most significant are security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of client and service provider, and we believe security must be information-centric, adaptive, proactive, and built in. Cloud computing and its security are emerging study areas. In this paper, we discuss data security in the cloud at the service-provider end and propose a network storage architecture for data that ensures availability, reliability, scalability, and security.
The wide deployment of general purpose and embedded microprocessors has emphasized the need for defenses against cyber-attacks. Due to the globalized supply chain, however, there are several stages where a processor can be maliciously modified. The most promising stage, and the hardest during which to inject the hardware trojan, is the fabrication stage. As modern microprocessor chips are characterized by very dense, billion-transistor designs, such attacks must be very carefully crafted. In this paper, we demonstrate zero overhead malicious modifications on both high-performance and embedded microprocessors. These hardware trojans enable privilege escalation through execution of an instruction stream that excites the necessary conditions to make the modification appear. The minimal footprint, however, comes at the cost of a small window of attack opportunities. Experimental results show that malicious users can gain escalated privileges within a few million clock cycles. In addition, no system crashes were reported during normal operation, rendering the modifications transparent to the end user.
This paper presents a novel architecture for identity and access management (IAM) in a multi-tier cloud infrastructure, in which most services are supported by massive-scale data centres over the Internet. A multi-tier cloud infrastructure uses the tier-based model from software engineering to provide resources in different tiers. In this paper we focus on the design and implementation of a centralized identity and access management system for the multi-tier cloud infrastructure. First, we discuss identity and access management requirements in such an environment and propose our solution to address them. Next, we discuss approaches to improve the performance of the IAM system and make it scalable to billions of users. Finally, we present experimental results based on the current deployment in the SAVI Testbed. We show that our IAM system outperforms previously proposed IAM systems for cloud infrastructure by a factor of 9 in throughput when the number of users is small, and handles about 50 times more requests at peak usage. Because our architecture combines green threads with load-balanced processes, it uses fewer system resources and easily scales up to handle a high number of requests.
This paper deals with the design and implementation of the post-quantum public-key algorithm McEliece. Seamless incorporation of a new error generator and a new SHA-3 module provides higher indeterminacy and more randomization of the original McEliece algorithm, achieving the CCA2 security standard. Owing to the lightweight, high-speed implementation of the SHA-3 module, the proposed 128-bit-secure McEliece architecture provides 6% higher performance in only 0.78 times the area of the best known existing design.
Today, cloud networking, which is the ability to connect a user with his cloud services and to interconnect these services in an inter-cloud approach, is one of the most active recent research areas in the cloud computing community. Its main drawback is the lack of Quality of Service (QoS) guarantees and management in conformance with a corresponding Service Level Agreement (SLA). Several research works have been proposed for SLA establishment in cloud computing, but not in cloud networking. In this paper, we propose an architecture for self-establishing an end-to-end service level agreement between a Cloud Service User (CSU) and a Cloud Service Provider (CSP) in a cloud networking environment. We focus on QoS parameters for NaaS and IaaS services. The architecture ensures self-establishment of the proposed SLA using autonomic cloud managers.
This paper presents a programmable management framework for SDN networks. The concept is in line with the SDN philosophy: it can be programmed from scratch, and the implemented management functions can be case dependent. The concept introduces a new node in the SDN architecture, namely the SDN manager. In compliance with the latest trends in network management, the approach allows for embedded management of all network nodes and gradual implementation of management functions, providing lifecycle management of their code as well as the ability to update code on the fly. The described concept is a bottom-up approach whose key element is a programmable distributed execution environment (PDEE) based on well-established technologies like OSGI and FIPA. The described management idea has a strong impact on the evolution of the SDN architecture, because the proposed distributed execution environment is generic and can therefore be used not only for management but also for distributing control or application functions.
The Internet of Things (IOT) is a network of networks in which massive numbers of objects or things are interconnected through the network. The Internet of Things brings along many new possible applications to improve human comfort and quality of life. Complex systems such as the Internet of Things are difficult to manage because of the emergent behaviours that arise from the complex interactions between their constituent parts. Our key contribution in this paper is a proposed multiagent web for the Internet of Things, together with a corresponding data management architecture. The multiagent architecture provides autonomic characteristics that make the IOT manageable. In addition, the multiagent web allows for flexible processing on heterogeneous platforms, as we leverage web protocols such as HTTP and language-independent data formats such as JSON for communication between agents. The proposal enables a scalable architecture and infrastructure for a web-scale multiagent Internet of Things.
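A minimal sketch of the communication style this abstract describes, under assumptions of my own: the agent names, endpoint, and message fields below are hypothetical illustrations, not the paper's actual schema.

```python
import json
from urllib import request

# Hypothetical agent-to-agent message: a language-independent JSON
# payload carried over plain HTTP, in the spirit of the abstract.
msg = {
    "sender": "agent://thermostat-12",
    "receiver": "agent://hvac-controller",
    "performative": "request",            # FIPA-style speech act
    "content": {"action": "set_temperature", "value_c": 21.5},
}

def send(url, message):
    """POST the JSON message to the receiving agent's HTTP inbox."""
    req = request.Request(
        url,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# send("http://hvac-controller.local/inbox", msg)  # endpoint is hypothetical
```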
Routers in the Content-Centric Networking (CCN) architecture maintain state for all pending content requests, so as to be able to later return the corresponding content. By employing stateful forwarding, CCN supports native multicast, enhances security and enables adaptive forwarding, at the cost of excessive forwarding state that raises scalability concerns. We propose a semi-stateless forwarding scheme in which, instead of tracking each request at every on-path router, requests are tracked at every d hops. At intermediate hops, requests gather reverse path information, which is later used to deliver responses between routers using Bloom filter-based stateless forwarding. Our approach effectively reduces forwarding state, while preserving the advantages of CCN forwarding. Evaluation results over realistic ISP topologies show that our approach reduces forwarding state by 54%-70% in unicast delivery, without any bandwidth penalties, while in multicast delivery it reduces forwarding state by 34%-55% at the expense of 6%-13% in bandwidth overhead.
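To make the semi-stateless scheme concrete, here is a minimal sketch under assumptions of my own: a toy Bloom filter and string link identifiers stand in for the paper's actual encoding. Requests gather return-path links into the filter between stateful hops; stateless routers then forward the response on whichever outgoing link the filter matches.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter over an integer bit array (illustrative only)."""
    def __init__(self, size_bits=128, hashes=3):
        self.size, self.k, self.bits = size_bits, hashes, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

# While a request travels r1 -> r2 -> r3 -> r4, it records the links
# the response must traverse on the way back.
rpath = BloomFilter()
for link in ["r4->r3", "r3->r2", "r2->r1"]:
    rpath.add(link)

# A stateless router (here r3) forwards the response on any outgoing
# link found in the filter; false positives cause rare extra copies.
def next_links(outgoing, bf):
    return [l for l in outgoing if l in bf]

print(next_links(["r3->r2", "r3->r5"], rpath))  # ['r3->r2'], barring false positives
```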
Tactical communication networks lack infrastructure and are highly dynamic, resource-constrained, and commonly targeted by adversaries. Designing efficient and secure applications for this environment is extremely challenging. An increasing reliance on group-oriented, tactical applications such as chat, situational awareness, and real-time video has generated renewed interest in IP multicast delivery. However, a lack of developer tools, software libraries, and standard paradigms to achieve secure and reliable multicast impedes the potential of group-oriented communication and often leads to inefficient communication models. In this paper, we propose an architecture for secure and reliable group-oriented communication. The architecture utilizes NSA Suite B cryptography and may be appropriate for handling sensitive and DoD classified data up to SECRET. Our proposed architecture is unique in that it requires no infrastructure, follows NSA CSfC guidance for layered security, and leverages NORM for multicast data reliability. We introduce each component of the architecture and describe a Linux-based software prototype.
Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often at our own initiative, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we, and the services we use, increasingly depend directly and indirectly on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in its sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society becomes an increasingly complex human-cyber-physical system with growing reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who deceptively extract private information, manipulate beliefs, and subvert decisions. While completely solving these challenges would require a new science of resilient, secure, and trustworthy networked sensing and decision systems, combining the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
We propose that, to address the growing problems with complexity and data volumes in HPC security, we need to refactor how we look at data by creating tools that not only select data, but analyze and represent it in a manner well suited to intuitive analysis. We propose a set of rules describing what this means, and provide a number of production-quality tools that represent our current best effort at implementing these ideas.
We introduce a cloud-enabled defense mechanism for Internet services against network and computational Distributed Denial-of-Service (DDoS) attacks. Our approach performs selective server replication and intelligent client re-assignment, turning victim servers into moving targets for attack isolation. We introduce a novel system architecture that leverages a "shuffling" mechanism to compute the optimal re-assignment strategy for clients on attacked servers, effectively separating benign clients from even sophisticated adversaries that persistently follow the moving targets. We introduce a family of algorithms to optimize the runtime client-to-server re-assignment plans and minimize the number of shuffles to achieve attack mitigation. The proposed shuffling-based moving target mechanism enables effective attack containment using fewer resources than attack dilution strategies using pure server expansion. Our simulations and proof-of-concept prototype using Amazon EC2 [1] demonstrate that we can successfully mitigate large-scale DDoS attacks in a small number of shuffles, each of which incurs a few seconds of user-perceived latency.
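A minimal sketch of the shuffling idea, under simplifying assumptions of my own (uniform random re-assignment and per-server attack detection); the paper's algorithms instead compute optimized re-assignment plans to minimize the number of shuffles.

```python
import random

def shuffle_round(suspects, servers):
    """Randomly re-assign the still-suspect clients to replica servers."""
    return {c: random.choice(servers) for c in suspects}

def mitigate(clients, attackers, servers, max_rounds=30):
    """Shrink the suspect set each shuffle: clients landing on
    un-attacked servers are exonerated, progressively isolating
    persistent attackers from benign clients."""
    suspects = set(clients)
    for rnd in range(1, max_rounds + 1):
        assign = shuffle_round(suspects, servers)
        attacked = {assign[a] for a in attackers & suspects}
        suspects = {c for c in suspects if assign[c] in attacked}
        if suspects == attackers:
            return rnd, suspects
    return max_rounds, suspects

clients = {f"c{i}" for i in range(100)}
rounds, isolated = mitigate(clients, attackers={"c3", "c42"},
                            servers=[f"s{i}" for i in range(8)])
print(f"attackers isolated after {rounds} shuffles: {isolated}")
```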
Traffic from mobile wireless networks has been growing at a fast pace in recent years and is expected to surpass wired traffic very soon. Service providers face significant challenges at such scales, including providing seamless mobility, efficient data delivery, security, and provisioning capacity at the wireless edge. In the MobilityFirst project, we have been exploring clean-slate enhancements to the network protocols that can inherently support at-scale mobility and trustworthiness in the Internet. An extensible data plane using pluggable compute-layer services is a key component of this architecture. We believe these extensions can be used to implement in-network services that enhance the mobile end-user experience, either by off-loading work and/or traffic from mobile devices, or by enabling en-route service adaptation through context awareness (e.g., knowing the current access bandwidth). In this work we present details of the architectural support for in-network services within MobilityFirst, and propose protocol and service-API extensions to flexibly address these pluggable services from end-points. As a demonstrative example, we implement an in-network service that performs rate adaptation when delivering video streams to mobile devices that experience variable connection quality. We present details of our deployment and evaluation of the non-IP protocols, along with the compute-layer extensions, on the GENI testbed, where we used a set of programmable nodes across 7 distributed sites to configure a MobilityFirst network with hosts, routers, and in-network compute services.
Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must take center stage. Despite the emergence of information-centric solutions, their performance management needs have not been adequately addressed, yet they are essential for network operations and crucial for information-centric approaches to succeed. Performance management and traffic engineering approaches are required, for instance, to control routing, to configure the logic of cache replacement policies, and to control decisions about where to cache. There is therefore an urgent need to manage information-centric resources, in effect to constitute their missing management and control plane, which is essential for their success as clean-slate technologies. In this thesis we aim to provide solutions to the crucial problems that remain, addressing the so-far-neglected management of information-centric approaches with a focus on the key aspects of route and cache management.
The Smart Grid is the trend in next-generation power distribution and network management that enables two-way interactive communication and operation between consumers and suppliers, so as to achieve intelligent resource management and optimization. Wireless mesh network technology is a promising infrastructure solution to support these smart functionalities, but it has inherent vulnerabilities and cyber-attack risks that must be addressed. Because the Smart Grid relies heavily on the underlying communication networks, their security and dependability are critical to the entire smart grid technology. Several studies have been conducted in the field of Smart Grid security, but few have focused on the dependability and associated resource analysis of control center networks. In this paper, we investigate dependability modeling and resource allocation in redundant communication networks by adopting two mathematical approaches, Reliability Block Diagrams (RBD) and Stochastic Petri Nets (SPNs), to analyze the dependability of control center networks in a Smart Grid environment. We apply the proposed modeling approach in an extensive case study to evaluate the availability of smart grid networks with different redundancy mechanisms. A combination of dependability models and reliability importance is used to analyze network availability according to the most important components. We also show how network availability varies with Mean Time to Failure (MTTF) in different network architectures.
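The RBD side of such an analysis composes in closed form from per-component MTTF and MTTR; a minimal sketch with invented figures (the paper's SPN models are not reproduced here):

```python
def availability(mttf, mttr):
    """Steady-state availability of one component: A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

def series(*blocks):
    """RBD series composition: every block must be up."""
    a = 1.0
    for b in blocks:
        a *= b
    return a

def parallel(*blocks):
    """RBD parallel (redundant) composition: at least one block up."""
    u = 1.0
    for b in blocks:
        u *= 1.0 - b
    return 1.0 - u

# Hypothetical control-center segment: two redundant routers in series
# with a single firewall (MTTF/MTTR in hours; values are assumptions).
router = availability(20_000, 8)
firewall = availability(50_000, 4)
print(f"network availability: {series(parallel(router, router), firewall):.6f}")
```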
Smart grid is a technological innovation that improves the efficiency, reliability, economics, and sustainability of electricity services. It plays a crucial role in modern energy infrastructure. The main challenges for smart grids, however, are how to manage different types of front-end intelligent devices, such as power assets and smart meters, efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate to address these challenges since it has several desirable properties, such as energy saving, cost saving, agility, scalability, and flexibility. In this paper, we propose a secure cloud computing based framework for big data information management in smart grids, which we call “Smart-Frame.” The main idea of our framework is to build a hierarchical structure of cloud computing centers to provide different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signatures, and proxy re-encryption to address critical security issues of the proposed framework.
When a user accesses a resource, the accounting process at the server side keeps track of the resource usage so that the user can be charged. In cloud computing, a user may use more than one service provider and may need two independent service providers to work together. In this user-centric context, the user is the owner of the information and has the right to authorize a third-party application to access the protected resource on the user's behalf. Therefore, the user also needs to monitor the resource usage he has granted to third-party applications. Existing accounting protocols, however, were proposed to monitor resource usage only in terms of how the user consumes resources from the service provider. This paper proposes a user-centric accounting model, called AccAuth, which adds an accounting layer to the OAuth protocol. A prototype was implemented, and the proposed model was evaluated against the standard requirements. The results showed that AccAuth passed all the requirements.
Efficient authentication, authorization, and accounting (AAA) management mechanisms will be key for the widespread adoption of SDN experimentation facilities beyond the confines of academic labs. In particular, we are interested in a robust AAA infrastructure to identify experimenters, police their actions based on the associated roles, facilitate secure resource sharing, and provide for detailed accountability. Currently, however, said facilities are forced to employ a patchy AAA infrastructure which lacks several of the aforementioned features. This paper proposes a certificate-based AAA architecture for SDN experimental facilities, which is by design both secure and flexible. As this work is implementation-driven and aims for a short deployment cycle in current facilities, we also outline a credible migration path which we are currently pursuing actively.
Location privacy preservation has become an important issue in providing location-based services (LBSs). When mobile users report their locations to the LBS server or third-party servers, they risk leaking their location information if such servers are compromised. To address this issue, we propose a Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing driven by a Markov chain model. The LPPS deploys distributed cache proxies in the most frequently visited areas to store the most popular location-related data and push it to mobile users passing by. Because mobile users receive the popular location-related data from the cache proxies without reporting their real locations, their location privacy is well preserved, and the scheme is shown to achieve k-anonymity. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with low communication overhead.
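A sketch of the Markov chain ingredient, with invented regions and transition probabilities of my own: the chain's stationary distribution identifies the most frequently visited areas, where cache proxies are worth deploying; each proxy then pushes its popular content so users never transmit a location.

```python
# Hypothetical first-order Markov mobility model: per-region
# transition probabilities (each row sums to 1).
transitions = {
    "mall":    {"park": 0.6, "station": 0.4},
    "park":    {"mall": 0.5, "station": 0.5},
    "station": {"mall": 0.7, "park": 0.3},
}

def stationary(transitions, iters=200):
    """Power-iterate the chain to estimate long-run visit frequencies."""
    regions = list(transitions)
    p = {r: 1.0 / len(regions) for r in regions}
    for _ in range(iters):
        q = dict.fromkeys(regions, 0.0)
        for r, outs in transitions.items():
            for s, prob in outs.items():
                q[s] += p[r] * prob
        p = q
    return p

freq = stationary(transitions)
# Deploy proxies in the top visited regions; each proxy *pushes* its
# popular items to passers-by, so no real location is ever reported.
proxies = sorted(freq, key=freq.get, reverse=True)[:2]
print(proxies, freq)
```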
The Trusted Platform Module (TPM) has gained popularity in computing systems as a hardware security approach. TPM provides boot-time security by verifying platform integrity, including hardware and software. However, once the software is loaded, TPM can no longer protect its execution. In this work, we propose a dynamic TPM design that performs control flow checking to protect programs from runtime attacks. The control flow checker is integrated at the commit stage of the processor pipeline. The program's control flow is verified to defend against attacks such as stack smashing via buffer overflow and code reuse. We implement the proposed dynamic TPM design on an FPGA to achieve high performance, low cost, and the flexibility of easy functionality upgrades. In our design, neither the source code nor the Instruction Set Architecture (ISA) needs to be changed. Benchmark simulations demonstrate less than 1% performance penalty on the processor and effective software protection from these attacks.
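A software model of the commit-stage check, with an invented control-flow whitelist and addresses of my own (the design itself performs this in hardware beside the pipeline, with no ISA or source changes):

```python
# Hypothetical whitelist extracted offline from the program binary:
# for each branch/return site, the set of legitimate targets.
cfg = {
    0x400510: {0x400520, 0x400560},   # conditional branch targets
    0x4005a0: {0x400400},             # return site -> valid return address
}

def commit_stage_check(pc, target):
    """Model of the checker at the pipeline's commit stage: trap any
    committed control transfer whose target the CFG does not allow."""
    allowed = cfg.get(pc)
    if allowed is not None and target not in allowed:
        raise RuntimeError(f"CFI violation at {hex(pc)} -> {hex(target)}")

commit_stage_check(0x4005a0, 0x400400)        # legitimate return: passes
try:
    commit_stage_check(0x4005a0, 0xdeadbeef)  # smashed stack: trapped
except RuntimeError as e:
    print(e)
```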
Near Field Communication (NFC)-based mobile phone services offer a lifeline to the under-appreciated multiapplication smart card initiative. The initiative could effectively replace heavy wallets full of smart cards for mundane tasks. However, the issue of the deployment model still lingers. Possible approaches include, but are not restricted to, the User Centric Smart card Ownership Model (UCOM), the GlobalPlatform Consumer Centric Model, and the Trusted Service Manager (TSM). In addition, the multiapplication smart card architecture can be a GlobalPlatform Trusted Execution Environment (TEE) and/or a User Centric Tamper-Resistant Device (UCTD), which provide cross-device security and privacy-preservation platforms to their users. In the multiapplication smart card environment, there might not be a prior off-card trusted relationship between a smart card and an application provider. Therefore, as a possible solution to overcome the absence of prior trusted relationships, this paper proposes the concept of a Trusted Platform Module (TPM) for smart cards (embedded devices) that can act as a point of reference for establishing the necessary trust between the device and an application provider, and among applications.