Biblio

Filters: Keyword is distributed processing
2019-12-30
Bousselham, Mhidi, Benamar, Nabil, Addaim, Adnane.  2019.  A new Security Mechanism for Vehicular Cloud Computing Using Fog Computing System. 2019 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS). :1–4.

Recently, Vehicular Cloud Computing (VCC) has become an attractive solution that supports vehicles' computing and storage service requests. This computing paradigm ensures reduced energy consumption and low traffic congestion. Additionally, VCC has emerged as a promising technology that provides a virtual platform for processing data using vehicles as infrastructure or centralized data servers. However, vehicles are deployed in open environments where they are vulnerable to various types of attacks. Furthermore, traditional cryptographic algorithms fail to ensure security once their keys are compromised. In order to ensure a secure vehicular platform, we introduce in this paper decoy technology (DT) and user behavior profiling (UBP) as an alternative solution to address data security, privacy, and trust in vehicular cloud servers using a fog computing architecture. In the case of malicious behavior, our mechanism shows high efficiency by delivering decoy files in such a way that the intruder is unable to differentiate between the original and the decoy file.
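As a rough sketch of the decoy-plus-profiling idea (not the authors' implementation), the snippet below keeps a per-user request-rate profile and serves a decoy blob when a request deviates strongly from that profile. The class name, z-score threshold, and window-based profiling are illustrative assumptions.

```python
# Hedged sketch: a user-behavior profile tracks per-user request rates; when a
# request deviates strongly from the learned profile, the storage front-end
# returns a decoy file instead of the original. Thresholds are illustrative.
from collections import defaultdict
from statistics import mean, pstdev

class DecoyGate:
    def __init__(self, z_threshold=3.0):
        self.history = defaultdict(list)   # user -> request counts per time window
        self.z_threshold = z_threshold

    def record_window(self, user, request_count):
        self.history[user].append(request_count)

    def is_anomalous(self, user, request_count):
        samples = self.history[user]
        if len(samples) < 10:              # not enough history yet
            return False
        mu, sigma = mean(samples), pstdev(samples) or 1.0
        return (request_count - mu) / sigma > self.z_threshold

    def serve(self, user, request_count, original_blob, decoy_blob):
        # Deliver an indistinguishable decoy when behavior looks malicious.
        return decoy_blob if self.is_anomalous(user, request_count) else original_blob
```

A front end would call record_window() during normal traffic to build the profile and route all reads through serve().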

2019-03-06
Cuzzocrea, A., Damiani, E..  2018.  Pedigree-ing Your Big Data: Data-Driven Big Data Privacy in Distributed Environments. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :675–681.
This paper introduces a general framework for supporting data-driven privacy-preserving big data management in distributed environments, such as emerging Cloud settings. The proposed framework can be viewed as an alternative to classical approaches where the privacy of big data is ensured via security-inspired protocols that check several (protocol) layers in order to achieve the desired privacy. Unfortunately, this injects considerable computational overhead into the overall process, thus introducing relevant challenges to be considered. Our approach instead tries to recognize the "pedigree" of suitable summary data representatives computed on top of the target big data repositories, hence avoiding the computational overhead due to protocol checking. We also provide a relevant realization of the framework above, the so-called Data-dRIven aggregate-PROvenance privacy-preserving big Multidimensional data (DRIPROM) framework, which specifically considers multidimensional data as the case of interest.
Jaeger, D., Cheng, F., Meinel, C..  2018.  Accelerating Event Processing for Security Analytics on a Distributed In-Memory Platform. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech). :634–643.

The analysis of security-related event logs is an important step in the investigation of cyber-attacks. It allows tracing malicious activities and lets a security operator find out what has happened. However, since IT landscapes are growing in size and diversity, the amount of events and their highly different representations are becoming a Big Data challenge. Unfortunately, current solutions for the analysis of security-related events, so-called Security Information and Event Management (SIEM) systems, are not able to keep up with the load. In this work, we propose a distributed SIEM platform that makes use of highly efficient distributed normalization and persists event data into an in-memory database. We implement the normalization on common distribution frameworks, i.e., Spark, Storm, Trident and Heron, and compare their performance with our custom-built distribution solution. Additionally, different tuning options are introduced and their speed advantage is presented. In the end, we show how the writing into an in-memory database can be tuned to achieve optimal persistence speed. Using the proposed approach, we are able to not only fully normalize, but also persist more than 20 billion events per day with relatively small client hardware. Therefore, we are confident that our approach can handle the load of events in even very large IT landscapes.
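As a hedged illustration of the normalization stage only, the sketch below parses heterogeneous log lines into a common event schema in parallel with Python's multiprocessing. The regex and field names are illustrative; the paper's system runs normalization on Spark/Storm/Trident/Heron and persists into an in-memory database, which is omitted here.

```python
import re
from multiprocessing import Pool

# Example pattern for one event type; a real normalizer holds many such rules.
SSH_FAIL = re.compile(
    r"(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) sshd\[\d+\]: "
    r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)")

def normalize(line):
    m = SSH_FAIL.search(line)
    if not m:
        return None
    event = m.groupdict()
    event["event_type"] = "auth_failure"
    return event

def normalize_batch(lines, workers=4):
    with Pool(workers) as pool:
        return [e for e in pool.map(normalize, lines) if e]

if __name__ == "__main__":
    sample = ["Mar  1 12:00:01 web1 sshd[999]: Failed password for root from 10.0.0.5"]
    print(normalize_batch(sample))
```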

2019-02-13
Ahmed, N., Talib, M. A., Nasir, Q..  2018.  Program-flow attestation of IoT systems software. 2018 15th Learning and Technology Conference (L&T). :67–73.
Remote attestation is the process of measuring the integrity of a device over the network by detecting modifications of software or hardware from the original configuration. Several remote software-based attestation mechanisms have been introduced that rely on strict timing and other impractical constraints, which make them inconvenient for IoT systems. Although some research has addressed these issues, it integrates trusted hardware into the attested devices to accomplish its aim, which is costly and not convenient for many use cases. In this paper, we propose “Dual Attestation”, which includes two stages: static and dynamic. The static attestation phase checks the memory of the attested device. The dynamic attestation technique checks the execution correctness of the application code and can detect runtime attacks. The objectives are to minimize the overhead and detect these attacks by developing an optimized dynamic technique that checks the application program flow. The optimization will be done on both the prover and the verifier sides.
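The sketch below illustrates the two attestation phases in spirit only; it is not the authors' protocol. Static attestation hashes the program memory image against a reference, and dynamic attestation accumulates a nonce-seeded hash chain over the IDs of executed basic blocks so a verifier can compare the program flow against expected paths. Block IDs and reference values are assumptions for the example.

```python
import hashlib

def static_attest(memory_image: bytes, reference_digest: str) -> bool:
    # Static phase: the memory image must hash to the known-good value.
    return hashlib.sha256(memory_image).hexdigest() == reference_digest

def flow_digest(executed_block_ids, nonce: bytes) -> str:
    # Dynamic phase: chain the executed basic-block IDs into one digest.
    h = hashlib.sha256(nonce)               # nonce prevents replaying old reports
    for block_id in executed_block_ids:
        h.update(block_id.to_bytes(4, "little"))
    return h.hexdigest()

def dynamic_attest(executed_block_ids, nonce, valid_path_digests) -> bool:
    return flow_digest(executed_block_ids, nonce) in valid_path_digests
```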
2019-01-21
Feng, S., Xiong, Z., Niyato, D., Wang, P., Leshem, A..  2018.  Evolving Risk Management Against Advanced Persistent Threats in Fog Computing. 2018 IEEE 7th International Conference on Cloud Networking (CloudNet). :1–6.
With its capability to support mobile computing demands with small delay, fog computing has gained tremendous popularity. Nevertheless, its highly virtualized environment is vulnerable to cyber attacks such as the emerging Advanced Persistent Threat (APT) attacks. In this paper, we propose a novel approach to cyber risk management for the fog computing platform. In particular, we adopt cyber-insurance as a tool for neutralizing cyber risks to the fog computing platform. We consider a fog computing platform containing a group of fog nodes. The platform is composed of three main entities, i.e., the fog computing provider, the attacker, and the cyber-insurer. The fog computing provider dynamically optimizes the allocation of its defense computing resources to improve the security of the fog computing platform. Meanwhile, the attacker dynamically adjusts the allocation of its attack resources to improve the probability of a successful attack. Additionally, to protect against potential loss due to attacks, the provider also makes a dynamic decision on the purchase ratio of cyber-insurance from the cyber-insurer for each fog node. Thereafter, the cyber-insurer accordingly determines the premium of cyber-insurance for each fog node. In our formulated dynamic Stackelberg game, the attacker and the provider act as the followers, and the cyber-insurer acts as the leader. In the lower level, we formulate an evolutionary subgame to analyze the provider's defense and cyber-insurance subscription strategies as well as the attacker's attack strategy. In the upper level, the cyber-insurer optimizes its premium determination strategy, taking into account the evolutionary equilibrium of the lower-level evolutionary subgame. We analytically prove that the evolutionary equilibrium is unique and stable. Moreover, we provide a series of insightful analytical and numerical results on the equilibrium of the dynamic Stackelberg game.
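To make the lower-level evolutionary subgame concrete, here is a toy replicator-dynamics sketch: a population of fog nodes chooses between "insure" and "self-defend", and strategy shares drift toward whichever strategy earns an above-average payoff. The payoff numbers and step size are invented for illustration and are not taken from the paper.

```python
import numpy as np

payoff = np.array([[3.0, 1.0],    # payoff of "insure"      vs (insure, self-defend) opponent
                   [2.5, 2.0]])   # payoff of "self-defend" vs (insure, self-defend) opponent

x = np.array([0.5, 0.5])          # initial population shares
for _ in range(500):
    fitness = payoff @ x
    avg = x @ fitness
    x = x + 0.1 * x * (fitness - avg)     # discrete replicator update
    x = np.clip(x, 0, 1); x = x / x.sum()

print("long-run shares (insure, self-defend):", x.round(3))
```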
Kittmann, T., Lambrecht, J., Horn, C..  2018.  A privacy-aware distributed software architecture for automation services in compliance with GDPR. 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA). 1:1067–1070.

The recently enforced General Data Protection Regulation (GDPR) aims to protect all EU citizens from privacy and data breaches in an increasingly data-driven world. Consequently, this deeply affects the factory domain and its human-centric automation paradigm. In particular, collaboration between humans and machines, as well as individualized support, is enabled and enhanced by processing audio and video data, e.g., by algorithms that re-identify humans or analyse human behaviour. We outline the most significant impacts of this recent legal change on the automation domain at a glance. Furthermore, we introduce a representative scenario from production, deduce its legal implications under GDPR, and derive a privacy-aware software architecture. This architecture covers modern virtualization techniques along with authorization and end-to-end encryption to ensure secure communication between distributed services and databases for distinct purposes.

2018-12-10
Oyekanlu, E..  2018.  Distributed Osmotic Computing Approach to Implementation of Explainable Predictive Deep Learning at Industrial IoT Network Edges with Real-Time Adaptive Wavelet Graphs. 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :179–188.
Developing analytics solutions at the edge of large-scale Industrial Internet of Things (IIoT) networks, close to where data is being generated, in most cases involves developing those solutions from the ground up. However, this approach increases IoT development costs and system complexity, delays time to market, and ultimately lowers the competitive advantages associated with delivering next-generation IoT designs. To overcome these challenges, existing, widely available hardware can be utilized to participate successfully in distributed edge computing for IIoT systems. In this paper, an osmotic computing approach is used to illustrate how distributed osmotic computing and existing low-cost hardware may be utilized to solve a complex, compute-intensive Explainable Artificial Intelligence (XAI) deep learning problem from the edge, through the fog, to the network cloud layer of IIoT systems. At the edge layer, the C28x digital signal processor (DSP), an existing low-cost, embedded, real-time DSP with very wide deployment and integration in several IoT industries, is used as a case study for constructing real-time graph-based Coiflet wavelets that could be used for several analytic applications, including deep learning pre-processing at the edge and fog layers of IIoT networks. Our implementation is the first known application of the fixed-point C28x DSP to construct Coiflet wavelets. Coiflet wavelets are constructed in the form of an osmotic microservice, using embedded low-level machine language to program the C28x at the network edge. With the graph-based approach, it is shown that an entire Coiflet wavelet distribution can be generated from only one wavelet stored in the C28x-based edge device, which can lead to significant memory savings at the edge of IoT networks. The Pearson correlation coefficient is used to select an edge-generated Coiflet wavelet, and the selected wavelet is used at the fog layer for pre-processing and denoising IIoT data to improve data quality for a fog-layer deep learning application. Parameters for implementing deep learning at the fog layer using LSTM networks have been determined in the cloud. For XAI, communication network noise is shown to have a significant impact on the results of predictive deep learning at the IIoT network fog layer.
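As a hedged workstation-side sketch of two of the ideas above, the snippet below (1) performs Coiflet wavelet-threshold denoising of a noisy signal and (2) uses the Pearson correlation coefficient against a clean reference to select a Coiflet family member. It uses PyWavelets and a synthetic signal, whereas the paper generates the wavelets on a fixed-point C28x DSP; the threshold rule and level are common defaults, not the paper's settings.

```python
import numpy as np
import pywt

def denoise(signal, wavelet, level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.4 * np.random.randn(t.size)

# Pearson-correlation-based selection of the Coiflet wavelet to use downstream.
best = max(["coif1", "coif2", "coif3", "coif4", "coif5"],
           key=lambda w: np.corrcoef(clean, denoise(noisy, w))[0, 1])
print("selected wavelet:", best)
```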
Castiglione, A., Choo, K. Raymond, Nappi, M., Ricciardi, S..  2017.  Context Aware Ubiquitous Biometrics in Edge of Military Things. IEEE Cloud Computing. 4:16–20.

Edge computing can potentially play a crucial role in enabling user authentication and monitoring through context-aware biometrics in military/battlefield applications. For example, in the Internet of Military Things (IoMT) or Internet of Battlefield Things (IoBT), an increasing number of ubiquitous sensing and computing devices worn by military personnel and embedded within military equipment (combat suits, instrumented helmets, weapon systems, etc.) are capable of acquiring a variety of static and dynamic biometrics (e.g., face, iris, periocular, fingerprints, heart rate, gait, gestures, and facial expressions). Such devices may also be capable of collecting operational context data. Collectively, these data can be used to perform context-adaptive authentication in the wild and continuous monitoring of soldiers' psychophysical condition in a dedicated edge computing architecture.

Wang, Y., Ren, Z., Zhang, H., Hou, X., Xiao, Y..  2018.  “Combat Cloud-Fog” Network Architecture for Internet of Battlefield Things and Load Balancing Technology. 2018 IEEE International Conference on Smart Internet of Things (SmartIoT). :263–268.

Recently, armed forces have sought to bring Internet of Things technology to the battlefield to improve the effectiveness of military operations, and so the Internet of Battlefield Things (IoBT) has come into view. Due to the high processing latency and low reliability of the “combat cloud” network for IoBT in the battlefield environment, a novel “combat cloud-fog” network architecture for IoBT is proposed in this paper. The novel architecture adds a fog computing layer, consisting of edge network equipment close to the users, to the “combat cloud” network to reduce latency and enhance reliability. Meanwhile, since the computing capability of the fog equipment is weak, it is necessary to implement distributed computing in the “combat cloud-fog” architecture. Therefore, the distributed-computing load balancing problem of the fog computing layer is researched, and a distributed generalized diffusion strategy is proposed to decrease latency and enhance the stability and survivability of the “combat cloud-fog” network system. The simulation results indicate that the load balancing strategy based on the generalized diffusion algorithm decreases task response latency and effectively supports the efficient processing of battlefield information, which makes it suitable for the “combat cloud-fog” network architecture.
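For background, a minimal diffusion-style load-balancing sketch is shown below: in each round every node exchanges a fraction of its load imbalance with its neighbours, so load spreads like heat over the topology. The topology, loads, and diffusion coefficient are illustrative; the paper's generalized diffusion strategy for the “combat cloud-fog” network is not reproduced.

```python
import numpy as np

adj = np.array([[0, 1, 1, 0],      # 4 fog nodes on a ring
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)

load = np.array([40.0, 5.0, 10.0, 1.0])   # initial task load per node
alpha = 0.25                               # diffusion coefficient

for _ in range(50):
    # each node i moves alpha * (load_j - load_i) to/from every neighbour j
    load = load + alpha * (adj @ load - adj.sum(axis=1) * load)

print("balanced load:", load.round(2))     # converges to the average load
```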

2018-10-26
Rauf, A., Shaikh, R. A., Shah, A..  2018.  Security and privacy for IoT and fog computing paradigm. 2018 15th Learning and Technology Conference (L&T). :96–101.

In the past decade, the revolution in miniaturization (microprocessors, batteries, cameras, etc.) and the manufacturing of new types of sensors have resulted in a new regime of applications based on smart objects, called IoT. The majority of such applications or services aim to ease human life and/or to set up efficient processes in automated environments. However, this convenience comes with new challenges related to data security and human privacy. The objects in IoT are resource-constrained devices and cannot implement a fool-proof security framework. These end devices work like eyes and ears to interact with the physical world and collect data for analytics to make expedient decisions. The storage and analysis of the collected data are done remotely using cloud computing. The transfer of data from IoT to the computing clouds can introduce privacy issues and network delays. Some applications need a real-time decision and cannot tolerate the delays and jitter in the network. Here, edge computing or fog computing plays its role in settling these issues by providing cloud-like facilities near the end devices. In this paper, we discuss IoT, fog computing, the relationship between IoT and fog computing, their security issues, and the solutions proposed by different researchers. We summarize the attack surface related to each layer of this paradigm, which will help in proposing new security solutions to escalate its acceptability among end users. We also propose a risk-based trust management model for the smart healthcare environment to cope with security and privacy-related issues in this highly unpredictable heterogeneous ecosystem.

He, S., Cheng, B., Wang, H., Xiao, X., Cao, Y., Chen, J..  2018.  Data security storage model for fog computing in large-scale IoT application. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :39–44.

With the scale of big data increasing in large-scale IoT applications, fog computing is a recent computing paradigm that extends cloud computing towards the edge of the network in the field. A large number of storage resources are placed on the edge of the network to form a geographically distributed storage system in the fog computing system (FCS). It is used to store the big data collected by the fog computing nodes and to reduce the management costs of moving big data to the cloud. However, the storage of fog nodes at the edge of the network faces direct attacks from external threats. In order to improve the security of fog node storage in the FCS, in this paper we propose a data security storage model for fog computing (FCDSSM) to realize the integration of storage and security management in large-scale IoT applications. We design the FCDSSM system architecture in detail and give designs for its multi-level trusted domain, cooperative working mechanism, data synchronization, and key management strategy. Experimental results show that the loss of computing and communication performance caused by data security storage in the FCDSSM is within an acceptable range, and that the FCDSSM has good scalability. It can be adapted to secure big data storage in large-scale IoT applications.

2018-09-28
Lu, Z., Shen, H..  2017.  A New Lower Bound of Privacy Budget for Distributed Differential Privacy. 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). :25–32.

Distributed data aggregation via summation (counting) helps us learn the insights behind raw data. However, such computation suffers from a high privacy risk of malicious collusion attacks, in which colluding adversaries infer a victim's private data from the gaps between the aggregation outputs and their own source data. Among the solutions against such collusion attacks, Distributed Differential Privacy (DDP) shows a significant privacy-preservation effect. Specifically, a DDP scheme guarantees global differential privacy (the presence or absence of any data curator barely impacts the aggregation outputs) by ensuring local differential privacy at the end of each data curator. To guarantee the overall privacy performance of a distributed data aggregation system against malicious collusion attacks, part of the existing work on such DDP schemes aims to provide an estimated lower bound on the privacy budget for global differential privacy. However, there are two main problems: low data utility from using a large global function sensitivity, and an unknown privacy guarantee when the aggregation sensitivity of the whole system is less than the sum of the data curators' aggregation sensitivities. To address these problems while ensuring distributed differential privacy, we provide a new lower bound on the privacy budget, which works with an unconditional aggregation sensitivity of the whole distributed system. Moreover, we study the performance of our privacy bound in different data-update scenarios. Both theoretical and experimental evaluations show that our privacy bound offers better global privacy performance than the existing work.
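For background on the distributed-noise idea underlying DDP schemes (this is not the paper's new bound), the sketch below has each of n curators add a small Gamma-distributed noise share to its local value; because the Gamma distribution is infinitely divisible, the shares sum to a single Laplace(sensitivity/epsilon) perturbation of the global aggregate, so no individual curator needs to add full Laplace noise. The parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, epsilon, sensitivity = 20, 0.5, 1.0
scale = sensitivity / epsilon

local_values = rng.integers(0, 100, size=n).astype(float)

def noisy_local_sum(value):
    # Gamma(1/n, scale) - Gamma(1/n, scale): the n shares sum to Laplace(0, scale)
    share = rng.gamma(1.0 / n, scale) - rng.gamma(1.0 / n, scale)
    return value + share

aggregate = sum(noisy_local_sum(v) for v in local_values)
print("true sum:", local_values.sum(), "noisy aggregate:", round(aggregate, 2))
```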

2018-09-12
Weintraub, E..  2017.  Estimating Target Distribution in security assessment models. 2017 IEEE 2nd International Verification and Security Workshop (IVSW). :82–87.

Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is impacted by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work is aimed at defining a formula which enables measuring the attacked components' distribution quantitatively. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual real risks, and build their configuration while taking into consideration the risks impacted by components' distribution. The formula is demonstrated as part of a security continuous monitoring system.
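The abstract does not reproduce the proposed formula. Purely as a hypothetical illustration of what a configuration-based, quantitative Target Distribution measure could look like (the symbols R(c), N, and v_i are assumptions, not the paper's notation):

```latex
% Hypothetical illustration only -- not the formula proposed in the paper.
% Share of networked components exposed through the exploited component c,
% weighted by their asset values v_i.
TD(c) \;=\; \frac{\sum_{i \in R(c)} v_i}{\sum_{i \in N} v_i},
\qquad R(c) \subseteq N \text{ the set of components reachable from } c .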

2018-08-23
Vora, Keval, Tian, Chen, Gupta, Rajiv, Hu, Ziang.  2017.  CoRAL: Confined Recovery in Distributed Asynchronous Graph Processing. Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems. :223–236.
Existing distributed asynchronous graph processing systems employ checkpointing to capture globally consistent snapshots and roll back all machines to the most recent checkpoint to recover from machine failures. In this paper we argue that recovery in distributed asynchronous graph processing does not require the entire execution state to be rolled back to a globally consistent state, due to the relaxed asynchronous execution semantics. We define the properties required of the recovered state for it to be usable for correct asynchronous processing and develop CoRAL, a lightweight checkpointing and recovery algorithm. First, this algorithm carries out confined recovery that rolls back only the graph execution states of the failed machines to effect recovery. Second, it relies upon lightweight checkpoints that capture locally consistent snapshots with a reduced peak network bandwidth requirement. Our experiments using real-world graphs show that our technique recovers from failures and finishes processing 1.5x to 3.2x faster than the traditional asynchronous checkpointing and recovery mechanism when failures impact 1 to 6 machines of a 16-machine cluster. Moreover, capturing locally consistent snapshots significantly reduces the intermittent high peak bandwidth usage required to save the snapshots: the average reduction in 99th percentile bandwidth ranges from 22% to 51% while 1 to 6 snapshot replicas are being maintained.
Ning, F., Wen, Y., Shi, G., Meng, D..  2017.  Efficient tamper-evident logging of distributed systems via concurrent authenticated tree. 2017 IEEE 36th International Performance Computing and Communications Conference (IPCCC). :1–9.
Secure logging, as an indispensable part of any secure system in practice, is well understood by both academia and industry. However, providing security for audit logs on an untrusted machine in a large distributed system is still a challenging task. The emergence and wide availability of log management tools prompted plenty of work in the security community that allows clients or auditors to verify the integrity of log data. Most recent solutions to this problem focus on space efficiency or on the public verifiability of forward security. Unfortunately, existing secure audit logging schemes have significant performance limitations that make them impractical for real-time large-scale distributed applications: existing cryptographic hashing is computationally expensive for logging in task-intensive or resource-constrained systems, especially for proving individual log events, while the Merkle-tree approach has fundamental limitations when faced with highly concurrent, large-scale log streams due to its serially appending feature. The verification step of the Merkle-tree-based approach, which requires a logarithmic number of hash computations, is becoming a bottleneck to improving overall performance. There is a huge gap between the flux of log streams collected and the computational efficiency of integrity verification in large-scale distributed systems. In this work, we develop a novel scheme whose performance compares favorably with existing solutions. The performance guarantees that we achieve stem from a novel data structure called a concurrent authenticated tree, which allows log events to be appended concurrently and removes the need to wait for append operations to complete sequentially. We implement a prototype using chameleon hashing based on the discrete log and a Merkle history tree. A comprehensive experimental evaluation of the proposed and existing approaches is used to validate the analytical models and verify our claims. The results demonstrate that our proposed scheme, verifying in a concurrent way, is significantly more efficient than the previous tree-based approach.
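For readers unfamiliar with the baseline, the sketch below shows the standard Merkle-tree audit-path verification whose logarithmic hash count the authors identify as a bottleneck; the paper's concurrent authenticated tree with chameleon hashing is not reproduced here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                      # duplicate last node on odd-sized levels
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def audit_path(levels, index):
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2 == 0))
        index //= 2
    return path

def verify(leaf, path, root):
    digest = h(leaf)
    for sibling, leaf_is_left in path:        # O(log n) hash computations
        digest = h(digest + sibling) if leaf_is_left else h(sibling + digest)
    return digest == root

logs = [b"event-%d" % i for i in range(6)]
levels = build_levels(logs)
root = levels[-1][0]
print(verify(logs[3], audit_path(levels, 3), root))   # True
```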
2018-06-11
Ding, W., Wang, J., Lu, K., Zhao, R., Wang, X., Zhu, Y..  2017.  Optimal Cache Management and Routing for Secure Content Delivery in Information-Centric Networks with Network Coding. 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC). :267–274.

Information-Centric Networking (ICN) is one of the most promising network architectures for handling the problem of rapidly increasing data traffic because it allows in-network caching. ICNs with Linear Network Coding (LNC) can greatly improve the performance of content caching and delivery. In this paper, we propose a Secure Content Caching and Routing (SCCR) framework based on Software-Defined Networking (SDN) to find the optimal cache management and routing for secure content delivery, which aims to first minimize the total cost of cache and bandwidth consumption and then minimize the usage of random chunks to guarantee information-theoretic security (ITS). Specifically, we first define the SCCR problem and introduce the main ideas of the SCCR framework. Next, we formulate the SCCR problem as two Linear Programming (LP) formulations and design the SCCR algorithms based on them to optimally solve the SCCR problem. Finally, extensive simulations are conducted to evaluate the proposed SCCR framework and algorithms.
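To give a feel for an LP of this general shape, here is a hedged toy cache-placement relaxation solved with SciPy; the paper's exact LP formulations, and its second objective on random chunks for information-theoretic security, are not reproduced. x[i, j] is the fraction of content i's demand served from the cache at node j; the remainder is fetched from the origin at a higher cost.

```python
import numpy as np
from scipy.optimize import linprog

I, J = 2, 3                               # contents, cache nodes
demand = np.array([10.0, 4.0])            # requests per content
size = np.array([1.0, 2.0])               # storage units per content
cap = np.array([2.0, 1.0, 2.0])           # cache capacity per node
cache_cost = np.array([1.0, 0.5, 0.8])    # cost per unit served from node j
origin_cost = 3.0                          # cost per unit served from the origin

# minimize sum_ij demand_i * (cache_cost_j - origin_cost) * x_ij  (+ constant origin term)
c = (demand[:, None] * (cache_cost[None, :] - origin_cost)).ravel()

A_ub, b_ub = [], []
for i in range(I):                         # serve at most 100% of each content from caches
    row = np.zeros(I * J); row[i * J:(i + 1) * J] = 1.0
    A_ub.append(row); b_ub.append(1.0)
for j in range(J):                         # respect each node's cache capacity
    row = np.zeros(I * J); row[j::J] = size
    A_ub.append(row); b_ub.append(cap[j])

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=(0, 1), method="highs")
print(res.x.reshape(I, J).round(2))
```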

2018-05-09
Witt, M., Jansen, C., Krefting, D., Streit, A..  2017.  Fine-Grained Supervision and Restriction of Biomedical Applications in Linux Containers. 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :813–822.

Applications for the analysis of biomedical data are complex programs and often consist of multiple components. Reuse of existing solutions from external code repositories or program libraries is common in algorithm development. To ease reproducibility as well as the transfer of algorithms and required components into distributed infrastructures, Linux containers are increasingly used in those environments that are at least partly connected to the internet. However, concerns about untrusted applications remain and are of high interest when medical data is processed. Additionally, the portability of the containers needs to be ensured by using only security technologies that do not require additional kernel modules. In this paper we describe measures and a solution to secure the execution of an example biomedical application for the normalization of multidimensional biosignal recordings. This application, the required runtime environment, and the security mechanisms are installed in a Docker-based container. A fine-grained restricted environment (sandbox) for the execution of the application and the prevention of unwanted behaviour is created inside the container. The sandbox is based on the filtering of system calls, as they are required to interact with the operating system to access potentially restricted resources, e.g. the filesystem or the network. Due to the low-level character of system calls, the creation of an adequate rule set for the sandbox is challenging. Therefore the presented solution includes a monitoring component to collect the data required for defining the rules of the application sandbox. Performance evaluation of the application execution shows no significant impact of the resulting sandbox, while detailed monitoring may increase runtime by over 420%.
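As a hedged sketch of the monitoring step only, the snippet below parses an strace log of the contained application and prints the set of system calls it actually used, as raw input for an allowlist-style sandbox profile (e.g. a seccomp filter). The trace file path is an assumption; producing it (for example with strace -f -o trace.log) and turning the list into an enforced rule set are outside this snippet and differ from the paper's own monitoring component.

```python
import re
import sys

# Matches the syscall name at the start of an strace line, e.g.
# "[pid 1234] openat(AT_FDCWD, \"/etc/ld.so.cache\", O_RDONLY) = 3"
SYSCALL = re.compile(r"^(?:\[pid\s+\d+\]\s+)?(\w+)\(")

def observed_syscalls(trace_path):
    calls = set()
    with open(trace_path) as trace:
        for line in trace:
            m = SYSCALL.match(line.strip())
            if m:
                calls.add(m.group(1))
    return sorted(calls)

if __name__ == "__main__":
    for name in observed_syscalls(sys.argv[1] if len(sys.argv) > 1 else "trace.log"):
        print(name)
```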

2018-02-28
Brodeur, S., Rouat, J..  2017.  Optimality of inference in hierarchical coding for distributed object-based representations. 2017 15th Canadian Workshop on Information Theory (CWIT). :1–5.

Hierarchical approaches to representation learning have the ability to encode relevant features at multiple scales or levels of abstraction. However, most hierarchical approaches exploit only the last level in the hierarchy, or provide a multiscale representation that holds a significant amount of redundancy. We argue that removing redundancy across the multiple levels of abstraction is important for an efficient representation of compositionality in object-based representations. Taking the perspective of feature learning as a data compression operation, we propose a new greedy inference algorithm for hierarchical sparse coding. Convolutional matching pursuit with an L0-norm constraint is used to encode the input signal into compact and non-redundant codes distributed across the levels of the hierarchy. Simple and complex synthetic datasets of temporal signals were created to evaluate the encoding efficiency and compare against the theoretical lower bounds on the information rate for those signals. Empirical evidence has shown that the algorithm is able to infer near-optimal codes for simple signals. However, it failed for complex signals with strong overlap between objects. We explain the inefficiency of convolutional matching pursuit that occurs in such cases. This brings new insights into the NP-hard optimization problem of using an L0-norm constraint to infer optimally compact and distributed object-based representations.
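A minimal single-level numpy sketch of convolutional matching pursuit with an L0 constraint is shown below: greedily pick the (filter, position) pair with the largest correlation, record its coefficient, subtract its contribution, and stop after k atoms. The dictionary, signal, and k are toy choices, and the hierarchical, multi-level encoding of the paper is not reproduced.

```python
import numpy as np

def conv_matching_pursuit(signal, filters, k):
    residual = signal.astype(float).copy()
    code = []                                           # (filter idx, position, coefficient)
    for _ in range(k):                                  # L0 constraint: at most k atoms
        best = None
        for fi, f in enumerate(filters):
            corr = np.correlate(residual, f, mode="valid")
            pos = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[pos]) > abs(best[2]):
                best = (fi, pos, corr[pos])
        fi, pos, coeff = best
        residual[pos:pos + len(filters[fi])] -= coeff * filters[fi]
        code.append(best)
    return code, residual

rng = np.random.default_rng(1)
atom = np.array([1.0, 2.0, 1.0]); atom /= np.linalg.norm(atom)   # unit-norm filter
signal = np.zeros(64); signal[10:13] += 3 * atom; signal[40:43] -= 2 * atom
signal += 0.05 * rng.standard_normal(64)

code, residual = conv_matching_pursuit(signal, [atom], k=2)
print(code, "residual energy:", round(float(residual @ residual), 4))
```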

2018-02-21
Lyu, L., Law, Y. W., Jin, J., Palaniswami, M..  2017.  Privacy-Preserving Aggregation of Smart Metering via Transformation and Encryption. 2017 IEEE Trustcom/BigDataSE/ICESS. :472–479.

This paper proposes a novel privacy-preserving smart metering system for aggregating distributed smart meter data. It addresses two important challenges: (i) individual users wish to publish sensitive smart metering data for specific purposes, and (ii) an untrusted aggregator aims to make queries on the aggregate data. We handle these challenges using two main techniques. First, we propose the Fourier Perturbation Algorithm (FPA) and the Wavelet Perturbation Algorithm (WPA), which utilize Fourier/wavelet transformation and distributed differential privacy (DDP) to provide privacy for the released statistics with provable sensitivity and error bounds. Second, we leverage an exponential ElGamal encryption mechanism to enable secure communications between the users and the untrusted aggregator. Standard differential privacy techniques perform poorly for time-series data, as they result in Θ(n) noise to answer n queries, rendering the answers practically useless if n is large. Our proposed distributed differential privacy mechanism relies on Gaussian principles to generate distributed noise, which guarantees differential privacy for each user with O(1) error, and provides computational simplicity and scalability. Compared with the Gaussian Perturbation Algorithm (GPA), which adds distributed Gaussian noise to the original data, the experimental results demonstrate the superiority of the proposed FPA and WPA, which add noise to the transformed coefficients.
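To illustrate the transform-then-perturb idea behind FPA (a hedged sketch, not the authors' exact algorithm or the DDP/ElGamal layers), the snippet below compresses a meter-reading series to its first k DFT coefficients, adds Laplace noise to those k coefficients only, and reconstructs; because noise is calibrated to k rather than n, the per-reading error stays small. The sensitivity value and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def fpa(readings, k, epsilon, sensitivity=1.0):
    n = len(readings)
    coeffs = np.fft.rfft(readings)
    kept = coeffs[:k]
    scale = sensitivity * np.sqrt(k) / epsilon            # illustrative calibration
    noisy = kept + rng.laplace(0.0, scale, k) + 1j * rng.laplace(0.0, scale, k)
    padded = np.zeros_like(coeffs)
    padded[:k] = noisy
    return np.fft.irfft(padded, n)

readings = 5 + np.sin(np.linspace(0, 6 * np.pi, 96)) + 0.2 * rng.standard_normal(96)
private = fpa(readings, k=8, epsilon=1.0)
print("mean abs error:", round(float(np.abs(private - readings).mean()), 3))
```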

2017-12-20
Schulz, A., Kotson, M., Meiners, C., Meunier, T., O’Gwynn, D., Trepagnier, P., Weller-Fahy, D..  2017.  Active Dependency Mapping: A Data-Driven Approach to Mapping Dependencies in Distributed Systems. 2017 IEEE International Conference on Information Reuse and Integration (IRI). :84–91.

We introduce Active Dependency Mapping (ADM), a method for establishing dependency relations among a set of interdependent services. The approach is to artificially degrade network performance to infer which assets on the network support a particular process. Artificial degradation of the network environment could be transparent to users; run continuously, it could identify dependencies that are rare or occur only at certain timescales. A useful byproduct of this dependency analysis is a quantitative assessment of the resilience and robustness of the system. This technique is intriguing for hardening both enterprise networks and cyber-physical systems. We present a proof-of-concept experiment executed on a real-world set of interrelated software services. We assess the efficacy of the approach, discuss current limitations, and suggest options for future development of ADM.
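A hedged sketch of the ADM inference loop is shown below. The callables throttle_link() and measure_health() are hypothetical stand-ins for the network-degradation and service-probing machinery (e.g. traffic-shaping rules and application health checks); only the inference logic is illustrated, not the authors' implementation.

```python
def map_dependencies(target_service, candidate_assets,
                     throttle_link, measure_health, degradation_threshold=0.2):
    """Return the assets whose artificial degradation measurably hurts target_service."""
    baseline = measure_health(target_service)
    dependencies = []
    for asset in candidate_assets:
        with throttle_link(asset):                   # temporarily degrade traffic to this asset
            degraded = measure_health(target_service)
        relative_drop = (baseline - degraded) / baseline
        if relative_drop > degradation_threshold:    # performance fell noticeably -> dependency
            dependencies.append((asset, round(relative_drop, 3)))
    return dependencies
```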

2017-12-12
Rezaeibagha, F., Mu, Y..  2017.  Access Control Policy Combination from Similarity Analysis for Secure Privacy-Preserved EHR Systems. 2017 IEEE Trustcom/BigDataSE/ICESS. :386–393.

In distributed systems, there is often a need to combine heterogeneous access control policies to offer more comprehensive services to users at the local or national level. A large-scale healthcare system is usually distributed across a computer network and might require sophisticated access control policies to protect the system. Therefore, integrating electronic healthcare systems may be important for providing comprehensive care for patients while preserving patients' privacy and data security. However, there are major impediments in healthcare systems concerning access control policy implementations that are neither well-defined nor flexible, hindering progress towards secure integrated systems. In this paper, we introduce an access control policy combination framework for EHR systems that preserves patients' privacy and ensures data security. We achieve our goal through an access control mechanism which handles multiple access control policies through a similarity analysis phase. In that phase, we evaluate different XACML policies to decide whether or not a policy combination is applicable. We provide a case study to show the applicability of our proposed approach based on XACML. Our results can be applied to electronic health record (EHR) access control policies, fostering interoperability and scalability among healthcare providers while preserving patients' privacy and data security.
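As a hedged illustration of one possible similarity check before combining policies (the paper's analysis over full XACML documents is more involved), the snippet below extracts (subject, resource, action) targets from each XACML-like rule set and compares them with a Jaccard score.

```python
def rule_targets(policy):
    # policy: iterable of dicts with "subject", "resource", "action" keys
    return {(r["subject"], r["resource"], r["action"]) for r in policy}

def jaccard_similarity(policy_a, policy_b):
    a, b = rule_targets(policy_a), rule_targets(policy_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

hospital = [{"subject": "doctor", "resource": "ehr", "action": "read"},
            {"subject": "nurse", "resource": "ehr", "action": "read"}]
clinic = [{"subject": "doctor", "resource": "ehr", "action": "read"},
          {"subject": "doctor", "resource": "ehr", "action": "write"}]

score = jaccard_similarity(hospital, clinic)
print("similarity:", score, "-> combine" if score >= 0.3 else "-> keep separate")
```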

Zhou, G., Huang, J. X..  2017.  Modeling and Learning Distributed Word Representation with Metadata for Question Retrieval. IEEE Transactions on Knowledge and Data Engineering. 29:1226–1239.

Community question answering (cQA) has become an important issue due to the popularity of cQA archives on the Web. This paper focuses on addressing the lexical gap problem in question retrieval. Question retrieval in cQA archives aims to find existing questions that are semantically equivalent or relevant to the queried questions. However, the lexical gap problem brings a new challenge for question retrieval in cQA. In this paper, we propose to model and learn distributed word representations with metadata of category information within cQA pages for question retrieval, using two novel category-powered models. One is a basic category-powered model called MB-NET and the other is an enhanced category-powered model called ME-NET, which can better learn the distributed word representations and alleviate the lexical gap problem. To deal with the variable size of word representation vectors, we employ the Fisher kernel framework to transform them into fixed-length vectors. Experimental results on large-scale English and Chinese cQA data sets show that our proposed approaches can significantly outperform state-of-the-art retrieval models for question retrieval in cQA. Moreover, we further evaluate our approaches in large-scale automatic evaluation experiments. The evaluation results show that promising and significant performance improvements can be achieved.

2017-11-03
Xu, X., Pautasso, C., Zhu, L., Gramoli, V., Ponomarev, A., Tran, A. B., Chen, S..  2016.  The Blockchain as a Software Connector. 2016 13th Working IEEE/IFIP Conference on Software Architecture (WICSA). :182–191.

Blockchain is an emerging technology for decentralized and transactional data sharing across a large network of untrusted participants. It enables new forms of distributed software architectures, where components can find agreement on their shared states without trusting a central integration point or any particular participating component. Considering the blockchain as a software connector helps make explicit the important architectural considerations regarding the resulting performance and quality attributes (for example, security, privacy, scalability and sustainability) of the system. Based on our experience in several projects using blockchain, in this paper we provide rationales to support the architectural decision on whether to employ a decentralized blockchain as opposed to other software solutions, like traditional shared data storage. Additionally, we explore specific implications of using the blockchain as a software connector, including design trade-offs regarding quality attributes.

2015-05-06
Hammi, B., Khatoun, R., Doyen, G..  2014.  A Factorial Space for a System-Based Detection of Botcloud Activity. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Today, beyond legitimate usage, the numerous advantages of cloud computing are exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue, since it strongly increases the power of distributed massive attacks while involving the responsibility of cloud service providers, who do not own appropriate solutions. In this paper, we present an original approach that enables source-based detection of UDP-flood DDoS attacks based on a distributed system behavior analysis. Based on a principal component analysis, our contribution consists in: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud activity, and (3) among several legitimate activities, using this factorial space to enable botcloud detection.
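To illustrate the PCA idea in broad strokes (a hedged sketch, not the paper's dataset or exact detector), the snippet below projects per-VM system metrics into a principal-component "factorial space" learned from legitimate activity and flags samples that sit far from the legitimate cloud in that space, using a Hotelling-style T² score; the synthetic data and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Synthetic system metrics per sample: CPU %, packets/s, disk MB/s.
legit = rng.normal(loc=[20.0, 5.0, 1.0], scale=[5.0, 2.0, 0.5], size=(500, 3))
botcloud = rng.normal(loc=[60.0, 400.0, 1.0], scale=[10.0, 50.0, 0.5], size=(20, 3))

pca = PCA(n_components=2).fit(legit)          # factorial space of legitimate behaviour

def t2_score(samples):
    # Distance from the legitimate cloud inside the learned factorial space.
    z = pca.transform(samples)
    return (z ** 2 / pca.explained_variance_).sum(axis=1)

threshold = np.percentile(t2_score(legit), 99)
flagged = t2_score(botcloud) > threshold
print(f"flagged {flagged.sum()} of {len(botcloud)} suspected botcloud samples")
```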

Schaefer, J..  2014.  A semantic self-management approach for service platforms. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-4.

Future personal living environments feature an increasing number of convenience-, health- and security-related applications provided by distributed services, which do not only support users but also require tasks such as installation, configuration and continuous administration. These tasks are becoming tiresome, complex and error-prone. One way to escape this situation is to enable service platforms to configure and manage themselves. The approach presented here extends services with semantic descriptions to enable platform-independent, autonomous service level management using model-driven architecture and autonomic computing concepts. It has been implemented as an OSGi-based semantic autonomic manager, whose concept, prototypical implementation and evaluation are presented.