Biblio

Found 1032 results

Filters: First Letter Of Last Name is V
2018-12-10
Mathas, Christos M., Segou, Olga E., Xylouris, Georgios, Christinakis, Dimitris, Kourtis, Michail-Alexandros, Vassilakis, Costas, Kourtis, Anastasios.  2018.  Evaluation of Apache Spot's Machine Learning Capabilities in an SDN/NFV Enabled Environment. Proceedings of the 13th International Conference on Availability, Reliability and Security. :52:1–52:10.

Software Defined Networking (SDN) and Network Function Virtualisation (NFV) are transforming modern networks towards a service-oriented architecture. At the same time, the cybersecurity industry is rapidly adopting Machine Learning (ML) algorithms to improve detection and mitigation of complex attacks. Traditional intrusion detection systems perform signature-based detection, based on well-known malicious traffic patterns that signify potential attacks. The main drawback of this method is that attack patterns need to be known in advance and signatures must be preconfigured. Hence, typical systems fail to detect a zero-day attack or an attack with unknown signature. This work considers the use of machine learning for advanced anomaly detection, and specifically deploys the Apache Spot ML framework on an SDN/NFV-enabled testbed running cybersecurity services as Virtual Network Functions (VNFs). VNFs are used to capture traffic for ingestion by the ML algorithm and apply mitigation measures in case of a detected anomaly. Apache Spot utilises Latent Dirichlet Allocation to identify anomalous traffic patterns in Netflow, DNS and proxy data. The overall performance of Apache Spot is evaluated by deploying Denial of Service (Slowloris, BoNeSi) and a Data Exfiltration attack (iodine).
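The anomaly-detection idea above — learn a probability model over discretized flow records and flag flows the model finds unlikely — can be sketched in a few lines. This is a deliberately simplified stand-in (a smoothed token-frequency model rather than Apache Spot's actual Latent Dirichlet Allocation pipeline), and all names, fields, and thresholds here are illustrative:

```python
# Simplified sketch of probability-based anomaly scoring over flow records.
# Each flow is reduced to a discrete "word" (protocol, destination port, byte
# bucket); flows whose words are rare under the fitted distribution score as
# anomalous. This is an illustration of the idea, not Apache Spot's code.
import math
from collections import Counter

def flow_word(proto, dst_port, nbytes):
    """Map a flow record to a token, bucketing byte counts by order of magnitude."""
    bucket = int(math.log10(nbytes)) if nbytes > 0 else 0
    return (proto, dst_port, bucket)

def fit(flows):
    """Estimate token probabilities from a training window (add-one smoothing)."""
    counts = Counter(flow_word(*f) for f in flows)
    total = sum(counts.values())
    vocab = len(counts) + 1
    return lambda f: (counts[flow_word(*f)] + 1) / (total + vocab)

def anomalies(flows, model, threshold):
    """Return flows whose probability under the model falls below threshold."""
    return [f for f in flows if model(f) < threshold]

# Mostly HTTPS and DNS traffic, plus one large flow to an unusual port
# (e.g. a DNS-tunnelling exfiltration attempt, as with iodine above).
baseline = [("tcp", 443, 1500)] * 90 + [("udp", 53, 120)] * 9
model = fit(baseline)
suspect = anomalies(baseline + [("udp", 5353, 9_000_000)], model, 0.02)
```

In the SDN/NFV setting described above, the `baseline` window would come from a traffic-capture VNF, and a positive detection would trigger a mitigation VNF.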

Häuslschmid, Renate, von Bülow, Max, Pfleging, Bastian, Butz, Andreas.  2017.  Supporting Trust in Autonomous Driving. Proceedings of the 22nd International Conference on Intelligent User Interfaces. :319–329.
Autonomous cars will likely hit the market soon, but trust in such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car's interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations, overlaid on a driving scene, in a user study: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car's indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust; however, we did not find a significant difference between the chauffeur and the baseline.
2018-12-03
Chakrabarti, Somnath, Leslie-Hurd, Rebekah, Vij, Mona, McKeen, Frank, Rozas, Carlos, Caspi, Dror, Alexandrovich, Ilya, Anati, Ittai.  2017.  Intel® Software Guard Extensions (Intel® SGX) Architecture for Oversubscription of Secure Memory in a Virtualized Environment. Proceedings of the Hardware and Architectural Support for Security and Privacy. :7:1–7:8.

As workloads and data move to the cloud, it is essential that software writers are able to protect their applications from untrusted hardware, systems software, and co-tenants. Intel® Software Guard Extensions (SGX) enables a new mode of execution that is protected from attacks in such an environment with strong confidentiality, integrity, and replay protection guarantees. Though SGX supports memory oversubscription via paging, virtualizing the protected memory presents a significant challenge to Virtual Machine Monitor (VMM) writers and comes with a high performance overhead. This paper introduces SGX Oversubscription Extensions that add additional instructions and virtualization support to the SGX architecture so that cloud service providers can oversubscribe secure memory in a less complex and more performant manner.

Bernin, Arne, Müller, Larissa, Ghose, Sobin, von Luck, Kai, Grecos, Christos, Wang, Qi, Vogt, Florian.  2017.  Towards More Robust Automatic Facial Expression Recognition in Smart Environments. Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments. :37–44.

In this paper, we provide insights towards achieving more robust automatic facial expression recognition in smart environments, based on our benchmark with three labeled facial expression databases. These databases are selected to test for desktop, 3D, and smart environment application scenarios. This work is meant to provide a neutral comparison and guidelines for developers and researchers interested in integrating facial emotion recognition technologies into their applications, and in understanding their limitations, adaptations, and enhancement strategies. We also introduce and compare three different metrics for finding the primary expression in a time window of a displayed emotion. In addition, we outline facial emotion recognition limitations and enhancements for smart environments and non-frontal setups. By providing our comparison and enhancements, we hope to build a bridge from affective computing research and solution providers to application developers who want to enhance new applications by including emotion-based user modeling.

Faria, Diego Resende, Vieira, Mario, Faria, Fernanda C.C..  2017.  Towards the Development of Affective Facial Expression Recognition for Human-Robot Interaction. Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments. :300–304.

Affective facial expression is a key feature of non-verbal behavior and is considered a symptom of an internal emotional state. Emotion recognition plays an important role in social communication: human-human and also human-robot interaction. This work aims at the development of a framework able to recognise human emotions through facial expression for human-robot interaction. Simple features based on facial landmark distances and angles are extracted to feed a dynamic probabilistic classification framework. The public online dataset Karolinska Directed Emotional Faces (KDEF) [12] is used to learn seven different emotions (angry, fearful, disgusted, happy, sad, surprised, and neutral) performed by seventy subjects. Offline and on-the-fly tests were carried out: leave-one-out cross-validation tests using the dataset, and on-the-fly tests during human-robot interactions. Preliminary results show that the proposed framework can correctly recognise human facial expressions, with potential to be used in human-robot interaction scenarios.

Deaney, Waleed, Venter, Isabella, Ghaziasgar, Mehrdad, Dodds, Reg.  2017.  A Comparison of Facial Feature Representation Methods for Automatic Facial Expression Recognition. Proceedings of the South African Institute of Computer Scientists and Information Technologists. :10:1–10:10.

A machine translation system that could convert South African Sign Language video to English audio or text, and vice versa, in real time would be immensely beneficial to the Deaf and hard of hearing. Sign language gestures are characterised and expressed by five distinct parameters: hand location; hand orientation; hand shape; hand movement; and facial expressions. The aim of this research is to recognise facial expressions and to compare the following feature descriptors: local binary patterns (LBP); compound local binary patterns (CLBP); and histograms of oriented gradients (HOG) in two testing environments, a subset of the BU3D-FE dataset and the CK+ dataset. The overall accuracy, accuracy across facial expression classes, robustness to test subjects, and the ability to generalise of each feature descriptor within the context of automatic facial expression recognition are analysed as part of the comparison procedure. Overall, HOG proved to be a more robust feature descriptor than the LBP and CLBP. Furthermore, the CLBP can generally be considered superior to the LBP, but the LBP has greater potential in terms of its ability to generalise.

Aylett, Ruth, Broz, Frank, Ghosh, Ayan, McKenna, Peter, Rajendran, Gnanathusharan, Foster, Mary Ellen, Roffo, Giorgio, Vinciarelli, Alessandro.  2017.  Evaluating Robot Facial Expressions. Proceedings of the 19th ACM International Conference on Multimodal Interaction. :516–517.

This paper outlines a demonstration of the work carried out in the SoCoRo project investigating how far a neuro-typical population recognises facial expressions on a non-naturalistic robot face that are designed to show approval and disapproval. RFID-tagged objects are presented to an Emys robot head (called Alyx) and Alyx reacts to each with a facial expression. Participants are asked to put the object in a box marked 'Like' or 'Dislike'. This study is being extended to include assessment of participants' Autism Quotient using a validated questionnaire as a step towards using a robot to help train high-functioning adults with an Autism Spectrum Disorder in social signal recognition.

2018-11-28
Li, Bo, Roundy, Kevin, Gates, Chris, Vorobeychik, Yevgeniy.  2017.  Large-Scale Identification of Malicious Singleton Files. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :227–238.

We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to singleton-file creation, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features that identifies 92% of the remaining malicious singletons at a 1.4% false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files, which we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.

Bortolameotti, Riccardo, van Ede, Thijs, Caselli, Marco, Everts, Maarten H., Hartel, Pieter, Hofstede, Rick, Jonker, Willem, Peter, Andreas.  2017.  DECANTeR: DEteCtion of Anomalous outbouNd HTTP TRaffic by Passive Application Fingerprinting. Proceedings of the 33rd Annual Computer Security Applications Conference. :373–386.

We present DECANTeR, a system to detect anomalous outbound HTTP communication, which passively extracts fingerprints for each application running on a monitored host. The goal of our system is to detect unknown malware and backdoor communication indicated by unknown fingerprints extracted from a host's network traffic. We evaluate a prototype with realistic data from an international organization and datasets composed of malicious traffic. We show that our system achieves a false positive rate of 0.9% for 441 monitored host machines, an average detection rate of 97.7%, and that it cannot be evaded by malware using simple evasion techniques such as using known browser user agent values. We compare our solution with DUMONT [24], the current state-of-the-art IDS which detects HTTP covert communication channels by focusing on benign HTTP traffic. The results show that DECANTeR outperforms DUMONT in terms of detection rate, false positive rate, and even evasion-resistance. Finally, DECANTeR detects 96.8% of information stealers in our dataset, which shows its potential to detect data exfiltration.
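The passive-fingerprinting idea can be sketched compactly: learn a per-application signature from benign outbound requests, then flag requests whose signature was never seen. The fingerprint below (user-agent string plus the on-the-wire order of header names) is an illustrative reduction, not DECANTeR's actual feature set or decision logic:

```python
# Minimal sketch of passive application fingerprinting on outbound HTTP.
# A fingerprint is (user-agent value, ordered tuple of header names); requests
# with an unseen fingerprint are flagged. Illustration only.
def fingerprint(headers):
    """headers: list of (name, value) pairs in wire order."""
    ua = dict((k.lower(), v) for k, v in headers).get("user-agent", "")
    order = tuple(k.lower() for k, _ in headers)
    return (ua, order)

def train(requests):
    """Build the set of known fingerprints from a benign training window."""
    return {fingerprint(h) for h in requests}

def is_anomalous(headers, known):
    return fingerprint(headers) not in known

benign = [
    [("Host", "example.com"), ("User-Agent", "Firefox/57.0"), ("Accept", "*/*")],
]
known = train(benign)
# Malware spoofing the browser's user-agent but emitting headers in its own order:
malware_req = [("User-Agent", "Firefox/57.0"), ("Host", "evil.test")]
```

Note how the spoofed user-agent alone does not evade the check, which mirrors the evasion-resistance claim above: the header ordering differs from the genuine browser's fingerprint.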

Vaziri, Mandana, Mandel, Louis, Shinnar, Avraham, Siméon, Jérôme, Hirzel, Martin.  2017.  Generating Chat Bots from Web API Specifications. Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. :44–57.

Companies want to offer chat bots to their customers and employees which can answer questions, enable self-service, and showcase their products and services. Implementing and maintaining chat bots by hand costs time and money. Companies typically have web APIs for their services, which are often documented with an API specification. This paper presents a compiler that takes a web API specification written in Swagger and automatically generates a chat bot that helps the user make API calls. The generated bot is self-documenting, using descriptions from the API specification to answer help requests. Unfortunately, Swagger specifications are not always good enough to generate high-quality chat bots. This paper addresses this problem via a novel in-dialogue curation approach: the power user can improve the generated chat bot by interacting with it. The result is then saved back as an API specification. This paper reports on the design and implementation of the chat bot compiler, the in-dialogue curation, and working case studies.
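The compilation step described above can be sketched as a walk over the spec's paths: each operation becomes a bot intent, with help text drawn from the spec's descriptions. The spec fragment and the intent format below are invented for illustration and are far simpler than a real Swagger document:

```python
# Sketch: compiling a toy Swagger/OpenAPI fragment into bot intents.
# Each operation yields a help string (self-documentation, as described above)
# and a call template. Illustration only; not the paper's compiler.
def build_bot(spec):
    intents = {}
    for path, methods in spec["paths"].items():
        for verb, op in methods.items():
            name = op["operationId"]
            params = [p["name"] for p in op.get("parameters", [])]
            intents[name] = {
                "help": f"{op.get('summary', name)} (params: {', '.join(params) or 'none'})",
                # Default args bind verb/path per iteration of the loop.
                "call": lambda args, v=verb, p=path: (v.upper(), p, args),
            }
    return intents

spec = {"paths": {"/pets/{id}": {"get": {
    "operationId": "getPet",
    "summary": "Fetch a pet by id",
    "parameters": [{"name": "id"}],
}}}}
bot = build_bot(spec)
```

A "power user" curation pass, as described above, would amount to editing the `summary` and `parameters` entries through dialogue and serializing the dict back out as the improved specification.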

Vasconcelos, Marisa, Candello, Heloisa, Pinhanez, Claudio, dos Santos, Thiago.  2017.  Bottester: Testing Conversational Systems with Simulated Users. Proceedings of the XVI Brazilian Symposium on Human Factors in Computing Systems. :73:1–73:4.

Recently, conversational agents have attracted the attention of many companies such as IBM, Facebook, Google, and Amazon, which have focused on developing tools or APIs (Application Programming Interfaces) that let developers create their own chatbots. In this paper, we focus on new approaches to evaluating such systems, presenting some recommendations that resulted from evaluating a real chatbot use case. Testing conversational agents or chatbots is not a trivial task due to the multitude of aspects/tasks (e.g., natural language understanding, dialog management, and response generation) which must be considered both separately and in combination. Also, the creation of a general testing tool is a challenge, since evaluation is very sensitive to the application context. Finally, exhaustive testing can be a tedious task for the project team, which creates a need for a tool to perform it automatically. This paper opens a discussion about how conversational-system testing tools are essential to ensure the correct functioning of such systems, as well as to help interface designers develop consistent conversational interfaces.

Hoyle, Roberto, Das, Srijita, Kapadia, Apu, Lee, Adam J., Vaniea, Kami.  2017.  Was My Message Read?: Privacy and Signaling on Facebook Messenger. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. :3838–3842.

Major online messaging services such as Facebook Messenger and WhatsApp are starting to provide users with real-time information about when people read their messages. While useful, the feature has the potential to negatively impact privacy as well as cause concern over access to self. We report on two surveys using Mechanical Turk which looked at senders' (N=402) use of and reactions to the 'message seen' feature, and recipients' (N=316) privacy and signaling behaviors in the face of such visibility. Our findings indicate that senders experience a range of emotions when their message is not read, or is read but not answered immediately. Recipients also engage in various signaling behaviors in the face of visibility, by either replying or not replying immediately.

2018-11-19
Venkatesan, Sridhar, Albanese, Massimiliano, Shah, Ankit, Ganesan, Rajesh, Jajodia, Sushil.  2017.  Detecting Stealthy Botnets in a Resource-Constrained Environment Using Reinforcement Learning. Proceedings of the 2017 Workshop on Moving Target Defense. :75–85.

Modern botnets can persist in networked systems for extended periods of time by operating in a stealthy manner. Despite the progress made in the area of botnet prevention, detection, and mitigation, stealthy botnets continue to pose a significant risk to enterprises. Furthermore, existing enterprise-scale solutions require significant resources to operate effectively, making them impractical. In order to address this important problem in a resource-constrained environment, we propose a reinforcement-learning-based approach to optimally and dynamically deploy a limited number of defensive mechanisms, namely honeypots and network-based detectors, within the target network. The ultimate goal of the proposed approach is to reduce the lifetime of stealthy botnets by maximizing the number of bots identified and taken down through a sequential decision-making process. We provide a proof of concept of the proposed approach and study its performance in a simulated environment. The results show that the proposed approach is promising in protecting against stealthy botnets.
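The core learning loop — repeatedly choose where to place a scarce detector, observe how many bots it catches, and shift future placements toward the most productive locations — can be illustrated with an epsilon-greedy agent on a one-state toy problem. This bandit reduction and its parameters are illustrative only; the paper's formulation is a richer sequential decision process:

```python
# Toy sketch of learned detector placement: an epsilon-greedy agent learns
# which network segment yields the most bot detections when a honeypot is
# placed there. A one-state bandit stands in for a full MDP. Illustration only.
import random

def train_placement(detect_prob, rounds=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(detect_prob)
    value, counts = [0.0] * n, [0] * n
    for _ in range(rounds):
        # Explore with probability eps, otherwise place greedily.
        a = rng.randrange(n) if rng.random() < eps else max(range(n), key=value.__getitem__)
        reward = 1.0 if rng.random() < detect_prob[a] else 0.0  # bot caught this round?
        counts[a] += 1
        value[a] += (reward - value[a]) / counts[a]  # incremental mean update
    return value

# Hypothetical per-segment detection probabilities; segment 2 carries the
# stealthy bot traffic most likely to hit a honeypot placed there.
values = train_placement([0.05, 0.1, 0.6])
best = max(range(3), key=values.__getitem__)
```

After training, the agent concentrates placements on the segment with the highest expected detections, which is the resource-constrained behaviour the paper aims for.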

Charikar, Moses, Steinhardt, Jacob, Valiant, Gregory.  2017.  Learning from Untrusted Data. Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. :47–60.

The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of αn points are drawn from a distribution of interest, and no assumptions are made about the remaining (1 - α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary.
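The semi-verified setting can be made concrete with a one-dimensional mean-estimation sketch: the small trusted sample anchors the estimate, and untrusted points far from the anchor are discarded before averaging. The fixed-radius filter below is a drastic simplification of the paper's algorithms, shown only to convey the role the trusted data plays:

```python
# Sketch of the semi-verified idea: use a small trusted sample to filter a
# larger untrusted one, then pool the survivors. Illustration only; the
# paper's algorithms are considerably more sophisticated.
def semi_verified_mean(trusted, untrusted, radius):
    anchor = sum(trusted) / len(trusted)              # anchor from trusted data
    kept = [x for x in untrusted if abs(x - anchor) <= radius]  # filter outliers
    pooled = trusted + kept
    return sum(pooled) / len(pooled)

trusted = [1.0, 1.2, 0.8]            # small clean sample, mean 1.0
untrusted = [0.9, 1.1, 1.0, 50.0]    # alpha-fraction clean, one adversarial point
est = semi_verified_mean(trusted, untrusted, radius=2.0)
```

The adversarial point (50.0) is rejected by the filter, so the pooled estimate stays near the true mean even though it uses far more untrusted than trusted data.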

Vu, Ly, Bui, Cong Thanh, Nguyen, Quang Uy.  2017.  A Deep Learning Based Method for Handling Imbalanced Problem in Network Traffic Classification. Proceedings of the Eighth International Symposium on Information and Communication Technology. :333–339.

Network traffic classification is an important problem in network traffic analysis. It plays a vital role in many network tasks including quality of service, firewall enforcement, and security. One of the challenging problems in classifying network traffic is the imbalanced property of network data: usually, the amount of traffic in some classes is much higher than in others. In this paper, we propose an application of a deep learning approach to address the imbalanced data problem in network traffic classification. We use a recently proposed deep network for unsupervised learning, the Auxiliary Classifier Generative Adversarial Network, to generate synthesized data samples that balance the minority and majority classes. We tested our method on a well-known network traffic dataset, and the results show that our proposed method achieves better performance than a recently proposed method for handling the imbalanced problem in network traffic classification.
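The rebalancing step — adding synthetic minority-class samples until the classes are the same size — can be sketched with simple interpolation between real minority samples. This SMOTE-like interpolation is a stand-in for the paper's generative (AC-GAN) sampler, used here only because it fits in a few self-contained lines; the feature vectors are invented:

```python
# Sketch of class rebalancing by synthetic minority oversampling: interpolate
# feature-wise between pairs of real minority samples until the minority class
# matches the majority class in size. A simple stand-in for a trained AC-GAN.
import random

def oversample(minority, target_size, seed=0):
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < target_size:
        a, b = rng.sample(minority, 2)
        t = rng.random()  # random point on the segment between a and b
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

major = [(float(i), 0.0) for i in range(100)]    # 100 majority-class flows
minor = [(0.0, 1.0), (0.2, 1.1), (0.1, 0.9)]     # only 3 minority-class flows
balanced_minor = oversample(minor, len(major))
```

A classifier trained on `major + balanced_minor` then sees equally sized classes, which is the effect the generative approach above achieves with learned rather than interpolated samples.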

Lal, Shamit, Garg, Vineet, Verma, Om Prakash.  2017.  Automatic Image Colorization Using Adversarial Training. Proceedings of the 9th International Conference on Signal Processing Systems. :84–88.

The paper presents a fully automatic, end-to-end trainable system to colorize grayscale images. Colorization is a highly under-constrained problem. In order to produce realistic outputs, the proposed approach takes advantage of recent advances in deep learning and generative networks. To achieve plausible colorization, the paper investigates conditional Wasserstein Generative Adversarial Networks (WGAN) [3] as a solution to this problem. Additionally, a loss function consisting of two classification loss components, apart from the adversarial loss learned by the WGAN, is proposed. The first classification loss provides a measure of how much the predicted colored images differ from ground truth. The second classification loss component makes use of ground-truth semantic classification labels in order to learn meaningful intermediate features. Finally, the WGAN training procedure pushes the predictions to the manifold of natural images. The system is validated using a user study and a semantic interpretability test, and achieves results comparable to [1] on the ImageNet dataset [10].

Sempreboni, Diego, Viganò, Luca.  2018.  MMM: May I Mine Your Mind. Companion Proceedings of The Web Conference 2018. :1573–1576.

Consider the following set-up for the plot of a possible future episode of the TV series Black Mirror: human brains can be connected directly to the net, and MiningMind Inc. has developed a technology that merges a reward system with a cryptojacking engine that uses the human brain to mine cryptocurrency (or to carry out some other mining activity). Part of our brain will be committed to cryptographic calculations (mining), leaving the remaining part untouched for everyday operations, i.e., for our brain's normal daily activity. In this short paper, we briefly argue why this set-up might not be so far-fetched after all, and explore the impact that such a technology could have on our lives and our society.

Konoth, Radhesh Krishnan, Vineti, Emanuele, Moonsamy, Veelasha, Lindorfer, Martina, Kruegel, Christopher, Bos, Herbert, Vigna, Giovanni.  2018.  MineSweeper: An In-Depth Look into Drive-by Cryptocurrency Mining and Its Defense. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1714–1730.

A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining (cryptomining) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.

Shoshitaishvili, Yan, Weissbacher, Michael, Dresel, Lukas, Salls, Christopher, Wang, Ruoyu, Kruegel, Christopher, Vigna, Giovanni.  2017.  Rise of the HaCRS: Augmenting Autonomous Cyber Reasoning Systems with Human Assistance. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :347–362.

Software permeates every aspect of our world, from our homes to the infrastructure that provides mission-critical services. As the size and complexity of software systems increase, the number and sophistication of software security flaws increase as well. The analysis of these flaws began as a manual approach, but it soon became apparent that a manual approach alone cannot scale, and that tools were necessary to assist human experts in this task, resulting in a number of techniques and approaches that automated certain aspects of the vulnerability analysis process. Recently, DARPA carried out the Cyber Grand Challenge, a competition among autonomous vulnerability analysis systems designed to push the tool-assisted human-centered paradigm into the territory of complete automation, with the hope that, by removing the human factor, the analysis would be able to scale to new heights. However, when the autonomous systems were pitted against human experts it became clear that certain tasks, albeit simple, could not be carried out by an autonomous system, as they require an understanding of the logic of the application under analysis. Based on this observation, we propose a shift in the vulnerability analysis paradigm, from tool-assisted human-centered to human-assisted tool-centered. In this paradigm, the automated system orchestrates the vulnerability analysis process, and leverages humans (with different levels of expertise) to perform well-defined sub-tasks, whose results are integrated in the analysis. As a result, it is possible to scale the analysis to a larger number of programs, and, at the same time, optimize the use of expensive human resources. 
In this paper, we detail our design for a human-assisted automated vulnerability analysis system, describe its implementation atop an open-sourced autonomous vulnerability analysis system that participated in the Cyber Grand Challenge, and evaluate and discuss the significant improvements that non-expert human assistance can offer to automated analysis approaches.

2018-10-26
Yao, Yuanshun, Viswanath, Bimal, Cryan, Jenna, Zheng, Haitao, Zhao, Ben Y..  2017.  Automated Crowdturfing Attacks and Defenses in Online Review Systems. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1143–1158.

Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on "usefulness" metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.

Vorobiev, E. G., Petrenko, S. A., Kovaleva, I. V., Abrosimov, I. K..  2017.  Analysis of computer security incidents using fuzzy logic. 2017 XX IEEE International Conference on Soft Computing and Measurements (SCM). :369–371.

This work proposes and justifies an algorithm for processing computer security incidents based on the authors' signatures of cyberattacks. Attention is also paid to the SOPKA design pattern, based on the Russian ViPNet technology. Recommendations are made regarding the establishment of the corporate SOPKA segment, which meets the requirements of Presidential Decree No. 31c of January 15, 2013, “On the establishment of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation”, and of the “Concept of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation” approved by the President of the Russian Federation on December 12, 2014, No. K 1274.

Pfister, J., Gomes, M. A. C., Vilela, J. P., Harrison, W. K..  2017.  Quantifying equivocation for finite blocklength wiretap codes. 2017 IEEE International Conference on Communications (ICC). :1–6.

This paper presents a new technique for providing the analysis and comparison of wiretap codes in the small blocklength regime over the binary erasure wiretap channel. A major result is the development of Monte Carlo strategies for quantifying a code's equivocation, which mirrors techniques used to analyze forward error correcting codes. For this paper, we limit our analysis to coset-based wiretap codes, and give preferred strategies for calculating and/or estimating the equivocation in order of preference. We also make several comparisons of different code families. Our results indicate that there are security advantages to using algebraic codes for applications that require small to medium blocklengths.
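For coset (syndrome-coded) wiretap codes over the binary erasure channel, a standard result is that the equivocation for a given erasure pattern E equals the GF(2) rank of the columns of the secrecy matrix H indexed by E, so a Monte Carlo estimate simply averages that rank over random erasure patterns. The tiny H and the sampling scheme below are illustrative, not the codes or estimators evaluated in the paper:

```python
# Monte Carlo sketch of equivocation estimation for a coset wiretap code over
# the BEC: equivocation(E) = GF(2) rank of the erased columns of H, averaged
# over random erasure patterns. Illustration only.
import random

def gf2_rank(columns):
    """Rank over GF(2); each column is an int bitmask of its entries."""
    rank, basis = 0, []
    for col in columns:
        for b in basis:
            col = min(col, col ^ b)  # reduce against the current XOR basis
        if col:
            basis.append(col)
            rank += 1
    return rank

def mc_equivocation(H_cols, erase_prob, trials=2000, seed=1):
    """Average equivocation (in bits) over i.i.d. erasure patterns."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        erased = [c for c in H_cols if rng.random() < erase_prob]
        total += gf2_rank(erased)
    return total / trials

# Toy 2x3 secrecy matrix H with columns (1,0), (0,1), (1,1) as bitmasks.
H_cols = [0b01, 0b10, 0b11]
```

The two degenerate cases are sanity checks: with every symbol erased the equivocation is the full message length (here 2 bits), and with nothing erased it is zero; intermediate erasure probabilities interpolate between them.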

Chaudhry, J., Saleem, K., Islam, R., Selamat, A., Ahmad, M., Valli, C..  2017.  AZSPM: Autonomic Zero-Knowledge Security Provisioning Model for Medical Control Systems in Fog Computing Environments. 2017 IEEE 42nd Conference on Local Computer Networks Workshops (LCN Workshops). :121–127.

The panic among medical control, information, and device administrators is due to a mounting number of high-profile attacks on healthcare facilities. This hostile situation is leading the health informatics industry toward cloud-hoarding of medical data, control flows, and site governance. As different healthcare enterprises opt for cloud-based solutions, it is only a matter of time before fog computing environments are formed. Because of major gaps in reported techniques for fog security administration for health data, i.e. the absence of an overarching certification authority (CA), security provisioning is one of the issues we address in this paper. We propose a security provisioning model (AZSPM) for medical devices in fog environments, and propose that the AZSPM can be built from atomic security components that are dynamically composed. The authenticity of the atomic components is verified, for trust's sake, by counting the processor clock cycles of service execution on the resident hardware platform. This verification is performed in a fully sandboxed environment. The measured execution cycles are matched against the manufacturer's service specifications before the mobile services are forwarded to the healthcare cloudlets. The proposed model is completely novel in fog computing environments. We aim to build a prototype based on this model in a healthcare information system environment.

2018-09-28
van Oorschot, Paul C..  2017.  Science, Security and Academic Literature: Can We Learn from History? Proceedings of the 2017 Workshop on Moving Target Defense. :1–2.
A recent paper (Oakland 2017) discussed science and security research in the context of the government-funded Science of Security movement, and the history and prospects of security as a scientific pursuit. It drew on literature from within the security research community, and mature history and philosophy of science literature. The paper sparked debate in numerous organizations and the security community. Here we consider some of the main ideas, provide a summary list of relevant literature, and encourage discussion within the Moving Target Defense (MTD) sub-community.
2018-09-12
Veloudis, Simeon, Paraskakis, Iraklis, Petsos, Christos.  2017.  An Ontological Framework for Determining the Repercussions of Retirement Actions Targeted at Complex Access Control Policies in Cloud Environments. Companion Proceedings of the10th International Conference on Utility and Cloud Computing. :21–28.
By migrating their data and operations to the cloud, enterprises are able to gain significant benefits in terms of cost savings, increased availability, agility and productivity. Yet, the shared and on-demand nature of the cloud paradigm introduces a new breed of security threats that generally deter stakeholders from relinquishing control of their critical assets to third-party cloud providers. One way to thwart these threats is to instill suitable access control policies into cloud services that protect these assets. Nevertheless, the dynamic nature of cloud environments calls for policies that are able to incorporate a potentially complex body of contextual knowledge. This complexity is further amplified by the interplay that inevitably occurs between the different policies, as well as by the dynamically-evolving nature of an organisation's business and security needs. We argue that one way to tame this complexity is to devise a generic framework that facilitates the governance of policies. This paper presents a particular aspect of such a framework, namely an approach to determining the repercussions that policy retirement actions have on the overall protection of critical assets in the cloud.