Biblio

Found 721 results

Filters: Keyword is Computational modeling
2018-01-23
Nakhla, N., Perrett, K., McKenzie, C..  2017.  Automated computer network defence using ARMOUR: Mission-oriented decision support and vulnerability mitigation. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

Mission assurance requires effective, near-real-time defensive cyber operations to appropriately respond to cyber attacks, without having a significant impact on operations. The ability to rapidly compute, prioritize and execute network-based courses of action (CoAs) relies on accurate situational awareness and mission-context information. Although diverse solutions exist for automatically collecting and analysing infrastructure data, few deliver automated analysis and implementation of network-based CoAs in the context of the ongoing mission. In addition, such processes can be operator-intensive, and available tools tend to be specific to a set of common data sources and network responses. To address these issues, Defence Research and Development Canada (DRDC) is leading the development of the Automated Computer Network Defence (ARMOUR) technology demonstrator and cyber defence science and technology (S&T) platform. ARMOUR integrates new and existing off-the-shelf capabilities to provide enhanced decision support and to automate many of the tasks currently executed manually by network operators. This paper describes the cyber defence integration framework, situational awareness, and automated mission-oriented decision support that ARMOUR provides.

Acar, A., Celik, Z. B., Aksu, H., Uluagac, A. S., McDaniel, P..  2017.  Achieving Secure and Differentially Private Computations in Multiparty Settings. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :49–59.

Sharing and working on sensitive data in distributed settings, from healthcare to finance, is a major challenge due to security and privacy concerns. Secure multiparty computation (SMC) is a viable remedy for this, allowing distributed parties to make computations while each party learns nothing about the others' data beyond the final result. Although SMC is instrumental in such distributed settings, it does not guarantee that no information about individuals is leaked to adversaries. Differential privacy (DP) can be utilized to address this; however, achieving SMC with DP is not a trivial task, either. In this paper, we propose a novel Secure Multiparty Distributed Differentially Private (SM-DDP) protocol to achieve secure and private computations in a multiparty environment. Specifically, with our protocol, we simultaneously achieve SMC and DP in distributed settings, focusing on linear regression on horizontally distributed data. That is, parties do not see each other's data and, further, cannot infer information about individuals from the final constructed statistical model. Any statistical model function that allows independent calculation of local statistics can be computed through our protocol. The protocol implements homomorphic encryption for SMC and the functional mechanism for DP to achieve the desired security and privacy guarantees. In this work, we first introduce the theoretical foundation for the SM-DDP protocol and then evaluate its efficacy and performance on two different datasets. Our results show that one can achieve individual-level privacy through the proposed protocol with distributed DP, which is independently applied by each party in a distributed fashion. Moreover, our results also show that the SM-DDP protocol incurs minimal computational overhead, is scalable, and provides security and privacy guarantees.
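
As a rough illustration of the distributed-DP idea described in this abstract (not the authors' SM-DDP protocol, which additionally layers homomorphic encryption over the exchange), the sketch below has each party perturb its local sufficient statistics for linear regression with Laplace noise before a coordinator aggregates them; the noise scale and all names are illustrative assumptions.

```python
import numpy as np

def local_private_stats(X, y, epsilon, rng):
    """Each party computes (X^T X, X^T y) locally and perturbs them with
    Laplace noise, so only noisy statistics ever leave the party."""
    XtX = X.T @ X
    Xty = X.T @ y
    # Illustrative noise scale; a real deployment would derive it from the
    # functional-mechanism sensitivity analysis.
    scale = 1.0 / epsilon
    return (XtX + rng.laplace(0.0, scale, XtX.shape),
            Xty + rng.laplace(0.0, scale, Xty.shape))

def aggregate_and_fit(stats, ridge=1e-3):
    """Coordinator sums the noisy statistics and solves the normal equations."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    d = XtX.shape[0]
    return np.linalg.solve(XtX + ridge * np.eye(d), Xty)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
parties = []
for _ in range(3):                      # three horizontally partitioned parties
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    parties.append(local_private_stats(X, y, epsilon=1.0, rng=rng))

print(aggregate_and_fit(parties))       # approximately recovers true_w
```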

Moghaddam, F. F., Wieder, P., Yahyapour, R..  2017.  A policy-based identity management schema for managing accesses in clouds. 2017 8th International Conference on the Network of the Future (NOF). :91–98.

Security challenges are the most important obstacles to the advancement of IT-based on-demand services and cloud computing as an emerging technology. The lack of coincidence between identity management models based on defined policies and the various security levels of different cloud servers is one of the most challenging issues in clouds. In this paper, a policy-based user authentication model is presented to provide reliable and scalable identity management and to map cloud users' access requests to the defined policies of cloud servers. In the proposed schema, several components are provided to define access policies by cloud servers, to apply policies based on a structural and reliable ontology, to manage user identities, and to semantically map access requests by cloud users to the defined policies. Finally, the reliability and efficiency of this policy-based authentication schema have been evaluated by scientific performance, security and competitive analysis. Overall, the results show that this model has met the defined demands of the research to enhance the reliability and efficiency of identity management in cloud computing environments.

Falk, E., Repcek, S., Fiz, B., Hommes, S., State, R., Sasnauskas, R..  2017.  VSOC - A Virtual Security Operating Center. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

Security in virtualised environments is becoming increasingly important for institutions, not only for a firm's own on-site servers and network but also for data and sites that are hosted in the cloud. Today, security is either handled globally by the cloud provider, or each customer needs to invest in its own security infrastructure. This paper proposes a Virtual Security Operation Center (VSOC) that allows users to collect, analyse and visualize security-related data from multiple sources. For instance, a user can forward log data from their firewalls, applications and routers in order to check for anomalies and other suspicious activities. The security analytics provided by the VSOC are comparable to those of commercial security incident and event management (SIEM) solutions, but are deployed as a cloud-based solution with the additional benefit of using big data processing tools to handle large volumes of data. This allows us to detect more complex attacks that cannot be detected with today's signature-based (i.e. rule-based) SIEM solutions.

Wang, B., Song, W., Lou, W., Hou, Y. T..  2017.  Privacy-preserving pattern matching over encrypted genetic data in cloud computing. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. This new paradigm will change the health care model in the future. A doctor will perform DNA sequence matching instead of the regular clinical laboratory tests to diagnose and medicate diseases. Additionally, with the help of affordable personal genomics services such as 23andMe, personalized medicine will be applied to a great population. Cloud computing will be the perfect computing model as the volume of the DNA data and the computation over it are often immense. However, due to its sensitivity, the DNA data should be encrypted before being outsourced to the cloud. In this paper, we start from a practical system model of personalized medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Compared with the existing solutions, our scheme protects the DNA data privacy as well as the search pattern to provide a better privacy guarantee. We have proved that our scheme is secure under a well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real-world problems is practical, and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem; it applies to general privacy-preserving pattern matching problems, which are widely used in the real world.

2018-01-16
Feng, X., Zheng, Z., Cansever, D., Swami, A., Mohapatra, P..  2017.  A signaling game model for moving target defense. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

Incentive-driven advanced attacks have become a major concern to cyber-security. Traditional defense techniques that adopt a passive and static approach by assuming a fixed attack type are insufficient in the face of highly adaptive and stealthy attacks. In particular, a passive defense approach often creates information asymmetry where the attacker knows more about the defender. To this end, moving target defense (MTD) has emerged as a promising way to reverse this information asymmetry. The main idea of MTD is to (continuously) change certain aspects of the system under control to increase the attacker's uncertainty, which in turn increases attack cost/complexity and reduces the chance of a successful exploit in a given amount of time. In this paper, we go one step further and show that MTD can be further improved when combined with information disclosure. In particular, we consider that the defender adopts an MTD strategy to protect a critical resource across a network of nodes, and propose a Bayesian Stackelberg game model with the defender as the leader and the attacker as the follower. After fully characterizing the defender's optimal migration strategies, we show that the defender can design a signaling scheme to exploit the uncertainty created by MTD to further affect the attacker's behavior for its own advantage. We obtain conditions under which signaling is useful, and show that strategic information disclosure can be a promising way to further reverse the information asymmetry and achieve more efficient active defense.
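
To make the leader-follower structure concrete, the toy sketch below shows only the Stackelberg commitment idea (not the paper's Bayesian signaling game): the defender commits to a randomized migration strategy over two configurations, the attacker observes that distribution and best-responds, and the defender grid-searches its commitment. The payoff numbers are purely hypothetical.

```python
import numpy as np

# Payoff matrices are illustrative; rows = defender configs A/B,
# columns = attack targets A/B. Zero-sum is assumed for simplicity.
defender_payoff = np.array([[ 2.0, -3.0],
                            [-4.0,  1.0]])
attacker_payoff = -defender_payoff

def best_commitment(grid=1001):
    """Grid-search the defender's mixed strategy against a best-responding attacker."""
    best_p, best_val = None, -np.inf
    for p in np.linspace(0.0, 1.0, grid):
        mix = np.array([p, 1.0 - p])
        attack = int(np.argmax(mix @ attacker_payoff))   # attacker best response
        val = float(mix @ defender_payoff[:, attack])    # defender's expected payoff
        if val > best_val:
            best_p, best_val = p, val
    return best_p, best_val

p, v = best_commitment()
print(f"defender uses config A with prob {p:.3f}, expected payoff {v:.3f}")
```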

Kansal, V., Dave, M..  2017.  DDoS attack isolation using moving target defense. 2017 International Conference on Computing, Communication and Automation (ICCCA). :511–514.

Among the many threats to cyber services, the Distributed Denial-of-Service (DDoS) attack is the most prevalent today. DDoS involves making an online service unavailable by flooding the bandwidth or resources of a targeted system. It is easier for an insider with legitimate access to the system to circumvent security controls, resulting in an insider attack. To mitigate insider-assisted DDoS attacks, this paper proposes a moving target defense mechanism that isolates insiders from innocent clients by using attack proxies. Further, using the concept of load balancing, an effective algorithm to detect and handle insider attacks is developed with the aim of maximizing attack isolation while minimizing the total number of proxies used.
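
As a generic illustration of proxy-based isolation (a simplified stand-in, not the paper's specific algorithm), the sketch below repeatedly shuffles clients across attack proxies and intersects the membership of attacked proxies; innocent clients are quickly excluded, leaving the insiders who keep betraying their proxy's address. The attack-observation callback and all parameters are assumptions.

```python
import random

def shuffle_round(clients, num_proxies, rng):
    """Randomly assign each client to one of the available attack proxies."""
    return {c: rng.randrange(num_proxies) for c in clients}

def isolate_insiders(clients, is_attacked, num_proxies=4, rounds=10, rng=None):
    """Narrow the suspect set to clients that were behind an attacked proxy
    in every round. `is_attacked(members)` models observing a flood."""
    rng = rng or random.Random(0)
    suspects = set(clients)
    for _ in range(rounds):
        assignment = shuffle_round(clients, num_proxies, rng)
        attacked = set()
        for proxy in range(num_proxies):
            members = [c for c, p in assignment.items() if p == proxy]
            if is_attacked(members):
                attacked.update(members)
        suspects &= attacked          # insiders keep exposing their proxy
    return suspects

# Toy scenario: client 7 is the insider leaking its proxy's address.
clients = list(range(20))
insiders = {7}
print(isolate_insiders(clients, lambda members: any(c in insiders for c in members)))
```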

Rukavitsyn, A., Borisenko, K., Shorov, A..  2017.  Self-learning method for DDoS detection model in cloud computing. 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :544–547.

Cloud computing has many significant benefits, such as the provision of computing resources and virtual networks on demand. However, there is the problem of assuring the security of these networks against Distributed Denial-of-Service (DDoS) attacks. Over the past few decades, the development of protection methods based on data mining has attracted many researchers because of its effectiveness and practical significance. Most commonly, these detection methods use pre-learned models or models based on rules. Because of this, the proposed DDoS detection methods often fail in dynamically changing cloud virtual networks. In this paper, we propose a self-learning method that allows a detection model to adapt to network changes. This minimizes false detections and reduces the possibility of marking legitimate users as malicious and vice versa. The developed method consists of two steps: collecting data about the network traffic with the NetFlow protocol and relearning the detection model with the new data. During data collection, we separate the traffic into legitimate and malicious. The separated traffic is labeled and sent to the relearning pool. The detection model is relearned using data from the pool of current traffic. The experimental results show that the proposed method can increase the efficiency of DDoS detection systems that use data mining.
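
A minimal sketch of such a relearning loop is shown below, using scikit-learn and hypothetical NetFlow-style features (packets/s, bytes/s, distinct sources, SYN ratio); the labels are assumed to come from the traffic-separation step the abstract describes, and the classifier choice is an assumption rather than the authors' model.

```python
from collections import deque
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Labelled flows are pushed into a bounded pool; the detector is
# periodically refit on that pool so it tracks network changes.
POOL_SIZE = 10_000
pool_X, pool_y = deque(maxlen=POOL_SIZE), deque(maxlen=POOL_SIZE)
model = RandomForestClassifier(n_estimators=100, random_state=0)

def ingest_labelled_flows(X_batch, y_batch):
    """Collected NetFlow records, already separated into legit (0) / malicious (1)."""
    pool_X.extend(X_batch)
    pool_y.extend(y_batch)

def relearn():
    """Refit the detection model on the current traffic pool."""
    model.fit(np.asarray(pool_X), np.asarray(pool_y))

def detect(X_new):
    return model.predict(np.asarray(X_new))

# Simulated usage: one collection/relearning cycle on synthetic flows.
rng = np.random.default_rng(0)
legit = rng.normal([50, 4e4, 20, 0.2], [10, 5e3, 5, 0.05], size=(500, 4))
ddos  = rng.normal([5e3, 6e6, 900, 0.9], [500, 5e5, 100, 0.05], size=(500, 4))
ingest_labelled_flows(np.vstack([legit, ddos]), np.array([0]*500 + [1]*500))
relearn()
print(detect(ddos[:3]))    # expected: [1 1 1]
```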

2018-01-10
Zheng, Y., Shi, Y., Guo, K., Li, W., Zhu, L..  2017.  Enhanced word embedding with multiple prototypes. 2017 4th International Conference on Industrial Economics System and Industrial Security Engineering (IEIS). :1–5.

Word embedding is one of the basic word representation methods in natural language processing, which maps a word into a dense real-valued vector space based on the hypothesis that words with similar contexts have similar meanings. Models like NNLM, C&W, CBOW and Skip-gram have been designed for word embedding learning and are widely used in many NLP tasks. However, these models assume that a word has only one meaning, which is contrary to real language use. In this paper we propose a new word unit with multiple meanings and an algorithm to distinguish them by their context. This new unit can be embedded in most language models and yields a series of efficient representations by learning variable embeddings. We evaluate a new model, MCBOW, that integrates CBOW with our word unit on a word similarity evaluation task and some downstream experiments; the results indicate that our new model can learn different meanings of a word and achieves better results on some other tasks.
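
A toy sketch of the sense-selection step for such a multi-prototype word unit: pick the prototype vector closest (by cosine similarity) to the averaged context vector. The random vectors stand in for trained MCBOW embeddings and are purely illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_sense(sense_vectors, context_vectors):
    """Pick the prototype of a multi-sense word whose vector is most similar
    to the average of the surrounding context word vectors."""
    context = np.mean(context_vectors, axis=0)
    scores = [cosine(s, context) for s in sense_vectors]
    return int(np.argmax(scores)), scores

# Toy example with random 50-d vectors standing in for trained embeddings.
rng = np.random.default_rng(1)
dim = 50
bank_senses = rng.normal(size=(2, dim))   # e.g. "bank": finance vs. river sense
context = rng.normal(size=(4, dim))       # vectors of nearby context words
sense_id, scores = select_sense(bank_senses, context)
print(sense_id, [round(s, 3) for s in scores])
```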

2017-12-28
Vizarreta, P., Heegaard, P., Helvik, B., Kellerer, W., Machuca, C. M..  2017.  Characterization of failure dynamics in SDN controllers. 2017 9th International Workshop on Resilient Networks Design and Modeling (RNDM). :1–7.

With Software Defined Networking (SDN), the control plane logic of forwarding devices, switches and routers, is extracted and moved to an entity called the SDN controller, which acts as a broker between the network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and react to events coming from the physical network. Despite the huge impact that a controller has on the network performance as a whole, a comprehensive study of its failure dynamics is still missing in the state-of-the-art literature. The goal of this paper is to analyse, model and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study we show how the proposed model can be used to estimate the controller's steady-state availability, quantify the impact of different failure modes on controller outages, as well as the effects of software ageing, and the impact of software reliability growth on the transient behaviour.
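
The paper's model is a full Stochastic Activity Network; as a drastically simplified illustration of steady-state availability, the sketch below uses a two-state (up/down) continuous-time Markov chain, where A = MTTF / (MTTF + MTTR), and also solves the generator's stationary distribution numerically. The failure and repair rates are hypothetical.

```python
import numpy as np

# Two-state up/down CTMC as a simplified stand-in for the SAN model.
mttf_hours = 600.0          # mean time to controller failure (illustrative)
mttr_hours = 0.5            # mean time to repair/failover (illustrative)
lam, mu = 1.0 / mttf_hours, 1.0 / mttr_hours

# Generator matrix Q over the states [up, down].
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"closed-form availability: {mttf_hours / (mttf_hours + mttr_hours):.6f}")
print(f"numerical availability  : {pi[0]:.6f}")
```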

Kabiri, M. N., Wannous, M..  2017.  An Experimental Evaluation of a Cloud-Based Virtual Computer Laboratory Using Openstack. 2017 6th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI). :667–672.

In previous work, we proposed a solution to facilitate access to computer science related courses and learning materials using cloud computing and mobile technologies. The solution was positively evaluated by the participants, but most of them indicated that it lacks support for laboratory activities. It is well known that many computer science subjects (e.g. Computer Networks, Information Security, Systems Administration, etc.) require a suitable and flexible environment where students can access a set of computers and network devices to successfully complete their hands-on activities. To meet these criteria, we created a cloud-based virtual laboratory based on the OpenStack cloud platform to facilitate access to virtual machines both locally and remotely. Cloud-based virtual labs bring many advantages, such as increased manageability, scalability, high availability and flexibility, to name a few. This arrangement has been tested in a case-study exercise with a group of students as part of the Computer Networks and System Administration courses at Kabul Polytechnic University in Afghanistan. To measure success, we introduced a level test to be completed by participants before and after the experiment. As a result, the learners achieved on average 17.1% higher scores in the post-level test after completing the practical exercises. Lastly, we distributed a questionnaire after the experiment, and students provided positive feedback on the effectiveness and usefulness of the proposed solution.

Noureddine, M. A., Marturano, A., Keefe, K., Bashir, M., Sanders, W. H..  2017.  Accounting for the Human User in Predictive Security Models. 2017 IEEE 22nd Pacific Rim International Symposium on Dependable Computing (PRDC). :329–338.

Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.

Zheng, J., Okamura, H., Dohi, T..  2016.  Mean Time to Security Failure of VM-Based Intrusion Tolerant Systems. 2016 IEEE 36th International Conference on Distributed Computing Systems Workshops (ICDCSW). :128–133.

Computer systems face the threat of deliberate security intrusions due to malicious attacks that exploit security holes or vulnerabilities. In practice, these security holes or vulnerabilities still remain in the system and applications even if developers carefully execute system testing. Thus it is necessary and important to develop mechanisms to prevent and/or tolerate security intrusions. As a result, computer systems are often evaluated with confidentiality, integrity and availability (CIA) criteria from the viewpoint of security, and security is treated as a QoS (Quality of Service) attribute on par with other QoS attributes such as capacity and performance. In this paper, we present a method for quantifying a security attribute called the mean time to security failure (MTTSF) of a VM-based intrusion tolerant system based on queueing theory.
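
A generic way to compute such a mean time to security failure is as the mean time to absorption of a small continuous-time Markov chain; the sketch below uses a toy state space (good, vulnerable, compromised, with an absorbing failure state) and hypothetical rates, not the paper's VM-based model.

```python
import numpy as np

# Transient states: 0 = good, 1 = vulnerable, 2 = compromised.
# The absorbing state is "security failure". Rates (per hour) are illustrative.
attack_discovery = 0.02    # good -> vulnerable
exploit          = 0.05    # vulnerable -> compromised
rejuvenation     = 0.10    # vulnerable/compromised -> good (VM restore)
breach           = 0.04    # compromised -> security failure (absorbing)

# Generator restricted to the transient states (Q_TT); the missing outflow
# from state 2 (rate `breach`) leads to the absorbing failure state.
Q_TT = np.array([
    [-attack_discovery,          attack_discovery,                      0.0],
    [      rejuvenation, -(exploit + rejuvenation),                 exploit],
    [      rejuvenation,                       0.0, -(rejuvenation + breach)],
])

# Expected time to absorption from each transient state: solve Q_TT @ t = -1.
t = np.linalg.solve(Q_TT, -np.ones(3))
print(f"MTTSF from the good state: {t[0]:.1f} hours")
```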

Vu, Q. H., Ruta, D., Cen, L..  2017.  An ensemble model with hierarchical decomposition and aggregation for highly scalable and robust classification. 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). :149–152.

This paper introduces an ensemble model that solves the binary classification problem by incorporating basic Logistic Regression with two recent advanced paradigms: extreme gradient boosted decision trees (xgboost) and deep learning. To obtain the best result when integrating sub-models, we introduce a solution to split and select sets of features for the sub-model training. In addition to the ensemble model, we propose a flexible, robust and highly scalable new scheme for building a composite classifier that tries to simultaneously implement multiple layers of model decomposition and output aggregation to maximally reduce both the bias and variance (spread) components of classification errors. We demonstrate the power of our ensemble model on the problem of predicting the outcome of Hearthstone, a turn-based computer game, based on game state information. The excellent predictive performance of our model was acknowledged by second place in the final ranking among 188 competing teams.
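
A minimal stacking sketch in the spirit of this ensemble is shown below, combining logistic regression, gradient-boosted trees and a small neural network with a logistic-regression meta-learner; scikit-learn's GradientBoostingClassifier stands in for xgboost to keep the example self-contained, and the data are synthetic, so this is an illustration rather than the authors' pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary task standing in for the Hearthstone game-state data.
X, y = make_classification(n_samples=4000, n_features=30, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners mirroring the abstract's trio; a logistic regression
# aggregates their out-of-fold predictions.
ensemble = StackingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("gbdt",   GradientBoostingClassifier(random_state=0)),
        ("mlp",    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                 random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.3f}")
```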

2017-12-20
Fang, Y., Dickerson, S. J..  2017.  Achieving Swarm Intelligence with Spiking Neural Oscillators. 2017 IEEE International Conference on Rebooting Computing (ICRC). :1–4.

Mimicking the collaborative behavior of biological swarms, such as bird flocks and ant colonies, Swarm Intelligence algorithms provide efficient solutions for various optimization problems. On the other hand, a computational model of the human brain, spiking neural networks, has been showing great promise in recognition, inference, and learning, due to the recent emergence of neuromorphic hardware for high-efficiency and low-power computing. Bridging these two distinct research fields, we propose a novel computing paradigm that implements swarm intelligence with a population of coupled spiking neural oscillators in the basic leaky integrate-and-fire (LIF) model. Our model behaves as a meta-heuristic search conducted by multiple collaborative agents. In this design, the oscillating neurons serve as agents in the swarm, search for solutions in frequency coding and communicate with each other through spikes. The firing rate of each agent adapts to other agents with better solutions, and the optimal solution is rendered when swarm synchronization is reached. We apply the proposed method to parameter optimization on several test objective functions and demonstrate its effectiveness and efficiency. Our new computing paradigm expands the computational power of coupled spiking neurons in the field of solving optimization problems and brings opportunities for the connection between individual intelligence and swarm intelligence.
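
The building block being coupled here is the LIF neuron; a minimal single-neuron simulation under constant input current is sketched below. The parameters are hypothetical, and the coupling/firing-rate adaptation that drives the swarm search is omitted.

```python
def lif_spike_times(I, t_max=1.0, dt=1e-4, tau=0.02, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0, R=1.0):
    """Euler-integrate dv/dt = (-(v - v_rest) + R*I) / tau and record spikes.
    A single LIF unit; the paper couples many such oscillators so that their
    firing rates adapt toward agents holding better solutions."""
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + R * I) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Firing rate grows with the (frequency-coded) input drive.
for current in (1.2, 2.0, 4.0):
    print(f"I = {current:.1f} -> {len(lif_spike_times(current))} spikes/s")
```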

Williams, N., Li, S..  2017.  Simulating Human Detection of Phishing Websites: An Investigation into the Applicability of the ACT-R Cognitive Behaviour Architecture Model. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

The prevalence and effectiveness of phishing attacks, despite the presence of a vast array of technical defences, are due largely to the fact that attackers are ruthlessly targeting what is often referred to as the weakest link in the system - the human. This paper reports the results of an investigation into how end users behave when faced with phishing websites and how this behaviour exposes them to attack. Specifically, the paper presents a proof of concept computer model for simulating human behaviour with respect to phishing website detection based on the ACT-R cognitive architecture, and draws conclusions as to the applicability of this architecture to human behaviour modelling within a phishing detection scenario. Following the development of a high-level conceptual model of the phishing website detection process, the study draws upon ACT-R to model and simulate the cognitive processes involved in judging the validity of a representative webpage based primarily around the characteristics of the HTTPS padlock security indicator. The study concludes that despite the low-level nature of the architecture and its very basic user interface support, ACT-R possesses strong capabilities which map well onto the phishing use case, and that further work to more fully represent the range of human security knowledge and behaviours in an ACT-R model could lead to improved insights into how best to combine technical and human defences to reduce the risk to end users from phishing attacks.

Comon, H., Koutsos, A..  2017.  Formal Computational Unlinkability Proofs of RFID Protocols. 2017 IEEE 30th Computer Security Foundations Symposium (CSF). :100–114.

We set up a framework for the formal proofs of RFID protocols in the computational model. We rely on the so-called computationally complete symbolic attacker model. Our contributions are: 1) to design (and prove sound) axioms reflecting the properties of hash functions (Collision-Resistance, PRF); 2) to formalize computational unlinkability in the model; 3) to illustrate the method, providing the first formal proofs of unlinkability of RFID protocols in the computational model.

Nguyen, C. T., Hoang, T. T., Phan, V. X..  2017.  A simple method for anonymous tag cardinality estimation in RFID systems with false detection. 2017 4th NAFOSTED Conference on Information and Computer Science. :101–104.

This work investigates the anonymous tag cardinality estimation problem in radio frequency identification systems with a frame slotted aloha-based protocol. Each tag, instead of sending its identity upon receiving the reader's request, randomly responds with only one bit in one of the time slots of the frame due to privacy and security. As a result, each slot with no response is observed as empty, while the others are non-empty. This information can be used for tag cardinality estimation. Nevertheless, under the effects of fading and noise, time slots with tag responses might be observed as empty, while those with no response might be detected as non-empty, which is known as the false detection phenomenon. The performance of conventional estimation methods is thus degraded because of inaccurate observations. In order to cope with this issue, we propose a new estimation algorithm using the expectation-maximization method. Both the tag cardinality and the probability of false detection are iteratively estimated to maximize a likelihood function. Computer simulations are provided to show the merit of the proposed method.
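
The paper iterates an EM algorithm to estimate the tag count and the false-detection probability jointly; the simpler sketch below shows the underlying likelihood by grid-searching the maximum-likelihood tag count from the number of observed-empty slots, with the false-detection probability assumed known. The symmetric flip model and all numbers are assumptions for illustration.

```python
import math

def observed_empty_prob(n, L, p_false):
    """Probability that a slot is *observed* empty when n tags each pick one
    of L slots uniformly and the slot state is flipped with prob p_false."""
    truly_empty = (1.0 - 1.0 / L) ** n
    return truly_empty * (1.0 - p_false) + (1.0 - truly_empty) * p_false

def estimate_tags(empty_slots, L, p_false, n_max=2000):
    """Grid maximum-likelihood estimate of the tag cardinality n from the
    number of slots observed empty in one frame (binomial likelihood)."""
    best_n, best_ll = 0, -math.inf
    for n in range(n_max + 1):
        p = observed_empty_prob(n, L, p_false)
        ll = empty_slots * math.log(p) + (L - empty_slots) * math.log(1.0 - p)
        if ll > best_ll:
            best_n, best_ll = n, ll
    return best_n

# Example: frame of 256 slots, 90 observed empty, 5% false-detection rate.
print(estimate_tags(empty_slots=90, L=256, p_false=0.05))
```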

Le, T. A., Baydin, A. G., Zinkov, R., Wood, F..  2017.  Using synthetic data to train neural networks is model-based reasoning. 2017 International Joint Conference on Neural Networks (IJCNN). :3514–3521.

We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning. In particular, training a neural network using synthetic data can be viewed as learning a proposal distribution generator for approximate inference in the synthetic-data generative model. We demonstrate this connection in a recognition task where we develop a novel Captcha-breaking architecture and train it using synthetic data, demonstrating both state-of-the-art performance and a way of computing task-specific posterior uncertainty. Using a neural network trained this way, we also demonstrate successful breaking of real-world Captchas currently used by Facebook and Wikipedia. Reasoning from these empirical results and drawing connections with Bayesian modeling, we discuss the robustness of synthetic data results and suggest important considerations for ensuring good neural network generalization when training with synthetic data.

Alqahtani, S. S., Eghan, E. E., Rilling, J..  2017.  Recovering Semantic Traceability Links between APIs and Security Vulnerabilities: An Ontological Modeling Approach. 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST). :80–91.

Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities being now shared. For example, vulnerabilities found in APIs no longer affect only individual projects but instead might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task since many of the existing resources required for such analysis still rely on proprietary knowledge representation. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services, to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study, to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.
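
A toy illustration of ontology-based traceability (not the authors' ontology or vocabulary) is sketched below with rdflib: a vulnerability is attached to a library, and a SPARQL 1.1 property path traces which components are transitively affected through dependency links. All namespaces, triples and identifiers are invented for the example.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/sec#")
g = Graph()

# Illustrative knowledge graph: a dependency chain plus a vulnerability
# attached to the bottom-level library.
g.add((EX.ProjectA, EX.dependsOn, EX.LibParser))
g.add((EX.ProjectB, EX.dependsOn, EX.LibHttp))
g.add((EX.LibHttp,  EX.dependsOn, EX.LibParser))
g.add((EX.LibParser, EX.hasVulnerability, EX.CVE_2017_0001))

# The + property path traces impact across transitive dependencies,
# mimicking cross-project vulnerability traceability.
query = """
PREFIX sec: <http://example.org/sec#>
SELECT DISTINCT ?component ?cve WHERE {
  ?component sec:dependsOn+ ?lib .
  ?lib sec:hasVulnerability ?cve .
}"""
for component, cve in g.query(query):
    print(component, "is affected by", cve)
```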

2017-12-12
Zhang, M., Chen, Q., Zhang, Y., Liu, X., Dong, S..  2017.  Requirement analysis and descriptive specification for exploratory evaluation of information system security protection capability. 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :1874–1878.

Exploratory evaluation is an effective way to analyze and improve the security of an information system. An information system structure model for security protection capability is set up in view of the exploratory evaluation requirements of security protection capability, and the requirements of agility, traceability and interpretation for exploratory evaluation are obtained by analyzing the relationship between the information system, protective equipment and protection policy. To address the problem of describing the exploratory evaluation of security protection capability, the exploratory evaluation problem and the exploratory evaluation process are described based on Granular Computing theory, and a general mathematical description is established. Analysis shows that the established standardized description meets the exploratory evaluation requirements and can provide an analysis basis and description specification for the exploratory evaluation of information system security protection capability.

Suh, Y. K., Ma, J..  2017.  SuperMan: A Novel System for Storing and Retrieving Scientific-Simulation Provenance for Efficient Job Executions on Computing Clusters. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :283–288.

Compute-intensive simulations typically place substantial workloads on an online simulation platform backed by limited computing clusters and storage resources. Some (or most) of the simulations initiated by users may involve input parameters/files that have already been provided by other (or the same) users in the past. Unfortunately, these duplicate simulations may degrade the performance of the platform through drastic consumption of the limited resources shared by a number of users on the platform. To minimize or avoid conducting repeated simulations, we present a novel system, called SUPERMAN (SimUlation ProvEnance Recycling MANager), that can record simulation provenances and recycle the results of past simulations. This system presents a great opportunity to not only reutilize existing results but also perform various analytics helpful for those who are not familiar with the platform. The system also offers interoperability with other systems by collecting the provenances in a standardized format. In our simulated experiments we found that over half of past computing jobs could be answered by our system without actual executions.
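
The core recycling idea can be illustrated by keying a result store on a fingerprint of the input parameters and input-file contents, as in the minimal sketch below; the function names and fields are hypothetical and are not the SUPERMAN API.

```python
import hashlib
import json
import time

_result_cache = {}   # provenance store: input fingerprint -> recorded result

def fingerprint(params, input_files=()):
    """Canonical hash over input parameters and input-file contents."""
    h = hashlib.sha256(json.dumps(params, sort_keys=True).encode())
    for path in sorted(input_files):
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def run_or_recycle(params, simulate, input_files=()):
    """Return a recycled result if an identical job ran before; otherwise
    execute the simulation and record its provenance."""
    key = fingerprint(params, input_files)
    if key in _result_cache:
        return _result_cache[key], True            # recycled, no execution
    result = simulate(params)
    _result_cache[key] = result
    return result, False

def expensive_simulation(params):
    time.sleep(0.1)                                 # stand-in for cluster work
    return sum(v * v for v in params.values())

print(run_or_recycle({"dt": 0.01, "steps": 100}, expensive_simulation))
print(run_or_recycle({"dt": 0.01, "steps": 100}, expensive_simulation))  # recycled
```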

Davis, D. B., Featherston, J., Fukuda, M., Asuncion, H. U..  2017.  Data Provenance for Multi-Agent Models. 2017 IEEE 13th International Conference on e-Science (e-Science). :39–48.

Multi-agent simulations are useful for exploring collective patterns of individual behavior in social, biological, economic, network, and physical systems. However, there is no provenance support for multi-agent models (MAMs) in a distributed setting. To this end, we introduce ProvMASS, a novel approach to capture provenance of MAMs in a distributed memory by combining inter-process identification, lightweight coordination of in-memory provenance storage, and adaptive provenance capture. ProvMASS is built on top of the Multi-Agent Spatial Simulation (MASS) library, a framework that combines multi-agent systems with large-scale fine-grained agent-based models, or MAMs. Unlike other environments supporting MAMs, MASS parallelizes simulations with distributed memory, where agents and spatial data are shared application resources. We evaluate our approach with provenance queries to support three use cases and performance measures. Initial results indicate that our approach can support various provenance queries for MAMs at reasonable performance overhead.

Santos, E. E., Santos, E., Korah, J., Thompson, J. E., Murugappan, V., Subramanian, S., Zhao, Yan.  2017.  Modeling insider threat types in cyber organizations. 2017 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.

Insider threats can cause immense damage to organizations of different types, including government, corporate, and non-profit organizations. Being an insider, however, does not necessarily equate to being a threat. Effectively identifying valid threats, and assessing the type of threat an insider presents, remain difficult challenges. In this work, we propose a novel breakdown of eight insider threat types, identified by using three insider traits: predictability, susceptibility, and awareness. In addition to presenting this framework for insider threat types, we implement a computational model to demonstrate the viability of our framework with synthetic scenarios devised after reviewing real world insider threat case studies. The results yield useful insights into how further investigation might proceed to reveal how best to gauge predictability, susceptibility, and awareness, and precisely how they relate to the eight insider types.
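
Since the eight types arise from three traits, a quick enumeration makes the framework's structure concrete; treating each trait as simply high or low (2^3 = 8 combinations) is an assumption made here for illustration, and the labels are not the paper's type names.

```python
from itertools import product

TRAITS = ("predictability", "susceptibility", "awareness")

# Eight combinations of the three insider traits; high/low levels are an
# illustrative assumption, not the paper's taxonomy labels.
for i, levels in enumerate(product(("low", "high"), repeat=3), start=1):
    profile = ", ".join(f"{trait}={level}" for trait, level in zip(TRAITS, levels))
    print(f"insider type {i}: {profile}")
```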

Hariri, S., Tunc, C., Badr, Y..  2017.  Resilient Dynamic Data Driven Application Systems as a Service (rDaaS): A Design Overview. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :352–356.

To overcome the current cybersecurity challenges of protecting our cyberspace and applications, we present an innovative cloud-based architecture to offer resilient Dynamic Data Driven Application Systems (DDDAS) as a cloud service, which we refer to as resilient DDDAS as a Service (rDaaS). This architecture integrates the Service Oriented Architecture (SOA) and DDDAS paradigms to offer the next generation of resilient and agile DDDAS-based cyber applications, particularly convenient for critical applications such as Battle and Crisis Management. Using the cloud infrastructure to offer resilient DDDAS routines and applications, large-scale DDDAS applications can be developed by users from anywhere and on any device (mobile or stationary) with Internet connectivity. The rDaaS provides transformative capabilities to achieve superior situation awareness (i.e., assessment, visualization, and understanding), mission planning and execution, and resilient operations.