Biblio

2019-02-13
Orosz, P., Nagy, B., Varga, P., Gusat, M..  2018.  Low False Alarm Ratio DDoS Detection for ms-scale Threat Mitigation. 2018 14th International Conference on Network and Service Management (CNSM). :212–218.

The dynamically changing landscape of DDoS threats increases the demand for advanced security solutions. The rise of massive IoT botnets enables attackers to mount high-intensity, short-duration "volatile ephemeral" attack waves in quick succession. Therefore the standard human-in-the-loop security center paradigm is becoming obsolete. To battle the new breed of volatile DDoS threats, the intrusion detection system (IDS) needs to improve markedly, at least in reaction time and in automated response (mitigation). Designing such an IDS is a daunting task, as network operators are traditionally reluctant to act - at any speed - on potentially false alarms. The primary challenge of a low-reaction-time detection system is maintaining a consistently low false alarm rate. This paper aims to show how a practical FPGA-based DDoS detection and mitigation system can successfully address this. Besides verifying the model and algorithms with real traffic "in the wild", we validate the low false alarm ratio. Accordingly, we describe a methodology for determining the false alarm ratio for each involved threat type, categorize the causes of false detection, and provide our measurement results. As shown here, our methods can effectively mitigate volatile ephemeral DDoS attacks, and are accordingly usable in both human-out-of-the-loop and human-on-the-loop next-generation security solutions.
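
As a minimal sketch of the per-threat-type false alarm ratio measurement described above (not the paper's FPGA pipeline; the event labels and threat-type names are hypothetical), the following Python snippet computes FAR = FP / (FP + TN) over labeled detection events:

```python
# Minimal sketch: per-threat-type false alarm ratio from labeled detection events.
from collections import defaultdict

def false_alarm_ratio_by_type(events):
    """events: iterable of (threat_type, alarm_raised, attack_present) tuples."""
    counts = defaultdict(lambda: {"fp": 0, "tn": 0})
    for threat_type, alarm_raised, attack_present in events:
        if not attack_present:                      # only benign traffic counts here
            counts[threat_type]["fp" if alarm_raised else "tn"] += 1
    return {
        t: c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        for t, c in counts.items()
    }

if __name__ == "__main__":
    sample = [
        ("syn_flood", True, False),   # false positive
        ("syn_flood", False, False),  # true negative
        ("udp_flood", False, False),
        ("udp_flood", False, False),
    ]
    print(false_alarm_ratio_by_type(sample))  # {'syn_flood': 0.5, 'udp_flood': 0.0}
```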

Sykosch, Arnold, Ohm, Marc, Meier, Michael.  2018.  Hunting Observable Objects for Indication of Compromise. Proceedings of the 13th International Conference on Availability, Reliability and Security. :59:1–59:8.
Shared threat intelligence is often imperfect. In particular, so-called Indicators of Compromise might not be well constructed, either because the threat appeared only recently and the available recordings do not allow the construction of high-quality indicators, or because the threat is only observed by sharing partners less capable of modeling it. However, intrusion detection based on imperfect intelligence yields low-quality results. Within this paper we illustrate how one can overcome these shortcomings in data quality and achieve solid intrusion detection. This is done by assigning individual weights to the observables listed in a STIX™ report to express their significance for detection. For evaluation, an automated toolchain was developed to mimic the threat intelligence sharing ecosystem, from initial detection over reporting and sharing to determining compromise from STIX™-formatted data. Multiple strategies to detect and attribute a specific threat are compared using this data, leading up to an approach yielding an F1-score of 0.79.
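
To illustrate the weighting idea, here is a minimal Python sketch, not the authors' toolchain: each observable from a (hypothetical) report carries a weight expressing its significance, and a host is flagged when the weighted fraction of matched observables crosses a threshold (the indicator values and the 0.5 cut-off are invented for the example):

```python
# Illustrative sketch of weighted observable matching; report contents are hypothetical.
def weighted_compromise_score(report_observables, host_observations):
    """report_observables: dict mapping observable value -> weight.
    host_observations: set of observable values seen on the host."""
    total = sum(report_observables.values())
    matched = sum(w for obs, w in report_observables.items() if obs in host_observations)
    return matched / total if total else 0.0

indicators = {"bad-domain.example": 0.9, "a1b2c3d4.exe": 0.6, "/tmp/dropper": 0.2}
seen = {"bad-domain.example", "/tmp/dropper"}
score = weighted_compromise_score(indicators, seen)
print(f"score={score:.2f}, compromised={score >= 0.5}")
```
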
2019-02-08
Colnago, Jessica, Devlin, Summer, Oates, Maggie, Swoopes, Chelse, Bauer, Lujo, Cranor, Lorrie, Christin, Nicolas.  2018.  "It's Not Actually That Horrible": Exploring Adoption of Two-Factor Authentication at a University. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. :456:1–456:11.

Despite the additional protection it affords, two-factor authentication (2FA) adoption reportedly remains low. To better understand 2FA adoption and its barriers, we observed the deployment of a 2FA system at Carnegie Mellon University (CMU). We explore user behaviors and opinions around adoption in the period surrounding a mandatory adoption deadline. Our results show that (a) 2FA adopters found it annoying, but fairly easy to use, and believed it made their accounts more secure; (b) experience with CMU Duo often led to positive perceptions, sometimes translating into 2FA adoption for other accounts; and, (c) the differences between users required to adopt 2FA and those who adopted voluntarily are smaller than expected. We also explore the relationship between different usage patterns and perceived usability, and identify user misconceptions, insecure practices, and design issues. We conclude with recommendations for large-scale 2FA deployments to maximize adoption, focusing on implementation design, use of adoption mandates, and strategic messaging.

Olegario, Cielito C., Coronel, Andrei D., Medina, Ruji P., Gerardo, Bobby D..  2018.  A Hybrid Approach Towards Improved Artificial Neural Network Training for Short-Term Load Forecasting. Proceedings of the 2018 International Conference on Data Science and Information Technology. :53–58.

The power of artificial neural networks to form predictive models for phenomena that exhibit non-linear relationships is a given fact. Despite this advantage, artificial neural networks are known to suffer drawbacks such as long training times and computational intensity. The researchers propose a two-tiered approach to enhance the learning performance of artificial neural networks for time-series phenomena where the data exhibits predictable changes that occur every calendar year. This paper focuses on the initial results of the first phase of the proposed algorithm, which incorporates clustering and classification prior to application of the backpropagation algorithm. The 2016–2017 zonal load data of France is used as the data set. K-means is chosen as the clustering algorithm, and a comparison is made between Naïve Bayes and k-nearest neighbors to determine the better classifier for this data set. The initial results show that electrical load behavior is not necessarily reflective of calendar clustering, even without using the min-max temperature recorded during the inclusive months. Simulating the day-type classification process using one cluster, the initial results show that k-nearest neighbors outperforms the Naïve Bayes classifier for this data set and that the best feature for classification into day type is the daily min-max load. This classified load data is expected to reduce training time and improve the overall performance of short-term load demand predictive models in a future paper.
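
As a rough sketch of the two-tiered idea (cluster, then classify day type), the snippet below uses scikit-learn with synthetic stand-in data rather than the French zonal load set, and default hyperparameters rather than the paper's settings:

```python
# Sketch of cluster-then-classify on synthetic daily min/max load values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
daily_minmax_load = rng.normal(loc=[40, 70], scale=5, size=(365, 2))  # [min, max] per day (toy data)
day_type = (np.arange(365) % 7 < 5).astype(int)                       # weekday vs weekend (toy label)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(daily_minmax_load)
X = np.column_stack([daily_minmax_load, clusters])                    # cluster id as an extra feature

X_tr, X_te, y_tr, y_te = train_test_split(X, day_type, test_size=0.3, random_state=0)
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)), ("NaiveBayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```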

2019-01-31
Seetanadi, Gautham Nayak, Oliveira, Luis, Almeida, Luis, Arzén, Karl-Erik, Maggio, Martina.  2018.  Game-Theoretic Network Bandwidth Distribution for Self-Adaptive Cameras. SIGBED Rev.. 15:31–36.

Devices sharing a network compete for bandwidth and are able to transmit only a limited amount of data. This is, for example, the case with a network of cameras that should record and transmit video streams to a monitor node for video surveillance. Adaptive cameras can reduce the quality of their video, thereby increasing the frame compression, to limit network congestion. In this paper, we exploit our experience with computing capacity allocation to design and implement a network bandwidth allocation strategy based on game theory that accommodates multiple adaptive streams with convergence guarantees. We conduct some experiments with our implementation and discuss the results, together with some conclusions and future challenges.
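
To make the adaptation step concrete, here is a toy illustration only, not the paper's game-theoretic mechanism: the available bandwidth is split proportionally to each camera's demand, and each adaptive camera lowers its encoding quality until its bitrate fits the allocated share (all numbers are hypothetical):

```python
# Toy proportional-share allocation with quality adaptation (not the paper's algorithm).
def proportional_allocation(total_bw, demands):
    total = sum(demands)
    return [total_bw * d / total for d in demands]

def adapt_quality(full_quality_bitrate, allocated_bw):
    """Quality factor in (0, 1] such that the stream's bitrate fits its allocation."""
    return min(1.0, allocated_bw / full_quality_bitrate)

demands = [60.0, 50.0, 30.0]                      # Mbit/s wanted at full quality (hypothetical)
alloc = proportional_allocation(100.0, demands)   # only 100 Mbit/s available
for cam, (d, a) in enumerate(zip(demands, alloc)):
    print(f"camera {cam}: allocated {a:.1f} Mbit/s, quality factor {adapt_quality(d, a):.2f}")
```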

Ouyang, Deqiang, Shao, Jie, Zhang, Yonghui, Yang, Yang, Shen, Heng Tao.  2018.  Video-Based Person Re-Identification via Self-Paced Learning and Deep Reinforcement Learning Framework. Proceedings of the 26th ACM International Conference on Multimedia. :1562–1570.

Person re-identification is an important task in video surveillance, focusing on finding the same person across different cameras. However, most existing methods of video-based person re-identification still have some limitations (e.g., the lack of an effective deep learning framework, limited model robustness, and the identical treatment of all video frames) that prevent them from achieving better recognition performance. In this paper, we propose a novel self-paced learning algorithm for video-based person re-identification, which gradually learns from simple to complex samples to obtain a mature and stable model. Self-paced learning is employed to enhance video-based person re-identification based on a deep neural network, so that the deep neural network and self-paced learning are unified into one framework. Then, based on the trained self-paced learning model, we propose to employ deep reinforcement learning to discard misleading and confounding frames and find the most representative frames from video pairs. With the advantage of deep reinforcement learning, our method can learn strategies to select the optimal frame groups. Experiments show that the proposed framework outperforms existing methods on the iLIDS-VID, PRID-2011 and MARS datasets.
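
The self-paced schedule itself is simple to show in isolation. The sketch below (hypothetical losses, no re-identification network or reinforcement learning) uses the standard hard self-paced weighting: samples whose loss is below a threshold lambda are admitted first, and lambda grows so that harder samples enter later:

```python
# Hard self-paced learning weights on hypothetical per-sample losses.
import numpy as np

def self_paced_weights(losses, lam):
    """Weight 1 for 'easy' samples (loss < lam), else 0."""
    return (losses < lam).astype(float)

losses = np.array([0.1, 0.4, 0.9, 1.5, 2.2])
for lam in [0.5, 1.0, 2.0, 3.0]:   # curriculum: admit harder samples over epochs
    print(f"lambda={lam}: weights={self_paced_weights(losses, lam)}")
```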

2019-01-21
Nicho, M., Oluwasegun, A., Kamoun, F..  2018.  Identifying Vulnerabilities in APT Attacks: A Simulated Approach. 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–4.

This research aims to identify some vulnerabilities of advanced persistent threat (APT) attacks using multiple simulated attacks in a virtualized environment. Our experimental study shows that while updating the antivirus software and the operating system with the latest patches may help in mitigating APTs, APT threat vectors could still infiltrate the strongest defenses. Accordingly, we highlight some critical areas of security concern that need to be addressed.

Cho, S., Han, I., Jeong, H., Kim, J., Koo, S., Oh, H., Park, M..  2018.  Cyber Kill Chain based Threat Taxonomy and its Application on Cyber Common Operational Picture. 2018 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

For over a decade, intelligent and persistent forms of cyber threats have been damaging organizations' cyber assets and missions. In this paper, we analyze current cyber kill chain models that explain the adversarial behavior used to perform advanced persistent threat (APT) attacks, and propose a cyber kill chain model that can be used from the viewpoint of cyber situation awareness. Based on the proposed cyber kill chain model, we propose a threat taxonomy that classifies attack tactics and techniques for each attack phase, using CAPEC and ATT&CK, which catalog the attack tactics, techniques, and procedures (TTPs) compiled by MITRE. We also implement a cyber common operational picture (CyCOP) to recognize the situation of cyberspace. The threat situation can be represented on the CyCOP by applying the cyber kill chain based threat taxonomy.

Ahmed, Chuadhry Mujeeb, Ochoa, Martin, Zhou, Jianying, Mathur, Aditya P., Qadeer, Rizwan, Murguia, Carlos, Ruths, Justin.  2018.  NoisePrint: Attack Detection Using Sensor and Process Noise Fingerprint in Cyber Physical Systems. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :483–497.

An attack detection scheme is proposed to detect data integrity attacks on sensors in Cyber-Physical Systems (CPSs). A combined fingerprint for sensor and process noise is created during the normal operation of the system. Under a sensor spoofing attack, the noise pattern deviates from the fingerprinted pattern, enabling the proposed scheme to detect attacks. To extract the noise (the difference between the expected and observed values), a representative model of the system is derived. A Kalman filter is used for state estimation. By subtracting the state estimates from the real system states, a residual vector is obtained. It is shown that in steady state the residual vector is a function of process and sensor noise. A set of time-domain and frequency-domain features is extracted from the residual vector and provided to a machine learning algorithm to identify the sensor and process. Experiments are performed on two testbeds, a real-world water treatment (SWaT) facility and a water distribution (WADI) testbed. A class of zero-alarm attacks designed for statistical detectors on SWaT is detected by the proposed scheme. It is shown that a multitude of sensors can be uniquely identified with accuracy higher than 90% based on the noise fingerprint.
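
A minimal sketch of the residual-and-features idea, assuming a scalar toy system rather than the SWaT/WADI plant models: a Kalman filter tracks the state, the innovation (residual) is the gap between each measurement and its prediction, and a few time-domain statistics of that residual act as a noise fingerprint (the paper also uses frequency-domain features):

```python
# Scalar Kalman filter residuals and simple time-domain fingerprint features (toy system).
import numpy as np

def kalman_residuals(measurements, a=1.0, c=1.0, q=1e-4, r=1e-2):
    """Scalar Kalman filter; returns the innovation (residual) sequence."""
    x_hat, p = 0.0, 1.0
    residuals = []
    for z in measurements:
        x_hat, p = a * x_hat, a * p * a + q          # predict
        resid = z - c * x_hat                        # innovation
        k = p * c / (c * p * c + r)                  # Kalman gain
        x_hat, p = x_hat + k * resid, (1 - k * c) * p  # update
        residuals.append(resid)
    return np.array(residuals)

def fingerprint(residuals):
    """A few time-domain features of the residual."""
    return {"mean": residuals.mean(), "std": residuals.std(),
            "rms": np.sqrt((residuals ** 2).mean())}

rng = np.random.default_rng(1)
z = 5.0 + rng.normal(0, 0.1, size=500)               # sensor readings around a steady state
print(fingerprint(kalman_residuals(z)[50:]))         # skip the initial transient
```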

Cho, S., Chen, G., Chun, H., Coon, J. P., O'Brien, D..  2018.  Impact of multipath reflections on secrecy in VLC systems with randomly located eavesdroppers. 2018 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.
Considering reflected light in physical layer security (PLS) is very important because even a small portion of reflected light enables an eavesdropper (ED) to acquire legitimate information. Moreover, it would be a practical strategy for an ED to be located in an outer area of the room, where the reflected light is strong, in order to escape the vigilance of a legitimate user. Therefore, in this paper, we investigate the impact of multipath reflections on PLS in visible light communication in the presence of randomly located eavesdroppers. We apply spatial point processes to characterize randomly distributed EDs. The generalized error in signal-to-noise ratio that occurs when reflections are ignored is defined as a function of the distance between the receiver and the wall. We use this error to quantify the domain of interest that needs to be considered from the secrecy viewpoint. Furthermore, we investigate how reflection affects the secrecy outage probability (SOP). It is shown that the effect of reflection on the SOP can be removed by adjusting the light emitting diode configuration. Monte Carlo simulations and numerical results are given to verify our analysis.
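
The following toy Monte Carlo sketch conveys the random-eavesdropper setup under strong simplifying assumptions (a Poisson point process in a square room, an inverse-square received-power law, no reflections, and an arbitrary target secrecy rate); it is not the paper's channel or reflection model:

```python
# Toy Monte Carlo estimate of a secrecy outage probability with PPP-distributed eavesdroppers.
import numpy as np

rng = np.random.default_rng(2)

def trial(room=5.0, density=0.1, rx=(1.5, 2.0), tx=(2.5, 2.5), noise=1e-3):
    n_ed = rng.poisson(density * room * room)           # number of eavesdroppers (PPP)
    if n_ed == 0:
        return False                                     # no eavesdropper, no outage
    ed_xy = rng.uniform(0, room, size=(n_ed, 2))
    d_ed = np.linalg.norm(ed_xy - np.array(tx), axis=1) + 0.5   # offset avoids zero distance
    d_rx = np.linalg.norm(np.array(rx) - np.array(tx)) + 0.5
    snr_ed = (1.0 / d_ed ** 2) / noise                   # crude inverse-square law
    snr_rx = (1.0 / d_rx ** 2) / noise
    secrecy_capacity = max(np.log2(1 + snr_rx) - np.log2(1 + snr_ed.max()), 0.0)
    return secrecy_capacity < 0.1                        # outage below a target secrecy rate

sop = np.mean([trial() for _ in range(2000)])
print(f"estimated secrecy outage probability: {sop:.3f}")
```
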
2019-01-16
Kimmich, J. M., Schlesinger, A., Tschaikner, M., Ochmann, M., Frank, S..  2018.  Acoustical Analysis of Coupled Rooms Applied to the Deutsche Oper Berlin. 2018 Joint Conference - Acoustics. :1–9.
The aim of the project SIMOPERA is to simulate and optimize the acoustics in large and complex rooms, with special focus on the Deutsche Oper Berlin as an example of application. Firstly, characteristic subspaces of the opera are considered, such as the orchestra pit, the stage and the auditorium. Special attention is paid to the orchestra pit, where high sound pressure levels can occur, leading to noise-related risks for the musicians. However, lowering the sound pressure level in the orchestra pit should not violate other objectives, such as the propagation of sound into the auditorium, the balance between the stage performers and the orchestra across the hall, and the mutual audibility between performers and orchestra members. For that reason, a hybrid simulation method consisting of the wave-based Finite Element Method (FEM) and the Boundary Element Method (BEM) for low frequencies and geometrical methods like the mirror source method and ray tracing for higher frequencies is developed in order to determine the relevant room acoustic quantities such as impulse response functions, reverberation time, clarity, center time etc. Measurements in the opera will continuously accompany the numerical calculations. Finally, selected constructive means for reducing the sound level in the orchestra pit will be analyzed.
2018-12-10
Oyekanlu, E..  2018.  Distributed Osmotic Computing Approach to Implementation of Explainable Predictive Deep Learning at Industrial IoT Network Edges with Real-Time Adaptive Wavelet Graphs. 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :179–188.
Challenges associated with developing analytics solutions at the edge of large-scale Industrial Internet of Things (IIoT) networks, close to where data is being generated, in most cases involve developing analytics solutions from the ground up. However, this approach increases IoT development costs and system complexities, delays time to market, and ultimately lowers the competitive advantages associated with delivering next-generation IoT designs. To overcome these challenges, existing, widely available hardware can be utilized to successfully participate in distributed edge computing for IIoT systems. In this paper, an osmotic computing approach is used to illustrate how distributed osmotic computing and existing low-cost hardware may be utilized to solve complex, compute-intensive Explainable Artificial Intelligence (XAI) deep learning problems from the edge, through the fog, to the network cloud layer of IIoT systems. At the edge layer, the C28x digital signal processor (DSP), an existing low-cost, embedded, real-time DSP that has very wide deployment and integration in several IoT industries, is used as a case study for constructing real-time graph-based Coiflet wavelets that could be used for several analytic applications, including deep learning pre-processing applications at the edge and fog layers of IIoT networks. Our implementation is the first known application of the fixed-point C28x DSP to construct Coiflet wavelets. Coiflet wavelets are constructed in the form of an osmotic microservice, using embedded low-level machine language to program the C28x at the network edge. With the graph-based approach, it is shown that an entire Coiflet wavelet distribution could be generated from only one wavelet stored in the C28x-based edge device, and this could lead to significant savings in memory at the edge of IoT networks. The Pearson correlation coefficient is used to select an edge-generated Coiflet wavelet, and the selected wavelet is used at the fog layer for pre-processing and denoising IIoT data to improve data quality for the fog-layer deep learning application. Parameters for implementing deep learning at the fog layer using LSTM networks have been determined in the cloud. For XAI, communication network noise is shown to have a significant impact on the results of predictive deep learning at the IIoT network fog layer.
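
A hedged sketch of the wavelet-selection step only, using PyWavelets on a synthetic signal rather than the fixed-point C28x implementation: several Coiflet wavelets denoise a noisy test signal, and the Pearson correlation coefficient against the clean reference picks the best candidate.

```python
# Select a Coiflet wavelet for denoising via Pearson correlation (synthetic signal).
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + rng.normal(0, 0.3, size=t.size)

def denoise(signal, wavelet, level=4, thresh=0.3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

best = max(("coif1", "coif2", "coif3"),
           key=lambda w: np.corrcoef(clean, denoise(noisy, w))[0, 1])
print("selected wavelet:", best)
```
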
Ndichu, S., Ozawa, S., Misu, T., Okada, K..  2018.  A Machine Learning Approach to Malicious JavaScript Detection using Fixed Length Vector Representation. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

To add more functionality and enhance the usability of web applications, JavaScript (JS) is frequently used. Even with the many advantages and usefulness of JS, an annoying fact is that many recent cyberattacks, such as drive-by-download attacks, exploit vulnerabilities of JS code. In general, malicious JS code is not easy to detect, because it sneakily exploits vulnerabilities of browsers and plugin software and attacks visitors of a web site unknowingly. To protect users from such threats, the development of an accurate detection system for malicious JS is needed. Conventional approaches often employ signature- and heuristic-based methods, which are prone to fail against zero-day attacks, causing many false negatives and/or false positives. To address this problem, this paper adopts a machine-learning approach to feature learning called Doc2Vec, a neural network model that can learn the context information of texts. The extracted features are given to a classifier model (e.g., an SVM or a neural network), which judges the maliciousness of a JS code. In the performance evaluation, we use the D3M Dataset (Drive-by-Download Data by Marionette) for malicious JS codes and JSUPACK for benign ones, for both training and test purposes. We then compare the performance to other feature learning methods. Our experimental results show that the proposed Doc2Vec features provide better accuracy and fast classification in malicious JS code detection compared to conventional approaches.
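
A hedged sketch of the pipeline described above, with a tiny hypothetical token corpus standing in for the D3M and JSUPACK datasets and default hyperparameters: Doc2Vec turns tokenized JS into fixed-length vectors, and an SVM judges maliciousness.

```python
# Doc2Vec feature learning + SVM classification on a toy, hand-made JS token corpus.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

corpus = [  # (tokens, label) with label 1 = malicious; contents are illustrative only
    (["var", "x", "=", "1", ";", "console", ".", "log", "(", "x", ")"], 0),
    (["document", ".", "write", "(", "unescape", "(", "'%75%72%6c'", ")", ")"], 1),
    (["function", "add", "(", "a", ",", "b", ")", "{", "return", "a", "+", "b", "}"], 0),
    (["eval", "(", "String", ".", "fromCharCode", "(", "104", ",", "105", ")", ")"], 1),
]
tagged = [TaggedDocument(tokens, [i]) for i, (tokens, _) in enumerate(corpus)]

d2v = Doc2Vec(vector_size=16, min_count=1, epochs=50)
d2v.build_vocab(tagged)
d2v.train(tagged, total_examples=d2v.corpus_count, epochs=d2v.epochs)

X = [d2v.infer_vector(tokens) for tokens, _ in corpus]
y = [label for _, label in corpus]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([d2v.infer_vector(["eval", "(", "unescape", "(", "'%61'", ")", ")"])]))
```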

2018-12-03
Cozzolino, Vittorio, Ding, Aaron Yi, Ott, Jorg, Kutscher, Dirk.  2017.  Enabling Fine-Grained Edge Offloading for IoT. Proceedings of the SIGCOMM Posters and Demos. :124–126.

In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide-and-conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitations of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.

Ogasawara, Junya, Kono, Kenji.  2017.  Nioh: Hardening The Hypervisor by Filtering Illegal I/O Requests to Virtual Devices. Proceedings of the 33rd Annual Computer Security Applications Conference. :542–552.
Vulnerabilities in hypervisors are crucial in multi-tenant clouds, since they can undermine the security of all virtual machines (VMs) consolidated on a vulnerable hypervisor. Unfortunately, 107 vulnerabilities in KVM+QEMU and 38 vulnerabilities in Xen were reported in 2016. The device-emulation layer in hypervisors is a hotbed of vulnerabilities because the code for virtualizing devices is complicated and requires knowledge of the device internals. We propose a "device request filter", called Nioh, that raises the bar for attackers to exploit the vulnerabilities in hypervisors. The key insight behind Nioh is that malicious I/O requests attempt to exploit vulnerabilities and, in many cases, violate device specifications. Nioh inspects I/O requests from VMs and rejects those that do not conform to a device specification. A device specification is modeled in Nioh as a device automaton, an extended automaton that facilitates the description of device specifications. A software framework is also provided to encapsulate the interactions between the device request filter and the underlying hypervisors. The results of our attack evaluation suggest that Nioh can defend against attacks that exploit vulnerabilities in device emulation, i.e., CVE-2015-5158, CVE-2016-1568, CVE-2016-4439, and CVE-2016-7909. This paper shows that the notorious VENOM attack can be detected and rejected by using Nioh.
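
To illustrate the filtering principle, here is a minimal, hypothetical device automaton in Python (not Nioh's framework; the request names and transition rules are invented for the example): requests from a guest are forwarded only if they follow the modeled specification, and anything else is rejected before it reaches the emulator.

```python
# Minimal device automaton: accept only request sequences allowed by the modeled specification.
class DeviceAutomaton:
    """Transitions map (state, request) -> next_state; anything else is illegal."""
    def __init__(self, transitions, start):
        self.transitions = transitions
        self.state = start

    def filter(self, request):
        nxt = self.transitions.get((self.state, request))
        if nxt is None:
            return False          # request violates the specification: reject
        self.state = nxt
        return True               # request conforms: forward to the device emulator

# Hypothetical specification: the device must be configured before data is written.
spec = {("idle", "WRITE_CONFIG"): "configured",
        ("configured", "WRITE_DATA"): "configured",
        ("configured", "RESET"): "idle"}
dev = DeviceAutomaton(spec, start="idle")
for req in ["WRITE_DATA", "WRITE_CONFIG", "WRITE_DATA", "RESET"]:
    print(req, "->", "forwarded" if dev.filter(req) else "rejected")
```
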
Molka-Danielsen, J., Engelseth, P., Olešnaníková, V., Šarafín, P., Žalman, R..  2017.  Big Data Analytics for Air Quality Monitoring at a Logistics Shipping Base via Autonomous Wireless Sensor Network Technologies. 2017 5th International Conference on Enterprise Systems (ES). :38–45.
The indoor air quality in industrial workplace buildings, e.g., air temperature, humidity and levels of carbon dioxide (CO2), plays a critical role in the perceived comfort of workers and in reported medical health. CO2 can act as an oxygen displacer, and in confined spaces humans can experience, for example, dizziness, increased heart rate and blood pressure, headaches, and in more serious cases loss of consciousness. Specialized organizations can be brought in to monitor the work environment for limited periods. However, new low-cost wireless sensor network (WSN) technologies offer potential for more continuous and autonomous assessment of industrial workplace air quality. Central to effective decision making is the data analytics approach and visualization of what is potentially big data (BD) in monitoring the air quality in industrial workplaces. This paper presents a case study that monitors air quality collected with WSN technologies. We discuss the potential BD problems. The case trials are from two workshops that are part of a large on-shore logistics base serving a regional shipping industry in Norway. This small case study demonstrates a monitoring and visualization approach for facilitating BD in decision making for health and safety in the shipping industry. We also identify other potential applications of WSN technologies and visualization of BD in workplace environments, for example, for monitoring of other substances for worker safety in high-risk industries and for quality of goods in supply chain management.
Larsson, A., Ibrahim, O., Olsson, L., Laere, J. van.  2017.  Agent based simulation of a payment system for resilience assessments. 2017 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). :314–318.

We provide an agent-based simulation model of the Swedish payment system. The simulation model is to be used to analyze the consequences of loss of functionality, or disruptions, of the payment system for the food and fuel supply chains as well as the banking sector. We propose a gaming simulation approach, using a computer-based role-playing game, to explore the collaborative responses of the key actors, in order to evoke and facilitate collective resilience.

2018-11-28
Porcheron, Martin, Fischer, Joel E., McGregor, Moira, Brown, Barry, Luger, Ewa, Candello, Heloisa, O'Hara, Kenton.  2017.  Talking with Conversational Agents in Collaborative Action. Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. :431–436.

This one-day workshop intends to bring together both academics and industry practitioners to explore collaborative challenges in speech interaction. Recent improvements in speech recognition and computing power have led to conversational interfaces being introduced to many of the devices we use every day, such as smartphones, watches, and even televisions. These interfaces allow us to get things done, often by just speaking commands, relying on a reasonably well understood single-user model. While research on speech recognition is well established, the social implications of these interfaces remain underexplored, such as how we socialise, work, and play around such technologies, and how these might be better designed to support collaborative collocated talk-in-action. Moreover, the advent of new products such as the Amazon Echo and Google Home, which are positioned as supporting multi-user interaction in collocated environments such as the home, makes exploring the social and collaborative challenges around these products a timely topic. In the workshop, we will review current practices and reflect upon prior work on studying talk-in-action and collocated interaction. We wish to begin a dialogue that takes on the renewed interest in research on spoken interaction with devices, grounded in the existing practices of the CSCW community.

2018-11-19
Suzuki, Ippei, Ochiai, Yoichi.  2017.  Unphotogenic Light: High-Speed Projection Method to Prevent Secret Photography by Small Cameras. ACM SIGGRAPH 2017 Posters. :65:1–65:2.
We present a new method to protect projected content from secret photography using high-speed projection. Protection techniques for digital copies have been discussed over many years from the viewpoint of data protection. However, content displayed by general display techniques is not only visible to the human eye but can also be captured by cameras. Therefore, projected content is at times secretly captured by small malicious cameras, even when protection techniques for digital copies are adopted. In this study, we aim to realize a protectable projection method that allows people to observe content with their eyes but not record it with camera devices.
Otoum, S., Kantarci, B., Mouftah, H. T..  2017.  Hierarchical Trust-Based Black-Hole Detection in WSN-Based Smart Grid Monitoring. 2017 IEEE International Conference on Communications (ICC). :1–6.

Wireless Sensor Networks (WSNs) have been widely adopted to monitor various ambient conditions, including critical infrastructures. Since the power grid is considered a critical infrastructure, and the smart grid has appeared as a viable technology to introduce more reliability, efficiency, controllability, and safety to the traditional power grid, WSNs have been envisioned as potential tools to monitor the smart grid. The motivation behind smart grid monitoring is to improve its emergency preparedness and resilience. Despite their effectiveness in monitoring critical infrastructures, WSNs also introduce various security vulnerabilities due to their open nature and unreliable wireless links. In this paper, we focus on the Black-Hole (B-H) attack. To cope with this, we propose a hierarchical trust-based WSN monitoring model for smart grid equipment in order to detect B-H attacks. Malicious nodes are detected by testing the trade-off between trust and dropped packet ratios for each Cluster Head (CH). We select different thresholds for the Packets Dropped Ratio (PDR) in order to test the network behaviour with them. We set four different thresholds (20%, 30%, 40%, and 50%). The 50% threshold has been shown to reach system stability in early periods with the least number of re-clustering operations.
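
A simplified sketch of the packets-dropped-ratio check only (not the full hierarchical trust model; node IDs and packet counts are hypothetical): a cluster head flags a member as a suspected black hole when its observed drop ratio exceeds a configurable threshold.

```python
# Flag suspected black-hole nodes from per-node forwarding statistics.
def suspected_black_holes(forward_stats, pdr_threshold=0.4):
    """forward_stats: dict node_id -> (packets_received, packets_forwarded)."""
    flagged = []
    for node, (received, forwarded) in forward_stats.items():
        dropped_ratio = (received - forwarded) / received if received else 0.0
        if dropped_ratio > pdr_threshold:
            flagged.append((node, dropped_ratio))
    return flagged

stats = {"n1": (100, 95), "n2": (120, 50), "n3": (80, 78)}
print(suspected_black_holes(stats, pdr_threshold=0.4))   # n2 dropped ~58% of its packets
```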

Wang, X., Oxholm, G., Zhang, D., Wang, Y..  2017.  Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). :7178–7186.

Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.

Grinstein, E., Duong, N. Q. K., Ozerov, A., Pérez, P..  2018.  Audio Style Transfer. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :586–590.

"Style transfer" among images has recently emerged as a very active research topic, fuelled by the power of convolutional neural networks (CNNs), and has fast become a very popular technology in social media. This paper investigates the analogous problem in the audio domain: how to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer methods, the proposed process is initialized by the target content instead of random noise, and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.

Picek, Stjepan, Hemberg, Erik, O'Reilly, Una-May.  2017.  If You Can't Measure It, You Can't Improve It: Moving Target Defense Metrics. Proceedings of the 2017 Workshop on Moving Target Defense. :115–118.
We propose new metrics, drawing inspiration from the optimization domain, that can be used to better characterize the effectiveness of moving target defenses. In addition, we propose a Network Neighborhood Partitioning algorithm that can help to measure the influence of MTDs more precisely. The techniques proposed here are generic and could be combined with existing metrics. The results demonstrate how additional information about the effectiveness of defenses can be obtained, as well as how network neighborhood partitioning helps to improve the granularity of metrics.
We introduce a new cybersecurity project named RIVALS. RIVALS will assist in developing network defense strategies through modeling adversarial network attack and defense dynamics. RIVALS will focus on peer-to-peer networks and use coevolutionary algorithms. In this contribution, we describe RIVALS' current suite of coevolutionary algorithms, which use archiving to maintain progressive exploration and which support different solution concepts as fitness metrics. We compare and contrast their effectiveness by executing a standard coevolutionary benchmark (Compare-on-one) and RIVALS simulations on 3 different network topologies. Currently, we model denial of service (DOS) attack strategies by the attacker selecting one or more network servers to disable for some duration. Defenders can choose one of three different network routing protocols: shortest path, flooding, and a peer-to-peer ring overlay, to try to maintain their performance. Attack completion and resource cost minimization serve as attacker objectives. Mission completion and resource cost minimization are the reciprocal defender objectives. Our experiments show that existing algorithms either sacrifice execution speed or forgo the assurance of consistent results. rIPCA, our adaptation of a known coevolutionary algorithm named IPCA, is able to more consistently produce high-quality results, albeit without IPCA's guarantees for results with monotonically increasing performance, and without sacrificing speed.
We introduce a new cybersecurity project named RIVALS. RIVALS will assist in developing network defense strategies through modeling adversarial network attack and defense dynamics. RIVALS will focus on peer-to-peer networks and use coevolutionary algorithms. In this contribution, we describe RIVALS' current suite of coevolutionary algorithms that use archiving to maintain progressive exploration and that support different solution concepts as fitness metrics. We compare and contrast their effectiveness by executing a standard coevolutionary benchmark (Compare-on-one) and RIVALS simulations on 3 different network topologies. Currently, we model denial of service (DOS) attack strategies by the attacker selecting one or more network servers to disable for some duration. Defenders can choose one of three different network routing protocols: shortest path, flooding and a peer-to-peer ring overlay to try to maintain their performance. Attack completion and resource cost minimization serve as attacker objectives. Mission completion and resource cost minimization are the reciprocal defender objectives. Our experiments show that existing algorithms either sacrifice execution speed or forgo the assurance of consistent results. rIPCA, our adaptation of a known coevolutionary algorithm named IPC A, is able to more consistently produce high quality results, albeit without IPCA's guarantees for results with monotonically increasing performance, without sacrificing speed.