Biblio

Found 19604 results

2018-08-23
Blenn, Norbert, Ghiëtte, Vincent, Doerr, Christian.  2017.  Quantifying the Spectrum of Denial-of-Service Attacks Through Internet Backscatter. Proceedings of the 12th International Conference on Availability, Reliability and Security. :21:1–21:10.
Denial of Service (DoS) attacks are a major threat currently observable in computer networks and especially the Internet. In such an attack, a malicious party tries either to break a service running on a server or to exhaust the capacity or bandwidth of the victim, preventing customers from effectively using the service. Recent reports show that the total number of Distributed Denial of Service (DDoS) attacks is steadily growing, with "mega-attacks" peaking at hundreds of gigabits per second (Gbps). In this paper, we provide a quantification of DDoS attacks in size and duration that goes beyond these outliers reported in the media. We find that these mega attacks do exist, but the bulk of attacks is in practice only a fraction of these frequently reported values. We further show that it is feasible to collect meaningful backscatter traces using surprisingly small telescopes, thereby enabling a broader audience to perform attack intelligence research.
2018-05-02
Youssef, Ayman, Shosha, Ahmed F..  2017.  Quantitave Dynamic Taint Analysis of Privacy Leakage in Android Arabic Apps. Proceedings of the 12th International Conference on Availability, Reliability and Security. :58:1–58:9.
Android smartphones are ubiquitous all over the world, and organizations that turn profits out of mining users' personal information are on the rise. Many users are not aware of the risks of accepting permissions from Android apps, and the continued state of insecurity, manifested in an increased level of breaches across all large organizations, means that personal information is falling into the hands of malicious actors. This paper aims at shedding light on privacy leakage in apps that target a specific demographic: Arabs. The research takes into consideration apps that cater to specific cultural aspects of this region and identifies how they could be abusing the trust given to them by unsuspecting users. Dynamic taint analysis is used in a virtualized environment to analyze top free apps, selected based on popularity in the Google Play store. The information presented highlights how different categories of apps leak different categories of private information.
2018-08-23
Zhang, Kai, Liu, Chuanren, Zhang, Jie, Xiong, Hui, Xing, Eric, Ye, Jieping.  2017.  Randomization or Condensation?: Linear-Cost Matrix Sketching Via Cascaded Compression Sampling. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :615–623.
Matrix sketching aims at finding compact representations of a matrix while simultaneously preserving most of its properties, and is a fundamental building block in modern scientific computing. Randomized algorithms represent the state of the art and have attracted huge interest from the fields of machine learning, data mining, and theoretical computer science. However, they still require the use of the entire input matrix in producing the desired factorizations, which can be a major computational and memory bottleneck in truly large problems. In this paper, we uncover an interesting theoretical connection between matrix low-rank decomposition and lossy signal compression, based on which a cascaded compression sampling framework is devised to approximate an m-by-n matrix in only O(m+n) time and space. Indeed, the proposed method accesses only a small number of matrix rows and columns, which significantly improves the memory footprint. Meanwhile, by sequentially teaming two rounds of approximation procedures and upgrading the sampling strategy from a uniform probability to more sophisticated, encoding-oriented sampling, significant algorithmic boosting is achieved to uncover more granular structures in the data. Empirical results on a wide spectrum of real-world, large-scale matrices show that while taking only linear time and space, the accuracy of our method rivals that of state-of-the-art randomized algorithms consuming a quadratic, O(mn), amount of resources.
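To make the row/column-sampling idea concrete, here is a generic CUR-style sketch in Python; it illustrates uniform sampling only, not the paper's cascaded, encoding-oriented scheme, and all names and parameters are hypothetical:

```python
import numpy as np

def cur_sketch(A, c, r, seed=0):
    """Generic CUR-style sketch: approximate A from a random subset of
    its columns and rows. Illustrative baseline only; the paper upgrades
    uniform sampling to encoding-oriented sampling."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    cols = rng.choice(n, size=c, replace=False)  # sampled column indices
    rows = rng.choice(m, size=r, replace=False)  # sampled row indices
    C = A[:, cols]                  # m x c: sampled columns
    R = A[rows, :]                  # r x n: sampled rows
    W = A[np.ix_(rows, cols)]       # r x c: their intersection
    U = np.linalg.pinv(W)           # linking matrix, so A ~ C @ U @ R
    return C, U, R

# a rank-10 test matrix: 40 sampled rows/columns suffice to recover it
A = np.random.randn(1000, 10) @ np.random.randn(10, 800)
C, U, R = cur_sketch(A, c=40, r=40)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # near machine precision
```

When the sampled intersection W has the same rank as A, the identity A = C W⁺ R holds exactly, which is why so few rows and columns recover the low-rank structure.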
2018-09-05
Wang, J., Shi, D., Li, Y., Chen, J., Duan, X..  2017.  Realistic measurement protection schemes against false data injection attacks on state estimators. 2017 IEEE Power & Energy Society General Meeting. :1–5.
False data injection attacks (FDIA) on state estimators are an imminent cyber-physical security threat. Fortunately, it has been proved that if a set of measurements is strategically selected and protected, no FDIA will remain undetectable. In this paper, the metric Return on Investment (ROI) is introduced to evaluate the overall returns of alternative measurement protection schemes (MPS). By setting maximum total ROI as the optimization objective, the previously ignored cost-benefit issue is taken into account to derive a realistic MPS for power utilities. The optimization problem is transformed into the Steiner tree problem in graph theory, where a tree-pruning-based algorithm is used to reduce the computational complexity and find a quasi-optimal solution with acceptable approximations. The correctness and efficiency of the algorithm are verified by case studies.
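As a rough sketch of how the Steiner-tree step could be prototyped (hypothetical 5-bus topology and costs; networkx's bundled 2-approximation stands in for the paper's tree-pruning algorithm):

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Hypothetical 5-bus grid graph; edge weights stand in for the cost of
# protecting the measurements along each branch.
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 3.0), (2, 3, 1.0), (3, 4, 2.0),
    (4, 5, 4.0), (1, 5, 2.5), (2, 4, 2.2),
])
terminals = [1, 3, 5]  # buses whose measurements must be jointly protected

# 2-approximation of the minimum-cost Steiner tree spanning the terminals
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges(data="weight")))
```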
2018-06-07
Wang, Ying, Li, Huawei, Li, Xiaowei.  2017.  Real-Time Meets Approximate Computing: An Elastic CNN Inference Accelerator with Adaptive Trade-off Between QoS and QoR. Proceedings of the 54th Annual Design Automation Conference 2017. :33:1–33:6.
Due to the recent progress in deep learning and neural acceleration architectures, specialized deep neural network or convolutional neural network (CNN) accelerators are expected to provide an energy-efficient solution for real-time vision/speech processing, recognition, and a wide spectrum of approximate computing applications. In addition to their wide applicability, we also find that the fascinating combination of deterministic performance and high energy-efficiency makes such deep learning (DL) accelerators ideal candidates as application-processor IPs in embedded SoCs concerned with real-time processing. However, unlike traditional accelerator designs, DL accelerators introduce a new design trade-off between real-time processing (QoS) and computation approximation (QoR) into embedded systems. This work proposes an elastic CNN acceleration architecture that automatically adapts to a hard QoS constraint by exploiting the error resilience of typical approximate computing workloads. For the first time, the proposed design, including network tuning-and-mapping software and reconfigurable accelerator hardware, aims to reconcile the design constraints of QoS and Quality of Result (QoR), which are respectively the key concerns in real-time and approximate computing. Experiments show that the proposed architecture enables the embedded system to work flexibly in an expanded operating space, significantly enhances its real-time capability, and maximizes the energy efficiency of the system within the user-specified QoS-QoR constraint through self-reconfiguration.
Tymchuk, Yuriy, Ghafari, Mohammad, Nierstrasz, Oscar.  2017.  Renraku: The One Static Analysis Model to Rule Them All. Proceedings of the 12th Edition of the International Workshop on Smalltalk Technologies. :13:1–13:10.
Most static analyzers are monolithic applications that define their own ways to analyze source code and present the results. Therefore, aggregating multiple static analyzers into a single tool, or integrating a new analyzer into existing tools, requires a significant amount of effort. Over the last few years, we cultivated Renraku — a static analysis model that acts as a mediator between the static analyzers and the tools that present the reports. When used by both analysis and tool developers, this single quality model can reduce the cost both of introducing a new type of analysis to existing tools and of creating a tool that relies on existing analyzers.
2020-07-20
Marakis, Evangelos, van Harten, Wouter, Uppu, Ravitej, Vos, Willem L., Pinkse, Pepijn W. H..  2017.  Reproducibility of artificial multiple scattering media. 2017 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC). :1–1.
Summary form only given. Authentication of people or objects using physical keys is insecure against secret duplication. Physical unclonable functions (PUFs) are special physical keys that are assumed to be unclonable due to the large number of degrees of freedom in their manufacturing [1]. Opaque scattering media, such as white paint and teeth, comprise millions of nanoparticles in a random arrangement. Under coherent light illumination, the multiple scattering from these nanoparticles gives rise to a complex interference resulting in a speckle pattern. The speckle pattern is seemingly random but highly sensitive to the exact position and size of the nanoparticles in the given piece of opaque scattering medium [2], thereby realizing an ideal optical PUF. These optical PUFs have enabled applications such as quantum-secure authentication (QSA) and communication [3, 4].
2018-11-19
Xiaohe, Cao, Liuping, Feng, Peng, Cao, Jianhua, Hu, Jianle, Zhu.  2017.  Research on Anti-Counterfeiting Technology of Print Image Based on the Metameric Properties. Proceedings of the 2017 2nd International Conference on Communication and Information Systems. :284–289.
High-precision scanners, copiers, and other equipment can reproduce an image so faithfully that the copy is nearly indistinguishable from the original, which poses a real threat to the copyright of the manuscript. In view of this phenomenon, a design method for metameric security images with anti-counterfeiting and anti-copy functions is presented in this paper. Metameric security images are designed and printed based on the theory of metameric color and the spectral characteristics of the four-color inks. The anti-counterfeiting function is realized through differences in the K ink content within the CMYK proportions. In the metameric security images, the trademark image is rendered in CMYK color and is visible under sunlight, while the anti-counterfeiting image is rendered in monochrome K ink and is visible under infrared light. The experimental results show that, when the metameric security images are observed with an infrared detection device under an infrared light source, the hidden information is clearly revealed, realizing the anti-counterfeiting function. The method can be applied to trademark image security in various industries.
2018-01-23
Joo, Moon-Ho, Yoon, Sang-Pil, Kim, Sahng-Yoon, Kwon, Hun-Yeong.  2017.  Research on Distribution of Responsibility for De-Identification Policy of Personal Information. Proceedings of the 18th Annual International Conference on Digital Government Research. :74–83.
With the coming of the age of big data, efforts to institutionalize the de-identification of personal information, protecting privacy while at the same time allowing its use, have been actively carried out, and many countries are already at the stage of implementing and establishing de-identification policies. Even with such efforts to protect and use personal information at the same time, however, the danger posed by re-identification of de-identified information is real enough to warrant serious consideration of a mechanism for managing such risks, as well as a mechanism for distributing the responsibilities and liabilities that follow from these risks in the event of accidents and incidents involving the invasion of privacy. So far, most countries implementing de-identification policies have focused on defining what de-identification is and on the exemption requirements that allow free use of de-identified personal information; in fact, there seems to be a lack of discussion and consideration of how to distribute the responsibility for the risks and liabilities involved in the de-identification of personal information. This study takes a look at various de-identification policies worldwide and examines them from the perspective of risk-liability theory. The constituencies of the de-identification policies are also identified in order to analyze the roles and responsibilities of each, thereby providing the theoretical basis on which to initiate discussions on the distribution of burdens and responsibilities arising from de-identification policies.
2018-03-05
Shu, F., Li, M., Chen, S., Wang, X., Li, F..  2017.  Research on Network Security Protection System Based on Dynamic Modeling. 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :1602–1605.
A dynamic modeling method for network security vulnerabilities is proposed, composed of a safety evaluation model, a risk model for intrusion events, and a vulnerability risk model. By identifying vulnerability values through dynamic forms, the model can improve the integration between the vulnerability scanning system, the intrusion prevention system, and the security configuration verification system. Based on this model, the network protection system most suitable for its users can be formed, and the protection capability of the network protection system can be improved.
2018-05-16
Yang, Fan, Chien, Andrew A., Gunawi, Haryadi S..  2017.  Resilient Cloud in Dynamic Resource Environments. Proceedings of the 2017 Symposium on Cloud Computing. :627–627.
Traditional cloud stacks are designed to tolerate random, small-scale failures, and can successfully deliver highly-available cloud services and interactive services to end users. However, they fail to survive large-scale disruptions caused by major power outages, cyber-attacks, or region/zone failures. Such changes trigger cascading failures and significant service outages. We propose to understand the reasons for these failures, and to create reliable data services that can efficiently and robustly tolerate such large-scale resource changes. We believe cloud services will need to survive frequent, large dynamic resource changes in the future to be highly available. (1) Significant new challenges to cloud reliability are emerging, including cyber-attacks, power/network outages, and so on. For example, human error disrupted the Amazon S3 service on 02/28/17 [2]. Recently, hackers have even been attacking electric utilities, which may lead to more outages [3, 6]. (2) Increased attention to resource cost optimization will increase usage dynamism, as with Amazon Spot Instances [1]. (3) Availability-focused cloud applications will increasingly practice continuous testing to ensure they have no hidden source of catastrophic failure. For example, the Netflix Simian Army can simulate the outages of individual servers, and even an entire AWS region [4]. (4) Cloud applications with dynamic flexibility will reap numerous benefits, such as flexible deployments, managing cost arbitrage and reliability arbitrage across cloud providers and datacenters, etc. Using Apache Cassandra [5] as the model system, we characterize its failure behavior under dynamic datacenter-scale resource changes. Each datacenter is volatile and randomly shut down with a given duty factor. We simulate a read-only workload on a quorum-based system deployed across multiple datacenters, varying (1) the system scale, (2) the fraction of volatile datacenters, and (3) the duty factor of the volatile datacenters. We explore the space of various configurations, including replication factors and consistency levels, and measure the service availability (% of succeeded requests) and replication overhead (number of total replicas). Our results show that, in a volatile resource environment, the current replication and quorum protocols in Cassandra-like systems cannot provide high availability and consistency with low replication overhead. Our contributions include: (1) a detailed characterization of failures under dynamic datacenter-scale resource changes, showing that the existing protocols in quorum-based systems cannot achieve high availability and consistency with low replication cost; (2) a study of the best achievable availability of data services in dynamic datacenter-scale resource environments.
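A back-of-the-envelope Monte Carlo of the quorum-availability question, under a simplified model with hypothetical parameters (one replica per datacenter, majority read quorum); an illustrative sketch, not the authors' Cassandra experiment:

```python
import random

def read_availability(n_dcs, volatile_frac, duty, rf, trials=100_000, seed=1):
    """Estimate the fraction of reads that can assemble a majority quorum
    when some datacenters are volatile (up only `duty` of the time).
    One replica per datacenter, replication factor rf, quorum = rf//2 + 1."""
    rng = random.Random(seed)
    n_volatile = int(n_dcs * volatile_frac)
    quorum = rf // 2 + 1
    ok = 0
    for _ in range(trials):
        # always-on datacenters first, then volatile ones sampled by duty factor
        up = [True] * (n_dcs - n_volatile) + \
             [rng.random() < duty for _ in range(n_volatile)]
        replicas = rng.sample(range(n_dcs), rf)  # placement of the rf replicas
        if sum(up[d] for d in replicas) >= quorum:
            ok += 1
    return ok / trials

for rf in (3, 5, 7):
    print(rf, read_availability(n_dcs=10, volatile_frac=0.5, duty=0.7, rf=rf))
```

Even this toy model shows the trade-off the abstract describes: raising the replication factor buys availability only at a steep replication-overhead cost when half the datacenters are volatile.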
2018-09-12
Park, Sangdon, Weimer, James, Lee, Insup.  2017.  Resilient Linear Classification: An Approach to Deal with Attacks on Training Data. Proceedings of the 8th International Conference on Cyber-Physical Systems. :155–164.
Data-driven techniques are used in cyber-physical systems (CPS) for controlling autonomous vehicles, handling demand responses for energy management, and modeling human physiology for medical devices. These data-driven techniques extract models from training data, where their performance is often analyzed with respect to random errors in the training data. However, if the training data is maliciously altered by attackers, the effect of these attacks on the learning algorithms underpinning data-driven CPS have yet to be considered. In this paper, we analyze the resilience of classification algorithms to training data attacks. Specifically, a generic metric is proposed that is tailored to measure resilience of classification algorithms with respect to worst-case tampering of the training data. Using the metric, we show that traditional linear classification algorithms are resilient under restricted conditions. To overcome these limitations, we propose a linear classification algorithm with a majority constraint and prove that it is strictly more resilient than the traditional algorithms. Evaluations on both synthetic data and a real-world retrospective arrhythmia medical case-study show that the traditional algorithms are vulnerable to tampered training data, whereas the proposed algorithm is more resilient (as measured by worst-case tampering).
2017-12-20
Ren, H., Jiang, F., Wang, H..  2017.  Resource allocation based on clustering algorithm for hybrid device-to-device networks. 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). :1–6.
In order to improve the spectrum utilization rate of Device-to-Device (D2D) communication, we study the hybrid resource allocation problem, which allows both the resource reuse and resource dedicated modes to work simultaneously. Meanwhile, multiple D2D devices are permitted to share uplink cellular resources with designated cellular user equipment (CUE). Combined with the transmission requirements of different users, an optimized resource allocation problem is built, which is an NP-hard problem. A heuristic greedy throughput maximization (HGTM) approach based on a clustering algorithm is then proposed to solve the above problem. Numerical results demonstrate that the proposed HGTM outperforms existing algorithms in sum throughput, CUEs' SINR performance, and the number of accessed D2D devices.
2018-12-10
Maas, Martin, Asanović, Krste, Kubiatowicz, John.  2017.  Return of the Runtimes: Rethinking the Language Runtime System for the Cloud 3.0 Era. Proceedings of the 16th Workshop on Hot Topics in Operating Systems. :138–143.
The public cloud is moving to a Platform-as-a-Service model where services such as data management, machine learning or image classification are provided by the cloud operator while applications are written in high-level languages and leverage these services. Managed languages such as Java, Python or Scala are widely used in this setting. However, while these languages can increase productivity, they are often associated with problems such as unpredictable garbage collection pauses or warm-up overheads. We argue that the reason for these problems is that current language runtime systems were not initially designed for the cloud setting. To address this, we propose seven tenets for designing future language runtime systems for cloud data centers. We then outline the design of a general substrate for building such runtime systems, based on these seven tenets.
2017-12-20
Rogowski, R., Morton, M., Li, F., Monrose, F., Snow, K. Z., Polychronakis, M..  2017.  Revisiting Browser Security in the Modern Era: New Data-Only Attacks and Defenses. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :366–381.
The continuous discovery of exploitable vulnerabilities in popular applications (e.g., web browsers and document viewers), along with their heightening protections against control flow hijacking, has opened the door to an often neglected attack strategy, namely, data-only attacks. In this paper, we demonstrate the practicality of the threat posed by data-only attacks that harness the power of memory disclosure vulnerabilities. To do so, we introduce memory cartography, a technique that simplifies the construction of data-only attacks in a reliable manner. Specifically, we show how an adversary can use a provided memory mapping primitive to navigate through process memory at runtime, and safely reach security-critical data that can then be modified at will. We demonstrate this capability by using our cross-platform memory cartography framework implementation to construct data-only exploits against Internet Explorer and Chrome. The outcome of these exploits ranges from simple HTTP cookie leakage, to the alteration of the same origin policy for targeted domains, which enables the cross-origin execution of arbitrary script code. The ease with which we can undermine the security of modern browsers stems from the fact that although isolation policies (such as the same origin policy) are enforced at the script level, these policies are not well reflected in the underlying sandbox process models used for compartmentalization. This gap exists because the complex demands of today's web functionality make the goal of enforcing the same origin policy through process isolation a difficult one to realize in practice, especially when backward compatibility is a priority (e.g., for support of cross-origin IFRAMEs). While fixing the underlying problems likely requires a major refactoring of the security architecture of modern browsers (in the long term), we explore several defenses, including global variable randomization, that can limit the power of the attacks presented herein.
2018-09-28
Rizomiliotis, Panagiotis, Molla, Eirini, Gritzalis, Stefanos.  2017.  REX: A Searchable Symmetric Encryption Scheme Supporting Range Queries. Proceedings of the 2017 on Cloud Computing Security Workshop. :29–37.
Searchable Symmetric Encryption (SSE) is a mechanism that facilitates search over encrypted data that are outsourced to an untrusted server. SSE schemes are practical as they trade off a measure of security for efficiency. However, the supported functionalities are mainly limited to single-keyword queries. In this paper, we present a new efficient SSE scheme, called REX, that supports range queries. REX is a non-interactive (single round) and response-hiding scheme. It has optimal communication and search computation complexity, while being much more secure than traditional Order Preserving Encryption based range SSE schemes.
2018-07-06
Liu, Chang, Li, Bo, Vorobeychik, Yevgeniy, Oprea, Alina.  2017.  Robust Linear Regression Against Training Data Poisoning. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. :91–102.
The effectiveness of supervised learning techniques has made them ubiquitous in research and practice. In high-dimensional settings, supervised learning commonly relies on dimensionality reduction to improve performance and identify the most important factors in predicting outcomes. However, the economic importance of learning has made it a natural target for adversarial manipulation of training data, which we term poisoning attacks. Prior approaches to robust supervised learning rely on strong assumptions about the nature of the feature matrix, such as feature independence and sub-Gaussian noise with low variance. We propose an integrated method for robust regression that relaxes these assumptions, assuming only that the feature matrix can be well approximated by a low-rank matrix. Our techniques integrate improved robust low-rank matrix approximation and robust principal component regression, and yield strong performance guarantees. Moreover, we experimentally show that our methods significantly outperform the state of the art in both running time and prediction error.
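For intuition, here is a simple trimmed-least-squares baseline against response poisoning; it is a standard robust-regression heuristic, not the paper's low-rank method, and all parameters are illustrative:

```python
import numpy as np

def trimmed_lstsq(X, y, keep=0.8, iters=10):
    """Iteratively refit OLS on the `keep` fraction of points with the
    smallest residuals; a crude defense against a minority of poisoned rows."""
    idx = np.arange(len(y))
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        resid = np.abs(X @ w - y)
        idx = np.argsort(resid)[: int(keep * len(y))]  # keep best-fitting rows
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
y[:20] += 50.0                      # 10% poisoned responses
print(trimmed_lstsq(X, y))          # close to w_true despite the poisoning
```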
2018-06-20
Aslanyan, H., Avetisyan, A., Arutunian, M., Keropyan, G., Kurmangaleev, S., Vardanyan, V..  2017.  Scalable Framework for Accurate Binary Code Comparison. 2017 Ivannikov ISPRAS Open Conference (ISPRAS). :34–38.
Comparison of two binary files has many practical applications: the ability to detect programmatic changes between two versions, the ability to find old versions of statically linked libraries to prevent the use of well-known bugs, malware analysis, etc. In this article, a framework for the comparison of binary files is presented. The framework uses the IdaPro [1] disassembler and the Binnavi [2] platform to recover the structure of the target program and represent it as a call graph (CG). A program dependence graph (PDG) corresponds to each vertex of the CG. The proposed comparison algorithm consists of two main stages. At the first stage, several heuristics are applied to find exact matches. Two functions are matched if at least one of the calculated heuristics is the same and unique in both binaries. At the second stage, backward and forward slicing is applied to matched vertices of the CG to find further matches. According to empirical results, the heuristic method is effective and has high matching quality for unchanged or slightly modified functions. In contrast, to match heavily modified functions, binary code clone detection is used, based on finding the maximum common subgraph for a pair of PDGs. To achieve high performance on extensive binaries, the whole matching process is parallelized. The framework is tested on a number of real-world libraries, such as python, openssh, openssl, libxml2, rsync, php, etc. Results show that in most cases more than 95% of functions are truly matched. The tool is scalable due to the parallelization of the function matching process and the generation of PDGs and CGs.
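The first-stage rule, matching two functions when a heuristic value is identical and unique in both binaries, can be sketched in a few lines (the feature dictionaries below are hypothetical stand-ins for values computed from the disassembly):

```python
from collections import Counter

def unique_matches(funcs_a, funcs_b, heuristics):
    """Match functions whose heuristic value is identical AND unique in
    both binaries. funcs_a/funcs_b: {function_name: {heuristic: value}}."""
    matches = {}
    for h in heuristics:
        vals_a = Counter(f[h] for f in funcs_a.values())
        vals_b = Counter(f[h] for f in funcs_b.values())
        for name_a, feats in funcs_a.items():
            v = feats[h]
            if vals_a[v] == 1 and vals_b[v] == 1:   # unique on both sides
                name_b = next(n for n, f in funcs_b.items() if f[h] == v)
                matches.setdefault(name_a, name_b)
    return matches

a = {"f1": {"ncalls": 3, "size": 120}, "f2": {"ncalls": 7, "size": 80}}
b = {"g1": {"ncalls": 3, "size": 124}, "g2": {"ncalls": 7, "size": 80}}
print(unique_matches(a, b, ["ncalls"]))  # {'f1': 'g1', 'f2': 'g2'}
```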
Hassen, Mehadi, Chan, Philip K..  2017.  Scalable Function Call Graph-based Malware Classification. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :239–248.
In an attempt to preserve the structural information in malware binaries during feature extraction, function call graph-based features have been used in various research works in malware classification. However, the approach usually employed when performing classification on these graphs is based on computing graph similarity using computationally intensive techniques. Due to this, much of the previous work in this area incurred a large performance overhead and does not scale well. In this paper, we propose a linear-time function call graph (FCG) vector representation based on function clustering that has significant performance gains in addition to improved classification accuracy. We also show how this representation enables using graph features together with other non-graph features.
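A minimal sketch of the cluster-based FCG vector idea (the hashing-based clustering and the toy functions below are hypothetical stand-ins; the paper derives clusters from learned function features):

```python
import zlib
from collections import Counter

def fcg_vector(call_edges, func_features, n_clusters=64):
    """Linear-time FCG representation: assign each function to a cluster
    (here via a cheap deterministic hash of its feature tuple), then count
    (caller_cluster, callee_cluster) pairs to obtain a fixed-size sparse
    vector that any standard classifier can consume."""
    cluster = {f: zlib.crc32(repr(feats).encode()) % n_clusters
               for f, feats in func_features.items()}
    return Counter((cluster[src], cluster[dst]) for src, dst in call_edges)

# hypothetical mini-binary: three functions and their call edges
features = {"main": ("push", "call"), "crypt": ("xor", "rol"), "send": ("call",)}
edges = [("main", "crypt"), ("main", "send"), ("crypt", "send")]
print(fcg_vector(edges, features))
```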
2018-06-11
Fujiwara, Yasuhiro, Marumo, Naoki, Blondel, Mathieu, Takeuchi, Koh, Kim, Hideaki, Iwata, Tomoharu, Ueda, Naonori.  2017.  Scaling Locally Linear Embedding. Proceedings of the 2017 ACM International Conference on Management of Data. :1479–1492.
Locally Linear Embedding (LLE) is a popular approach to dimensionality reduction as it can effectively represent nonlinear structures of high-dimensional data. For dimensionality reduction, it computes a nearest neighbor graph from a given dataset where edge weights are obtained by applying the Lagrange multiplier method, and it then computes eigenvectors of the LLE kernel where the edge weights are used to obtain the kernel. Although LLE is used in many applications, its computation cost is significantly high. This is because, in obtaining edge weights, its computation cost is cubic in the number of edges to each data point. In addition, the computation cost in obtaining the eigenvectors of the LLE kernel is cubic in the number of data points. Our approach, Ripple, is based on two ideas: (1) it incrementally updates the edge weights by exploiting the Woodbury formula and (2) it efficiently computes eigenvectors of the LLE kernel by exploiting the LU decomposition-based inverse power method. Experiments show that Ripple is significantly faster than the original approach of LLE by guaranteeing the same results of dimensionality reduction.
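For reference, a plain (non-accelerated) LLE baseline; Ripple's contribution is to speed up the two bottleneck steps marked below, which this sketch performs naively:

```python
import numpy as np

def lle_embed(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Plain LLE baseline. Ripple accelerates step 2 (weights) with
    Woodbury-based incremental updates and step 3 (eigenvectors) with an
    LU-decomposition-based inverse power method; here both are naive."""
    n = X.shape[0]
    # 1. nearest-neighbor graph (brute force for clarity)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # 2. reconstruction weights: constrained least squares per point
    #    (the Lagrange multiplier step mentioned in the abstract)
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # centered neighbors
        G = Z @ Z.T                                   # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularization
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                   # weights sum to 1
    # 3. bottom eigenvectors of (I - W)^T (I - W), skipping the constant one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

Y = lle_embed(np.random.rand(200, 5))
print(Y.shape)  # (200, 2)
```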
2017-10-27
Laszka, Aron, Abbas, Waseem, Koutsoukos, Xenofon.  2017.  Scheduling Battery-Powered Sensor Networks for Minimizing Detection Delays. IEEE Communications Letters.
Sensor networks monitoring spatially-distributed physical systems often comprise battery-powered sensor devices. To extend lifetime, battery power may be conserved using sleep scheduling: activating and deactivating some of the sensors from time to time. Scheduling sensors with the goal of maximizing average coverage, that is, the average fraction of time for which each monitoring target is covered by some active sensor, has been studied extensively. However, many applications also require time-critical monitoring in the sense that one has to minimize the average delay until an unpredictable change or event at a monitoring target is detected. In this paper, we study the problem of sleep scheduling sensors to minimize the average delay in detecting such time-critical events in the context of monitoring physical systems that can be modeled using graphs, such as water distribution networks. We provide a game-theoretic solution that computes schedules with near-optimal average delays. We illustrate that schedules that optimize average coverage may result in large average detection delays, whereas schedules minimizing average detection delays using our proposed scheme also result in near-optimal average coverage.
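Evaluating the average detection delay of a candidate periodic schedule is straightforward, which is what makes schedule search tractable; below is a minimal evaluator under hypothetical coverage sets (not the game-theoretic scheduler itself):

```python
def avg_detection_delay(schedule, coverage, targets):
    """schedule: list of active-sensor sets, one per time slot (periodic).
    coverage[s]: set of targets sensor s can observe.
    Returns the average delay until an event at a uniformly random target,
    occurring at a uniformly random slot, is first seen by an active sensor."""
    T = len(schedule)
    total, count = 0, 0
    for tgt in targets:
        for t0 in range(T):                  # event appears at slot t0
            for d in range(T):               # wait d slots for coverage
                active = schedule[(t0 + d) % T]
                if any(tgt in coverage[s] for s in active):
                    total += d
                    count += 1
                    break
            else:
                return float("inf")          # target never covered
    return total / count

coverage = {"s1": {"a", "b"}, "s2": {"b", "c"}}
sched = [{"s1"}, {"s2"}]                     # alternate sensors to save battery
print(avg_detection_delay(sched, coverage, ["a", "b", "c"]))  # 1/3
```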
Abbas, Waseem, Laszka, Aron, Vorobeychik, Yevgeniy, Koutsoukos, Xenofon.  2017.  Scheduling Resource-Bounded Monitoring Devices for Event Detection and Isolation in Networks. IEEE Transactions on Network Science and Engineering.
In networked systems, monitoring devices such as sensors are typically deployed to monitor various target locations. Targets are the points in the physical space at which events of some interest, such as random faults or attacks, can occur. Most often, these devices have limited energy supplies, and they can operate for a limited duration. As a result, energy-efficient monitoring of various target locations through a set of monitoring devices with limited energy supplies is a crucial problem in networked systems. In this paper, we study optimal scheduling of monitoring devices to maximize network coverage for detecting and isolating events on targets for a given network lifetime. The monitoring devices considered can remain active only for a fraction of the overall network lifetime. We formulate the problem of scheduling monitoring devices as a graph labeling problem, which, unlike other existing solutions, allows us to directly utilize the underlying network structure to explore the trade-off between coverage and network lifetime. In this direction, we first propose a greedy heuristic to solve the graph labeling problem, and then provide a game-theoretic solution to achieve optimal graph labeling. Moreover, the proposed setup can be used to simultaneously solve the scheduling and placement of monitoring devices, which yields improved performance as compared to separately solving the placement and scheduling problems. Finally, we illustrate our results on various networks, including real-world water distribution networks.
2018-09-28
van Oorschot, Paul C..  2017.  Science, Security and Academic Literature: Can We Learn from History? Proceedings of the 2017 Workshop on Moving Target Defense. :1–2.
A recent paper (Oakland 2017) discussed science and security research in the context of the government-funded Science of Security movement, and the history and prospects of security as a scientific pursuit. It drew on literature from within the security research community, and mature history and philosophy of science literature. The paper sparked debate in numerous organizations and the security community. Here we consider some of the main ideas, provide a summary list of relevant literature, and encourage discussion within the Moving Target Defense (MTD) sub-community.
2018-01-10
Bönsch, Andrea, Trisnadi, Robert, Wendt, Jonathan, Vierjahn, Tom, Kuhlen, Torsten W..  2017.  Score-based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-agent Systems. Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. :74:1–74:2.
Controlling user-agent interactions by means of an external operator requires selecting the virtual interaction partners quickly and faultlessly. However, especially in immersive scenes with a large number of potential partners, this task is non-trivial. Thus, we present a score-based recommendation system supporting an operator in the selection task. Agents are recommended as potential partners based on two parameters: the user's distance to the agents and the user's gazing direction. An additional graphical user interface (GUI) provides elements for configuring the system and for applying actions to those agents which the operator has confirmed as interaction partners.
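The two-parameter score can be illustrated directly; the weights and the combination rule below are hypothetical, as the abstract does not give the exact scoring function:

```python
import math

def agent_score(user_pos, user_dir, agent_pos, w_dist=0.5, w_gaze=0.5):
    """Score an agent by proximity and by alignment with the user's gaze.
    user_dir is a unit vector; the weights are illustrative only."""
    dx, dy = agent_pos[0] - user_pos[0], agent_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return w_dist + w_gaze                       # agent at user's position
    cos_gaze = (dx * user_dir[0] + dy * user_dir[1]) / dist  # in [-1, 1]
    return w_dist / (1 + dist) + w_gaze * max(0.0, cos_gaze)

agents = {"A": (1, 0.2), "B": (4, 0), "C": (-2, 1)}
scores = {a: agent_score((0, 0), (1, 0), p) for a, p in agents.items()}
print(max(scores, key=scores.get), scores)  # nearby agent in gaze ranks first
```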