Bibliography
This paper considers a decentralized switched control problem for which exact conditions for controller synthesis are obtained in the form of semidefinite programming (SDP). The formulation involves a discrete-time switched linear plant that has a nested structure and whose system matrices switch between a finite number of values according to a finite-state automaton. The goal of this paper is to synthesize a commensurately nested switched controller that achieves a desired level of ℓ2-induced norm performance. The nested structures of both plant and controller are characterized by block lower-triangular system matrices. For this setup, exact conditions are provided for the existence of a finite path-dependent synthesis. These include conditions for the completion of scaling matrices, obtained through an extended matrix completion lemma. When the individual controller dimensions are chosen at least as large as those of the plant, these conditions reduce to a set of linear matrix inequalities. The completion lemma also provides an algorithm to complete closed-loop scaling matrices, leading to inequalities for controller synthesis that are solvable either algebraically or numerically through SDP.
Published in IEEE Transactions on Control of Network Systems, volume 2, issue 4, December 2015.
Most Depth Image Based Rendering (DIBR) techniques produce synthesized images that contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels, and there is an inherent congruence between the morphological pyramid decomposition scheme and human visual perception. In this paper, a multi-scale measure, the morphological pyramid peak signal-to-noise ratio (MP-PSNR), based on morphological pyramid decomposition, is proposed for the evaluation of DIBR-synthesized images. It is shown that MP-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.
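As an illustrative sketch only (not the authors' implementation), the idea behind MP-PSNR can be approximated by building a morphological pyramid of both the reference and synthesized images and pooling the per-level PSNRs. The 3×3 opening, dyadic downsampling, and mean pooling below are assumptions; the paper's decomposition and pooling may differ.

```python
import numpy as np

def _morph(img, op, size=3):
    """3x3 grayscale erosion (op=np.min) or dilation (op=np.max)."""
    r = size // 2
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(size) for j in range(size)]
    return op(stack, axis=0)

def morph_pyramid(img, levels=4):
    """Morphological pyramid: an opening (erosion then dilation)
    suppresses fine detail while keeping edge geometry, then the
    image is dyadically downsampled."""
    pyr = [np.asarray(img, float)]
    for _ in range(levels - 1):
        opened = _morph(_morph(pyr[-1], np.min), np.max)
        pyr.append(opened[::2, ::2])
    return pyr

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def mp_psnr(ref, syn, levels=4):
    """Pool per-level PSNRs into one score (mean pooling is an
    assumption here)."""
    return float(np.mean([psnr(r, s)
                          for r, s in zip(morph_pyramid(ref, levels),
                                          morph_pyramid(syn, levels))]))
```

Because the opening is edge-preserving, geometric distortions near edges degrade the coarse pyramid levels and are therefore penalized at every scale, unlike a single full-resolution PSNR.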
Many current VM monitoring approaches require guest OS modifications and are also unable to perform application-level monitoring, reducing their value in a cloud setting. This paper introduces hprobes, a framework that allows one to dynamically monitor applications and operating systems inside a VM. The hprobe framework does not require any changes to the guest OS, which avoids the tight coupling of monitoring with its target. Furthermore, the monitors can be customized and enabled/disabled while the VM is running. To demonstrate the usefulness of this framework, we present three sample detectors: an emergency detector for a security vulnerability, an application watchdog, and an infinite-loop detector. We test our detectors on real applications and demonstrate that these detectors achieve an acceptable level of performance overhead with a high degree of flexibility.
Presented at the 11th European Dependable Computing Conference – Dependability in Practice (EDCC 2015), September 2015.
We proposed a multi-granularity approach for presenting risk information about mobile apps to end users. In this approach, the highest level is a summary risk index, which allows quick and easy comparison among multiple apps that provide similar functionality. We have developed several types of risk index, such as text saying “High Risk” or a number of filled circles (Gates, Chen, Li, & Proctor, 2014). Through both online and in-lab studies, we found that when presented with an interface containing the summary risk index, participants made more secure app-selection decisions. Subsequent research showed that the framing of the summary risk information affects users’ app-selection decisions, and that positive framing in terms of safety has an advantage over negative framing in terms of risk (Chen, Gates, Li, & Proctor, 2014).
In addition to the summary risk index, some users may also want more detailed risk information about the apps. We have been developing an intermediate-level risk display that presents only the major risk categories. As a first step, we conducted user studies in which expert users identified the major risk categories (personal privacy, monetary loss, and device stability), and we validated the categories with typical users (Jorgensen, Chen, Gates, Li, Proctor, & Yu, 2015). In a subsequent study, we are developing a graphical display that incorporates these risk categories into the current app interface and testing its effectiveness.
This multi-granularity approach can be applied to risk communication in other contexts. For example, in communicating the potential risk associated with phishing attacks, an effective warning should include both higher-level and lower-level risk information: a higher-level index indicating how likely an email message or website is to be a phishing attempt should be presented to users, informing them about the potential risk in an easy-to-comprehend manner, while a more detailed explanation should also be available for users who want to know more about the warning and the index. We have completed a pilot study in this area and are initiating a full study to investigate the effectiveness of such an interface in preventing users from being phished successfully.
We have begun to investigate the effectiveness of a phishing-warning Chrome extension in a field setting of everyday computer use. A preliminary experiment was conducted in which participants installed and used the extension. They were required to fill out an online browsing-behavior questionnaire by clicking on a survey link sent in a weekly email from us. Two phishing attacks were simulated during the study by directing participants to "fake" (phishing) survey sites we created. Almost all participants who saw the warnings on our fake sites entered incorrect passwords, but follow-up interviews revealed that only one participant did so intentionally, and that the warning failures were mainly due to the survey task being mandatory. Another finding of interest from the interviews was that about 50% of the participants had never heard of phishing or did not understand its meaning.
In recent years, cyber security threats have become increasingly dangerous. Hackers fabricate fake emails to lure specific users into clicking on malicious attachments or URL links; this kind of threat is called a spear-phishing attack. Because spear-phishing attacks use unknown exploits to trigger malicious activities, it is difficult to defend against them effectively. This study therefore addresses these challenges by developing a Cloud-threat Inspection Appliance (CIA) system to defend against spear-phishing threats. Taking advantage of hardware-assisted virtualization technology, we use the CIA to build a transparent hypervisor monitor that conceals the presence of the detection engine in the hypervisor kernel. In addition, the CIA incorporates a document pre-filtering algorithm to enhance system performance: by inspecting PDF format structures, the proposed CIA was able to filter out 77% of PDF attachments, preventing them from being sent to the hypervisor monitor for deeper analysis. Finally, we tested the CIA in real-world scenarios. The hypervisor monitor was shown to be a better anti-evasion sandbox than commercial ones. During 2014, the CIA inspected 780,000 emails at a company with 200 user accounts and found 65 unknown samples that were not detected by commercial anti-virus software.
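The paper's exact pre-filtering rules are not reproduced here; a minimal sketch of the general idea — forwarding to deep analysis only those PDFs whose structure contains markers of active or embedded content — might look like the following. The marker list is illustrative, not CIA's actual rule set.

```python
# Structural markers whose presence suggests active or embedded content
# inside a PDF; this list is an illustrative assumption.
SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
              b"/Launch", b"/EmbeddedFile", b"/RichMedia"]

def needs_deep_analysis(pdf_bytes: bytes) -> bool:
    """Cheap structural pre-filter: only PDFs that are malformed or
    contain markers of active content are sent on to the (expensive)
    hypervisor monitor for dynamic analysis."""
    if not pdf_bytes.startswith(b"%PDF-"):
        return True  # malformed header: treat as suspicious
    return any(marker in pdf_bytes for marker in SUSPICIOUS)
```

A filter of this kind can discard benign static documents in microseconds, which is consistent with the reported ability to keep a large fraction of attachments out of the sandbox.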
A3 is an execution management environment that aims to make network-facing applications and services resilient against zero-day attacks. A3 recently underwent two adversarial evaluations of its defensive capabilities. In one, A3 defended an App Store used in a Capture the Flag (CTF) tournament, and in the other, a tactically relevant network service in a red team exercise. This paper describes the A3 defensive technologies evaluated, the evaluation results, and the broader lessons learned about evaluations for technologies that seek to protect critical systems from zero-day attacks.
Biometric systems have been applied to improve the security of several computational systems. These systems analyse physiological or behavioural features obtained from the users in order to perform authentication. Biometric features should ideally meet a number of requirements, including permanence. In biometrics, permanence means that the analysed biometric feature will not change over time. However, recent studies have shown that this is not the case for several biometric modalities. Adaptive biometric systems deal with this issue by adapting the user model over time. Several algorithms for adaptive biometrics have been investigated and compared in the literature. In machine learning, many studies show that combining individual techniques in ensembles may lead to more accurate and stable decision models. This paper investigates the use of ensemble approaches to combine the outputs of current adaptive algorithms for biometrics. The experiments are carried out on keystroke dynamics, a biometric modality known to be subject to change over time.
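To make the two ideas concrete, the sketch below combines self-updating base verifiers by majority vote. The sliding-window template, distance threshold, and voting rule are illustrative assumptions, not the specific adaptive algorithms or ensemble schemes evaluated in the paper.

```python
import numpy as np
from collections import deque

class AdaptiveVerifier:
    """One base verifier: a sliding window over recent genuine samples.
    Newly accepted samples displace old ones, so the template tracks
    gradual drift in the user's behaviour (e.g. typing rhythm)."""
    def __init__(self, window=10, threshold=1.5):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def enroll(self, sample):
        self.window.append(np.asarray(sample, float))

    def verify(self, sample):
        sample = np.asarray(sample, float)
        centre = np.mean(list(self.window), axis=0)
        accept = np.linalg.norm(sample - centre) < self.threshold
        if accept:                       # self-update on acceptance
            self.window.append(sample)
        return bool(accept)

def ensemble_verify(verifiers, sample):
    """Majority vote across base verifiers, one simple ensemble
    strategy among those one could compare."""
    votes = sum(v.verify(sample) for v in verifiers)
    return votes > len(verifiers) / 2
```

The self-update step is also where adaptive systems are fragile: a falsely accepted impostor sample pollutes the template, which is one motivation for stabilizing the decision with an ensemble.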
Experience influences actions people take in protecting themselves against phishing. One way to measure experience is through mental models. Mental models are internal representations of a concept or system that develop with experience. By rating pairs of concepts on the strength of their relationship, networks can be created through Pathfinder, showing an in-depth analysis of how information is organized. Researchers had novice and expert computer users rate three sets of terms related to phishing. The terms were divided into three categories: prevention of phishing, trends and characteristics of phishing attacks, and the consequences of phishing. Results indicated that expert mental models were more complex with more links between concepts. Specifically, experts had sixteen, thirteen, and fifteen links in the networks describing the prevention, trends, and consequences of phishing, respectively; however, novices only had eleven, nine, and nine links in the networks describing prevention, trends, and consequences of phishing, respectively. These preliminary results provide quantifiable network displays of mental models of novices and experts that cannot be seen through interviews. This information could provide a basis for future research on how mental models could be used to determine phishing vulnerability and the effectiveness of phishing training.
Automated human facial image de-identification is a much-needed technology for privacy-preserving social media and intelligent surveillance applications. Going beyond the usual face-blurring techniques, in this work we propose to achieve facial anonymity by slightly modifying existing facial images into "averaged faces" so that the corresponding identities are difficult to uncover. This approach preserves the aesthetics of the facial images while achieving the goal of privacy protection. In particular, we explore deep learning-based facial identity-preserving (FIP) features. Unlike conventional face descriptors, FIP features can significantly reduce intra-identity variance while maintaining inter-identity distinctions. By suppressing and perturbing FIP features, we achieve k-anonymity facial image de-identification while preserving the desired utilities. Using a face database, we successfully demonstrate that the resulting "averaged faces" preserve the aesthetics of the original images while defying facial identity recognition.
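At the feature level, the k-anonymity step can be sketched as replacing an identity vector with the average over its k nearest gallery identities, so that a recognizer cannot narrow the result below k candidates. This is a minimal sketch under assumed Euclidean geometry; the paper's manipulation of FIP features and image reconstruction are richer.

```python
import numpy as np

def k_anonymize(fip, gallery, k=5):
    """Replace a face's identity feature with the average of its k
    nearest gallery identities. 'fip' stands in for the deep
    identity-preserving feature vector; 'gallery' is an (n, d) array
    of one feature vector per enrolled identity."""
    d = np.linalg.norm(gallery - fip, axis=1)   # distance to each identity
    nearest = np.argsort(d)[:k]                 # k closest identities
    return gallery[nearest].mean(axis=0)        # averaged identity feature
```

Averaging over *nearby* identities (rather than random ones) is what lets the de-identified face remain visually plausible while still being ambiguous among k people.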
Keystroke dynamics analysis has been applied successfully to the verification of passwords or fixed short texts as a means of reducing their inherent security limitations, because their short length and the fact that they are typed often make their characteristic timings fairly stable. Free-text analysis, on the other hand, was neglected until recent years due to the inherent difficulties of dealing with short-term behavioral noise and long-term effects on typing rhythm. In this paper we examine finite-context modeling of keystroke dynamics in free text and report promising results for user verification over an extensive data set, collected from a real-world environment outside the laboratory setting, that we make publicly available.
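A minimal order-1 finite-context model conditions each key's latency on the preceding key (a digraph model). This sketch, with its z-score anomaly measure, is an assumption for illustration; the paper's models and scoring may differ.

```python
from collections import defaultdict
import statistics

class DigraphModel:
    """Order-1 finite-context model: inter-key latencies are
    conditioned on the preceding key (digraph context)."""
    def __init__(self):
        self.times = defaultdict(list)   # (prev_key, key) -> latencies

    def train(self, keystrokes):
        # keystrokes: list of (key, latency_ms) pairs in typing order
        for (k1, _), (k2, t) in zip(keystrokes, keystrokes[1:]):
            self.times[(k1, k2)].append(t)

    def score(self, keystrokes):
        """Mean absolute z-score over known digraphs; lower means the
        sample is closer to the modelled user's typing rhythm."""
        zs = []
        for (k1, _), (k2, t) in zip(keystrokes, keystrokes[1:]):
            obs = self.times.get((k1, k2))
            if obs and len(obs) > 1:
                mu, sd = statistics.mean(obs), statistics.stdev(obs)
                if sd > 0:
                    zs.append(abs(t - mu) / sd)
        return statistics.mean(zs) if zs else float("inf")
```

The free-text difficulty the abstract mentions shows up directly here: unseen digraphs contribute nothing to the score, so long samples are needed before enough contexts have reliable statistics.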
The communication infrastructure is a key element for management and control of the power system in the smart grid. The communication infrastructure, which can include equipment using off-the-shelf vulnerable operating systems, has the potential to increase the attack surface of the power system. The interdependency between the communication and the power system renders the management of the overall security risk a challenging task. In this paper, we address this issue by presenting a mathematical model for identifying and hardening the most critical communication equipment used in the power system. Using non-cooperative game theory, we model interactions between an attacker and a defender. We derive the minimum defense resources required and the optimal strategy of the defender that minimizes the risk on the power system. Finally, we evaluate the correctness and the efficiency of our model via a case study.
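In the simplest pure-strategy reading of such a game, the defender hardens the equipment that minimizes the worst-case risk over the attacker's best responses. The toy payoff matrix below is invented for illustration; the paper's game, resource constraints, and equilibrium analysis are richer.

```python
import numpy as np

# RISK[i][j]: residual risk to the power system when the defender has
# hardened equipment i and the attacker targets equipment j
# (toy numbers; a hardened target yields zero risk here).
RISK = np.array([[0.0, 5.0, 3.0],
                 [4.0, 0.0, 3.0],
                 [4.0, 5.0, 0.0]])

def minimax_defense(risk):
    """Pure-strategy minimax: for each defense, assume the attacker's
    best response, then pick the defense with the smallest worst case."""
    worst = risk.max(axis=1)          # attacker's best response per defense
    i = int(worst.argmin())           # defense minimizing worst-case risk
    return i, float(worst[i])
```

Here hardening equipment 1 is optimal because its highest-impact attack is already neutralized, capping the worst case at 4.0; mixed strategies and defense budgets, as in the paper, require solving the full game rather than this row-wise scan.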
With the increasing popularity of wearable devices, information becomes much easily available. However, personal information sharing still poses great challenges because of privacy issues. We propose an idea of Visual Human Signature (VHS) which can represent each person uniquely even captured in different views/poses by wearable cameras. We evaluate the performance of multiple effective modalities for recognizing an identity, including facial appearance, visual patches, facial attributes and clothing attributes. We propose to emphasize significant dimensions and do weighted voting fusion for incorporating the modalities to improve the VHS recognition. By jointly considering multiple modalities, the VHS recognition rate can reach by 51% in frontal images and 48% in the more challenging environment and our approach can surpass the baseline with average fusion by 25% and 16%. We also introduce Multiview Celebrity Identity Dataset (MCID), a new dataset containing hundreds of identities with different view and clothing for comprehensive evaluation.
Workflows are complex operational processes that include security constraints restricting which users can perform which tasks. An improper user-task assignment may prevent the completion of the workflow, and deciding such an assignment at runtime is known to be complex, especially when considering user unavailability (known as the resiliency problem). Therefore, design tools are required that allow fast evaluation of workflow resiliency. In this paper, we propose a methodology for workflow designers to assess the impact of the security policy on computing the resiliency of a workflow. Our approach relies on encoding a workflow into the probabilistic model-checker PRISM, allowing its resiliency to be evaluated by solving a Markov Decision Process. We observe and illustrate that adding or removing some constraints has a clear impact on the resiliency computation time, and we compute the set of security constraints that can be artificially added to a security policy in order to reduce the computation time while maintaining the resiliency.
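The quantity being computed can be illustrated on a toy instance: the maximum probability of completing a sequence of tasks, choosing the best authorized user per task under separation-of-duty constraints. The instance, availability model, and the simplification that a user is committed before availability is revealed are all assumptions; PRISM's MDP encoding lets the policy react to observed unavailability instead.

```python
# Toy workflow: tasks run in order; AUTH[t] = users authorized for task t;
# SOD = task pairs that must be performed by distinct users;
# P_AVAIL = probability a user is available when a task starts.
AUTH = [{"alice", "bob"}, {"bob", "carol"}, {"alice", "carol"}]
SOD = {(0, 1), (1, 2)}
P_AVAIL = {"alice": 0.9, "bob": 0.8, "carol": 0.7}

def resiliency(assigned=()):
    """Maximum probability that the whole workflow completes, choosing
    the best authorized user at each step (the value a model checker
    would compute on the corresponding Markov decision process)."""
    task = len(assigned)
    if task == len(AUTH):
        return 1.0
    best = 0.0
    for u in AUTH[task]:
        if any(t2 == task and assigned[t1] == u for t1, t2 in SOD):
            continue                 # separation-of-duty violation
        best = max(best, P_AVAIL[u] * resiliency(assigned + (u,)))
    return best
```

Even in this tiny example, adding the constraint (0, 1) prunes the assignment tree, which is the effect on computation time that the paper measures at scale.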
There have been relatively few studies that use queueing models to analyze security-check waiting lines for screening cargo containers. In this paper, we address two important measures of a security-check system, concerning security-screening effectiveness and efficiency. The goal of this paper is to provide a modelling framework for understanding the economic trade-offs embedded in container-inspection decisions. To analyze policy initiatives, we develop a stylized queueing model with novel features pertaining to security checkpoints.
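The effectiveness–efficiency trade-off can be illustrated with a stylized M/G/1 checkpoint in which each container receives a slow deep inspection with probability q. The two-class exponential service model and the Pollaczek–Khinchine waiting-time formula below are illustrative assumptions, not the paper's model or calibration.

```python
def checkpoint_tradeoff(lam, s_fast, s_deep, q, p_fast, p_deep):
    """Stylized M/G/1 checkpoint. Containers arrive at rate lam; with
    probability q a container gets a deep inspection (mean time s_deep,
    detection prob p_deep), otherwise a fast check (s_fast, p_fast).
    Returns (detection_prob, mean_wait) via Pollaczek-Khinchine."""
    es = q * s_deep + (1 - q) * s_fast                 # E[S]
    es2 = 2 * (q * s_deep**2 + (1 - q) * s_fast**2)    # E[S^2], exp. services
    rho = lam * es                                     # server utilization
    if rho >= 1:
        return None                                    # unstable queue
    wait = lam * es2 / (2 * (1 - rho))                 # mean wait in queue
    detect = q * p_deep + (1 - q) * p_fast             # overall effectiveness
    return detect, wait
```

Sweeping q makes the trade-off explicit: raising the deep-inspection rate increases detection probability but inflates congestion nonlinearly as utilization approaches one, which is the economic tension the modelling framework is built to capture.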