Biblio
Filters: Author is Hale, Matthew
Phish Finders: Crowd-powered RE for anti-phishing training tools. 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW). :130–135.
2022. Many organizations use internal phishing campaigns to gauge awareness and coordinate training efforts based on those findings. Ongoing content design is important for phishing training tools due to the influence recency has on phishing susceptibility. Traditional approaches for content development require significant investment and can be prohibitively costly, especially during the requirements engineering phase of software development and for applications that are constantly evolving. While prior research primarily depends upon already known phishing cues curated by experts, our project, Phish Finders, uses crowdsourcing to explore phishing cues through the unique perspectives and thought processes of everyday users in a realistic yet safe online environment, Zooniverse. This paper contributes qualitative analysis of crowdsourced comments that identifies novel cues, such as formatting and typography, which were identified by the crowd as potential phishing indicators. The paper also shows that crowdsourcing may have the potential to scale as a requirements engineering approach to meet the needs of content labeling for improved training tool development.
ISSN: 2770-6834
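The crowd-powered labeling workflow this abstract alludes to can be pictured with a toy aggregation step. Everything below is illustrative and not from the paper: a hypothetical majority vote over volunteer classifications, as one might aggregate Zooniverse-style crowd judgments into training labels.

```python
# Hypothetical sketch of aggregating crowd classifications into training labels.
# All names and data below are invented for illustration, not from the paper.
from collections import Counter

# Each tuple: (item_id, volunteer's judgment, cue the volunteer cited)
annotations = [
    ("email-01", "phish", "mismatched sender domain"),
    ("email-01", "phish", "odd typography"),
    ("email-01", "legit", None),
    ("email-02", "legit", None),
]

def majority_label(annotations, item_id, quorum=3):
    """Majority vote over crowd judgments for one item; None if below quorum."""
    votes = [label for iid, label, _ in annotations if iid == item_id]
    if len(votes) < quorum:
        return None  # not enough classifications yet
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else None

print(majority_label(annotations, "email-01"))  # -> "phish"
```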
Edge Differential Privacy for Algebraic Connectivity of Graphs. 2021 60th IEEE Conference on Decision and Control (CDC). :2764–2769.
2021. Graphs are the dominant formalism for modeling multi-agent systems. The algebraic connectivity of a graph is particularly important because it provides the convergence rates of consensus algorithms that underlie many multi-agent control and optimization techniques. However, sharing the value of algebraic connectivity can inadvertently reveal sensitive information about the topology of a graph, such as connections in social networks. Therefore, in this work we present a method to release a graph’s algebraic connectivity under a graph-theoretic form of differential privacy, called edge differential privacy. Edge differential privacy obfuscates differences among graphs’ edge sets and thus conceals the absence or presence of sensitive connections therein. We provide privacy with bounded Laplace noise, which improves accuracy relative to conventional unbounded noise. The private algebraic connectivity values are analytically shown to provide accurate estimates of consensus convergence rates, as well as accurate bounds on the diameter of a graph and the mean distance between its nodes. Simulation results confirm the utility of private algebraic connectivity in these contexts.
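As a rough sketch of the release pipeline described in this abstract, the Python below computes a graph's algebraic connectivity (the second-smallest eigenvalue of the Laplacian) and perturbs it with Laplace noise truncated to a feasible range by rejection sampling. The sensitivity constant is a placeholder, and naive rejection does not by itself reproduce the paper's calibrated bounded-Laplace mechanism.

```python
# A minimal sketch, not the paper's exact mechanism: release the algebraic
# connectivity with Laplace noise kept inside the feasible range [0, n] by
# rejection. The sensitivity value is a placeholder; the paper derives its own,
# and its bounded-Laplace mechanism is calibrated to retain edge DP.
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

def bounded_laplace(value, scale, lo, hi, rng):
    """Rejection-sample Laplace noise so the privatized value lies in [lo, hi]."""
    while True:
        out = value + rng.laplace(0.0, scale)
        if lo <= out <= hi:
            return out

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = A.shape[0]
epsilon, sensitivity = 1.0, 2.0   # placeholder edge-level sensitivity
lam2 = algebraic_connectivity(A)
private_lam2 = bounded_laplace(lam2, sensitivity / epsilon, 0.0, n, rng)
print(lam2, private_lam2)
```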
Privacy-Preserving Policy Synthesis in Markov Decision Processes. 2020 59th IEEE Conference on Decision and Control (CDC). :6266–6271.
2020. In decision-making problems, the actions of an agent may reveal sensitive information that drives its decisions. For instance, a corporation's investment decisions may reveal its sensitive knowledge about market dynamics. To prevent this type of information leakage, we introduce a policy synthesis algorithm that protects the privacy of the transition probabilities in a Markov decision process. We use differential privacy as the mathematical definition of privacy. The algorithm first perturbs the transition probabilities using a mechanism that provides differential privacy. Then, based on the privatized transition probabilities, we synthesize a policy using dynamic programming. Our main contribution is to bound the "cost of privacy," i.e., the difference between the expected total rewards with privacy and the expected total rewards without privacy. We also show that computing the cost of privacy has time complexity that is polynomial in the parameters of the problem. Moreover, we establish that the cost of privacy increases with the strength of differential privacy protections, and we quantify this increase. Finally, numerical experiments on two example environments validate the established relationship between the cost of privacy and the strength of data privacy protections.
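A hedged sketch of the two-step pipeline in this abstract: perturb the transition probabilities, then synthesize a policy by dynamic programming. The Dirichlet-style perturbation below is only one way to keep transition rows on the probability simplex; the paper's actual mechanism and its differential-privacy calibration are not reproduced here.

```python
# Sketch of the abstract's pipeline: (1) perturb each transition-probability
# row, (2) run dynamic programming on the privatized model. The concentration
# parameter k is illustrative (larger k -> less noise, weaker privacy).
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))      # true transitions P[s, a, :]
R = rng.uniform(0.0, 1.0, size=(S, A))          # rewards

k = 50.0
P_priv = np.array([[rng.dirichlet(k * P[s, a] + 1e-6) for a in range(A)]
                   for s in range(S)])

def value_iteration(P, R, gamma, iters=500):
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = np.max(R + gamma * P @ V, axis=1)   # Bellman backup over actions
    return V

V_true, V_priv = value_iteration(P, R, gamma), value_iteration(P_priv, R, gamma)
print("cost-of-privacy proxy (sup-norm value gap):", np.abs(V_true - V_priv).max())
```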
Error Bounds and Guidelines for Privacy Calibration in Differentially Private Kalman Filtering. 2020 American Control Conference (ACC). :4423–4428.
2020. Differential privacy has emerged as a formal framework for protecting sensitive information in control systems. One key feature is that it is immune to post-processing, which means that arbitrary post-hoc computations can be performed on privatized data without weakening differential privacy. It is therefore common to filter private data streams. To characterize this setup, in this paper we present error and entropy bounds for Kalman filtering differentially private state trajectories. We consider systems in which an output trajectory is privatized in order to protect the state trajectory that produced it. We provide bounds on the a priori and a posteriori error and differential entropy of a Kalman filter that processes the privatized output trajectories. Using the error bounds we develop, we then provide guidelines to calibrate privacy levels in order to keep filter error within pre-specified bounds. Simulation results are presented to demonstrate these developments.
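A minimal simulation of the setup studied here, with assumed system matrices: the output trajectory is perturbed with Gaussian noise (a stand-in for the paper's calibrated privacy mechanism), and a Kalman filter processes the privatized stream with the privacy noise folded into its measurement covariance.

```python
# Sketch with invented dynamics: privatize outputs with Gaussian noise, then
# Kalman-filter the privatized stream. sigma is a placeholder noise level, not
# the paper's privacy calibration.
import numpy as np

rng = np.random.default_rng(2)
F, H = np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.05]])
sigma = 0.5                                   # privacy noise std (placeholder)
R_priv = R + sigma**2 * np.eye(1)             # filter sees measurement + privacy noise

x = np.zeros(2)
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)      # plant
    y = H @ x + rng.normal(0, np.sqrt(R[0, 0]), 1)           # true output
    y_priv = y + rng.normal(0, sigma, 1)                     # privatized output
    # Kalman predict/update on the privatized stream
    x_hat, P = F @ x_hat, F @ P @ F.T + Q
    S = H @ P @ H.T + R_priv
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y_priv - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("a posteriori error:", np.linalg.norm(x - x_hat))
```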
Bidirectional Information Flow and the Roles of Privacy Masks in Cloud-Based Control. 2019 IEEE Information Theory Workshop (ITW). :1–5.
2019. We consider a cloud-based control architecture for a linear plant with Gaussian process noise, where the state of the plant contains a client's sensitive information. We assume that the cloud tries to estimate the state while executing a designated control algorithm. The mutual information between the client's actual state and the cloud's estimate is adopted as a measure of privacy loss. We discuss the necessity of uplink and downlink privacy masks. After observing that privacy is not necessarily a monotone function of the noise levels of privacy masks, we discuss the joint design procedure for uplink and downlink privacy masks. Finally, the trade-off between privacy and control performance is explored.
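A loose scalar illustration of this architecture, assuming Gaussian uplink and downlink masks: the per-sample Gaussian mutual-information formula at the end is a proxy for the paper's trajectory-level privacy measure, and all gains and variances are invented for the example.

```python
# Scalar toy model: uplink mask on the state sent to the cloud, downlink mask
# on the control returned. The closed-form Gaussian MI 0.5*log(1 + P_x/P_mask)
# stands in for the paper's mutual-information privacy measure.
import numpy as np

rng = np.random.default_rng(3)
a, b, k = 0.9, 1.0, -0.5              # plant x+ = a*x + b*u + w; cloud uses u = k*y_up
sigma_w2 = 0.1                        # process noise variance
sigma_up2, sigma_down2 = 0.2, 0.2     # uplink / downlink mask variances

x, xs = 0.0, []
for _ in range(5000):
    y_up = x + rng.normal(0, np.sqrt(sigma_up2))        # uplink: masked state
    u = k * y_up + rng.normal(0, np.sqrt(sigma_down2))  # downlink-masked input
    xs.append(x)
    x = a * x + b * u + rng.normal(0, np.sqrt(sigma_w2))

var_x = np.var(xs)                          # empirical stationary state variance
mi = 0.5 * np.log(1 + var_x / sigma_up2)    # Gaussian MI between x and y_up, nats
print(f"state variance {var_x:.3f}, uplink MI proxy {mi:.3f} nats")
```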
Privacy in Feedback: The Differentially Private LQG. 2018 Annual American Control Conference (ACC). :3386–3391.
2018. Information communicated within cyber-physical systems (CPSs) is often used in determining the physical states of such systems, and malicious adversaries may intercept these communications in order to infer future states of a CPS or its components. Accordingly, there arises a need to protect the state values of a system. Recently, the notion of differential privacy has been used to protect state trajectories in dynamical systems, and it is this notion of privacy that we use here to protect the state trajectories of CPSs. We incorporate a cloud computer to coordinate the agents comprising the CPSs of interest, and the cloud offers the ability to remotely coordinate many agents, rapidly perform computations, and broadcast the results, making it a natural fit for systems with many interacting agents or components. Striving for broad applicability, we solve infinite-horizon linear-quadratic-regulator (LQR) problems, and each agent protects its own state trajectory by adding noise to its states before they are sent to the cloud. The cloud then uses these state values to generate optimal inputs for the agents. As a result, private data are fed into feedback loops at each iteration, and each noisy term affects every future state of every agent. In this paper, we show that the differentially private LQR problem can be related to the well-studied linear-quadratic-Gaussian (LQG) problem, and we provide bounds on how agents' privacy requirements affect the cloud's ability to generate optimal feedback control values for the agents. These results are illustrated in numerical simulations.
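A sketch of the private-feedback loop this abstract describes, with assumed dynamics and weights: the agent perturbs its state with Gaussian noise before transmission, and the cloud applies an infinite-horizon LQR gain (from the discrete algebraic Riccati equation) to the noisy state. The paper's privacy calibration and LQG-based performance bounds are not reproduced.

```python
# Sketch with invented dynamics and weights: Gaussian privacy noise on the
# transmitted state, infinite-horizon LQR feedback computed by the cloud.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(4)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
sigma = 0.3                                    # privacy noise std (placeholder)

# Infinite-horizon LQR gain from the discrete algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)

x, cost = np.array([1.0, 0.0]), 0.0
for _ in range(200):
    x_sent = x + rng.normal(0, sigma, size=2)  # agent privatizes its state
    u = -K @ x_sent                            # cloud feeds back on noisy state
    cost += x @ Q @ x + u @ R @ u
    x = A @ x + B @ u

print("accumulated LQR cost under private feedback:", cost)
```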