Biblio
Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. Yet meeting this challenge could unlock access to vast data stores for training machine learning models and supporting data-driven decisions. We address this problem through the lens of dx-privacy, a generalization of differential privacy to non-Hamming distance metrics. In this work, we explore word representations in hyperbolic space as a means of preserving privacy in text. We provide a proof that our mechanism satisfies dx-privacy, then define a probability distribution in hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high-dimensional hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the tradeoff between privacy and utility. Our privacy experiments illustrate protection against an authorship attribution algorithm, while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe >20x greater guarantees on expected privacy against comparable worst-case statistics.
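The hyperbolic mechanism itself is not reproduced in this abstract, but the general dx-privacy recipe it builds on (perturb a word's embedding with metric-calibrated noise, then snap to the nearest vocabulary word) can be sketched for the Euclidean baseline case. The vocabulary, embedding values, and epsilon below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Toy vocabulary with 2-D embeddings (hypothetical values for illustration).
EMBEDDINGS = {
    "cardiologist": np.array([1.0, 1.0]),
    "doctor":       np.array([1.1, 0.9]),
    "nurse":        np.array([1.3, 0.7]),
    "lawyer":       np.array([4.0, 4.0]),
}

def sample_noise(dim, epsilon, rng):
    # Noise with density proportional to exp(-epsilon * ||z||), the standard
    # mechanism for Euclidean dx-privacy: a uniform direction scaled by a
    # Gamma(dim, 1/epsilon)-distributed radius.
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return radius * direction

def perturb_word(word, epsilon, rng):
    # Perturb the word's vector, then snap to the nearest vocabulary word.
    noisy = EMBEDDINGS[word] + sample_noise(2, epsilon, rng)
    return min(EMBEDDINGS, key=lambda w: np.linalg.norm(EMBEDDINGS[w] - noisy))

rng = np.random.default_rng(0)
print([perturb_word("cardiologist", epsilon=5.0, rng=rng) for _ in range(5)])
```

Smaller epsilon means larger expected noise radius, so the sampled replacement drifts to semantically nearby words ("doctor", "nurse") more often — the "semantic generalization" the abstract refers to.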
A distinguisher is employed by an adversary to probe the privacy property of a cryptographic primitive. If a cryptographic primitive is said to be private, there is no distinguisher algorithm that an adversary can use to distinguish the encodings generated by this primitive with non-negligible advantage. Recently, two privacy-preserving matrix transformations first proposed by Salinas et al. have been widely used to achieve matrix-related verifiable (outsourced) computation in data protection. Salinas et al. proved that these transformations are private (in terms of indistinguishability). In this paper, we first propose the concept of a linear distinguisher and two constructions of linear distinguisher algorithms. Then, we take those two matrix transformations (Salinas et al.'s original work and Yu et al.'s modification) as example targets and analyze their privacy when our linear distinguisher algorithms are employed by adversaries. The results show that those transformations are not private, even against passive eavesdropping.
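The abstract does not spell out the transformations or the distinguisher constructions, but the underlying idea — an encoding that preserves a linear invariant of its input can be broken in the indistinguishability game — can be illustrated with a toy stand-in. The transformation below (masking a matrix by one fresh random positive scalar) is NOT Salinas et al.'s actual scheme; it is a hypothetical example chosen so that the ratio of two entries survives encoding.

```python
import random

def transform(M, rng):
    # Hypothetical privacy transformation: mask the matrix by a single
    # fresh random positive scalar. (Illustration only, not the paper's
    # actual scheme.)
    c = rng.uniform(1.0, 100.0)
    return [[c * x for x in row] for row in M]

def linear_distinguisher(encoding):
    # Entry ratios are invariant under scaling, so the adversary can tell
    # which of the two known plaintext matrices was encoded.
    return 0 if abs(encoding[0][0] / encoding[0][1] - 2.0) < 1e-9 else 1

M0 = [[2.0, 1.0], [3.0, 4.0]]   # ratio of first-row entries: 2
M1 = [[5.0, 1.0], [3.0, 4.0]]   # ratio of first-row entries: 5

rng = random.Random(42)
wins = 0
for _ in range(1000):
    b = rng.randrange(2)                      # challenger picks a matrix
    guess = linear_distinguisher(transform(M0 if b == 0 else M1, rng))
    wins += (guess == b)
print(wins / 1000)   # the distinguisher wins every trial
```

A private transformation would hold the adversary's success rate near 1/2 (negligible advantage); here the preserved linear invariant pushes it to 1, which is the flavor of failure the paper demonstrates against its real targets.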
RFID technology has attracted considerable attention in recent years and brings convenience to supply chain management. In this paper, we concentrate on designing path-checking protocols to check valid paths in supply chains. By entering a valid path, the check reader can determine whether the tags have gone through that path or not. Based on a modified Schnorr signature scheme, we provide a path-checking method that achieves multi-signatures and final verification. Finally, we conduct a security and privacy analysis of the scheme.
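The paper's modified multi-signature construction is not given in this abstract, but the single-signer Schnorr primitive such protocols build on can be sketched. The group parameters below are deliberately tiny toy values (real deployments use ~256-bit groups), and the message is illustrative.

```python
import hashlib
import secrets

# Toy Schnorr group: g = 2 has prime order q = 11 in Z_23^* (demo only).
P, Q, G = 23, 11, 2

def H(r, msg):
    # Hash the commitment together with the message, reduced mod the group order.
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1     # secret key in [1, q-1]
    return x, pow(G, x, P)               # (sk, pk = g^x)

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1     # fresh nonce per signature
    r = pow(G, k, P)                     # commitment
    e = H(r, msg)                        # challenge
    s = (k + e * x) % Q                  # response
    return r, s

def verify(y, msg, sig):
    r, s = sig
    e = H(r, msg)
    # Valid iff g^s = g^(k + e*x) = r * y^e (mod P).
    return pow(G, s, P) == (r * pow(y, e, P)) % P

sk, pk = keygen()
sig = sign(sk, b"tag passed checkpoint 1")
print(verify(pk, b"tag passed checkpoint 1", sig))        # True
print(verify(pk, b"tag passed checkpoint 1",
             (sig[0], (sig[1] + 1) % Q)))                 # False: response tampered
```

In a path-checking setting, each reader along a valid path would contribute such a signature over the tag's state, and the final verifier checks the aggregate; that aggregation step is exactly what the paper's modification addresses.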
Privacy analysis is essential in society: data privacy preservation for access control and guaranteed service in wireless sensor networks are important examples. In program verification, we consider not only such safety and liveness properties but also security policies like noninterference and observational determinism, which have been proposed as hyperproperties. Fairness is widely applied in the verification of concurrent systems, wireless sensor networks, and embedded systems. This paper studies verification and analysis techniques for proving security-relevant properties and hyperproperties by proposing deductive proof rules under fairness requirements (constraints).