Biblio

Filters: Keyword is fairness
2023-03-31
Kahla, Mostafa, Chen, Si, Just, Hoang Anh, Jia, Ruoxi.  2022.  Label-Only Model Inversion Attacks via Boundary Repulsion. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :15025–15033.
Recent studies show that the state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (whitebox) or the model's soft-labels (blackbox). However, no prior work has been done in the harder but more practical scenario, in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and blackbox model inversion attacks, and the results show that despite assuming less knowledge about the target model, BREP-MI outperforms the blackbox attack and achieves comparable results to the whitebox attack. Our code is available online: https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
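The search loop described above, probing only hard labels on a sphere and moving away from directions that exit the target class, can be sketched roughly as follows. This is an illustrative reconstruction under assumed mechanics, not the authors' released code (see their repository above); the toy model, radius, and step size are hypothetical choices.

```python
import numpy as np

def brep_mi_step(z, predict_label, target, radius=1.0, n_queries=32, lr=0.1):
    # Sample random unit directions on a sphere around the current estimate z.
    dirs = np.random.randn(n_queries, z.shape[0])
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Query only hard labels: +1 if the perturbed point keeps the target
    # label, -1 if it crosses the decision boundary.
    signs = np.array([1.0 if predict_label(z + radius * u) == target else -1.0
                      for u in dirs])
    # The signed average points away from the boundary, toward the
    # interior (centroid) of the target class.
    return z + lr * (signs[:, None] * dirs).mean(axis=0)

# Toy usage: a linear two-class "model" over a 2-D latent space (hypothetical).
predict = lambda x: int(x[0] + x[1] > 0)
z = np.array([0.3, 0.3])                 # a point just inside class 1
for _ in range(200):
    z = brep_mi_step(z, predict, target=1)
print(z)  # drifts away from the boundary, deeper into class 1
```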
2023-01-06
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366–8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off. AdaMix reduces the error increase over the non-private upper bound from the baseline's 167–311%, on average across 6 datasets, to 68–92% depending on the privacy level selected by the user. AdaMix tackles a trade-off arising in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
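The public-then-private recipe can be illustrated with a minimal two-phase sketch: ordinary training on public data, then a DP-SGD-style private step that clips per-example gradients and adds Gaussian noise. This is a generic illustration of the mechanism being tuned, not the AdaMix algorithm itself; the logistic model, clipping bound, and noise scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_logistic(w, x, y):
    # Per-example gradient of the logistic loss for a linear classifier.
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def dp_sgd_step(w, batch, clip=1.0, sigma=1.0, lr=0.1):
    # Private phase: clip each example's gradient to norm `clip`,
    # average, and add Gaussian noise calibrated to the clipping bound.
    grads = [grad_logistic(w, x, y) for x, y in batch]
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
    noise = rng.normal(0.0, sigma * clip / len(batch), size=w.shape)
    return w - lr * (np.mean(clipped, axis=0) + noise)

# Public phase first: warm-start w with ordinary, noise-free SGD; the point
# of the public-then-private ordering is a much better privacy/utility trade-off.
w = np.zeros(3)
public = [(rng.normal(size=3), rng.integers(0, 2)) for _ in range(64)]
for x, y in public:
    w -= 0.1 * grad_logistic(w, x, y)
private = [(rng.normal(size=3), rng.integers(0, 2)) for _ in range(64)]
w = dp_sgd_step(w, private)
print(w)
```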
2022-12-09
Fakhartousi, Amin, Meacham, Sofia, Phalp, Keith.  2022.  Autonomic Dominant Resource Fairness (A-DRF) in Cloud Computing. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1626–1631.
In the constantly expanding world of information technology and the Internet, which has become part of everyday life, attention to user requirements such as information security, fast processing, dynamic and instant access, and cost savings has become essential. The solution proposed for these problems today is cloud computing, now considered one of the most essential distributed tools for processing and storing data on the Internet. With its increasing use, scheduling tasks to make the best use of resources and respond appropriately to requests has received much attention, and many efforts have been and are being made in this regard. To this end, various resource-allocation algorithms have been proposed, each trying to achieve equitable distribution while maximizing resource utilization. One such method is the Dominant Resource Fairness (DRF) algorithm. Although it improves on earlier algorithms, it faces challenges, especially the time-consuming computation of allocations. These challenges make DRF harder to use both when a small number of requests demand large resource capacity and when many requests arrive simultaneously. This study reduces the computational cost of DRF-based resource allocation by introducing Autonomic Dominant Resource Fairness (A-DRF), a new approach that automates the calculations with machine learning and artificial intelligence techniques.
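For reference, the DRF rule the paper automates (Ghodsi et al.'s Dominant Resource Fairness) repeatedly grants the next task to the user with the smallest dominant share. A minimal sketch of plain DRF on the classic 9-CPU/18-GB example follows; it illustrates the underlying algorithm only, not the proposed A-DRF machine-learning layer.

```python
def drf_allocate(capacity, demands, max_rounds=1000):
    """capacity: resource -> total supply; demands: user -> per-task demand."""
    used = {r: 0.0 for r in capacity}
    alloc = {u: 0 for u in demands}          # tasks granted to each user
    dom_share = {u: 0.0 for u in demands}    # each user's dominant share
    for _ in range(max_rounds):
        # DRF rule: serve the user whose dominant share is currently lowest.
        user = min(demands, key=lambda u: dom_share[u])
        d = demands[user]
        if any(used[r] + d[r] > capacity[r] for r in d):
            break                            # that user's next task won't fit
        for r in d:
            used[r] += d[r]
        alloc[user] += 1
        dom_share[user] = max(alloc[user] * d[r] / capacity[r] for r in d)
    return alloc

# Classic example: 9 CPUs and 18 GB of RAM; A is RAM-heavy, B is CPU-heavy.
print(drf_allocate({"cpu": 9, "ram": 18},
                   {"A": {"cpu": 1, "ram": 4},
                    "B": {"cpu": 3, "ram": 1}}))   # {'A': 3, 'B': 2}
```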
2021-11-08
Varshney, Kush R..  2020.  On Mismatched Detection and Safe, Trustworthy Machine Learning. 2020 54th Annual Conference on Information Sciences and Systems (CISS). :1–4.
Instilling trust in high-stakes applications of machine learning is becoming essential. Trust may be decomposed into four dimensions: basic accuracy, reliability, human interaction, and aligned purpose. The first two of these also constitute the properties of safe machine learning systems. The second dimension, reliability, is mainly concerned with being robust to epistemic uncertainty and model mismatch. It arises in the machine learning paradigms of distribution shift, data poisoning attacks, and algorithmic fairness. All of these problems can be abstractly modeled using the theory of mismatched hypothesis testing from statistical signal processing. By doing so, we can take advantage of performance characterizations in that literature to better understand the various machine learning issues.
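A toy numerical example of this abstraction (mine, not the paper's): a likelihood-ratio test designed for N(0,1) versus N(1,1) loses detection power when the alternative actually seen at deployment is shifted, which is exactly the kind of epistemic mismatch that distribution shift, poisoning, and fairness problems induce.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Likelihood-ratio test for N(0,1) vs N(1,1): decide "signal" when x > 0.5.
threshold = 0.5

h1_design = rng.normal(1.0, 1.0, n)    # the alternative the test was built for
h1_actual = rng.normal(0.5, 1.0, n)    # the shifted alternative at deployment

print("detection prob, matched:   ", np.mean(h1_design > threshold))  # ~0.69
print("detection prob, mismatched:", np.mean(h1_actual > threshold))  # ~0.50
```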
2021-06-24
Habib ur Rehman, Muhammad, Mukhtar Dirir, Ahmed, Salah, Khaled, Svetinovic, Davor.  2020.  FairFed: Cross-Device Fair Federated Learning. 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). :1–7.
Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are always prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) due to the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset and simulate a cross-device model training setting to detect adversaries in the training network. We use TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with a baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
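The reject-the-adversary step can be sketched generically with a median-distance outlier rule over client updates. This is an assumed stand-in for the paper's outlier detection algorithm, not the authors' TensorFlow Federated implementation; the threshold rule is a hypothetical choice.

```python
import numpy as np

def fair_aggregate(updates, z=2.0):
    # Flag updates far from the coordinate-wise median as adversarial
    # and average only the remaining ones (a simple outlier rule).
    U = np.stack(updates)                    # one row per client update
    dists = np.linalg.norm(U - np.median(U, axis=0), axis=1)
    keep = dists <= dists.mean() + z * dists.std()
    return U[keep].mean(axis=0), np.flatnonzero(~keep)

# Nine honest clients plus one poisoned update with a huge norm.
rng = np.random.default_rng(1)
updates = [rng.normal(0, 0.1, 10) for _ in range(9)] + [np.full(10, 50.0)]
aggregate, rejected = fair_aggregate(updates)
print("rejected client indices:", rejected)   # expect [9]
```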
2020-12-21
Portaluri, G., Giordano, S..  2020.  Gambling on fairness: a fair scheduler for IIoT communications based on the shell game. 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.
The Industrial Internet of Things (IIoT) paradigm is nowadays the cornerstone of industrial automation, since it has introduced new features and services for different environments and has enabled the connection of industrial machine sensors and actuators both to local processing and to the Internet. One of the most advanced network protocol stacks developed for IoT/IIoT networks is 6LoWPAN, which supports IPv6 on top of Low-power Wireless Personal Area Networks (LoWPANs). 6LoWPAN is usually coupled with the IEEE 802.15.4 low-bitrate, low-energy MAC protocol, which relies on the time-slotted channel hopping (TSCH) technique. In TSCH networks, a coordinator node synchronizes all end-devices and specifies whether (and when) they can transmit, in order to improve their energy efficiency. In this scenario, the scheduling strategy adopted by the coordinator plays a crucial role and dramatically impacts network performance. In this paper, we present a novel scheduling strategy for time-slot allocation in IIoT communications which aims at improving overall network fairness. The proposed strategy mimics the well-known shell game, turning the totally unfair mechanics of that game into a fair scheduling strategy. We compare our proposal with three allocation strategies and evaluate the fairness of each scheduler, showing that our allocator outperforms the others.
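A minimal sketch of the shell-game intuition (assumed mechanics, not the paper's allocator): each round the coordinator shuffles the slots so that nodes draw them blindly, and Jain's index confirms the long-run allocation is close to even.

```python
import random

def shell_game_schedule(nodes, slots_per_round, rounds, seed=0):
    rng = random.Random(seed)
    grants = {n: 0 for n in nodes}
    for _ in range(rounds):
        order = nodes[:]
        rng.shuffle(order)                  # coordinator shuffles the "shells"
        for n in order[:slots_per_round]:   # blind draws win the time slots
            grants[n] += 1
    return grants

def jain_index(xs):
    # Jain's fairness index: 1.0 means a perfectly even allocation.
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

grants = shell_game_schedule([f"node{i}" for i in range(8)], 3, 1000)
print(grants)
print(round(jain_index(list(grants.values())), 4))   # close to 1.0
```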
2020-04-06
Berenjian, Samaneh, Hajizadeh, Saeed, Atani, Reza Ebrahimi.  2019.  An Incentive Security Model to Provide Fairness for Peer-to-Peer Networks. 2019 IEEE Conference on Application, Information and Network Security (AINS). :71–76.
Peer-to-peer (P2P) networks are designed to rely on the resources of their own users, so resource management plays an important role in P2P protocols. Early P2P networks did not use proper mechanisms to manage fairness; however, after the difficulties and the rise of freeloaders in networks like Gnutella, the importance of providing fairness for users became apparent. In this paper, we propose an incentive-based security model that leads to a network infrastructure which lightens the work of Seeders and makes Leechers contribute more. This method is able to prevent betrayals in Leecher-to-Leecher transactions and helps Seeders to be treated more fairly, which other incentive methods such as BitTorrent are incapable of doing. Additionally, by combining our method with cryptography, it is also possible to achieve secure channels, immune to spying, alongside a fair network. This is the first protocol designed for P2P networks that separates Leechers and Seeders without the need for a central server. The simulation results clearly show how our proposed approach can overcome the free-riding issue. In addition, our findings reveal that our approach provides an appropriate level of fairness for users and can decrease download time.
2019-09-26
Dziembowski, Stefan, Eckey, Lisa, Faust, Sebastian.  2018.  FairSwap: How To Fairly Exchange Digital Goods. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :967–984.

We introduce FairSwap – an efficient protocol for fair exchange of digital goods using smart contracts. A fair exchange protocol allows a sender S to sell a digital commodity x for a fixed price p to a receiver R. The protocol is said to be secure if R only pays if he receives the correct x. Our solution guarantees fairness by relying on smart contracts executed over decentralized cryptocurrencies, where the contract takes the role of an external judge that completes the exchange in case of disagreement. While in the past there have been several proposals for building fair exchange protocols over cryptocurrencies, our solution has two distinctive features that make it particularly attractive when users deal with large commodities: (1) it minimizes the cost of running the smart contract on the blockchain, and (2) it avoids expensive cryptographic tools such as zero-knowledge proofs. In addition to our new protocols, we provide formal security definitions for smart-contract-based fair exchange and prove the security of our construction. Finally, we illustrate several applications of our basic protocol and evaluate the practicality of our approach via a prototype implementation for fairly selling large files over the cryptocurrency Ethereum.
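The contract-as-judge flow can be caricatured as a small state machine: the seller commits to the key for the encrypted good, the buyer deposits the price, the key is revealed through the contract, and a proven complaint triggers a refund. This is a toy Python model of the flow only, not the authors' Ethereum contract; in the real protocol the complaint is a compact proof of misbehavior verified on-chain.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class FairSwapJudge:
    """Toy model of the contract-as-judge flow (illustrative only)."""
    def __init__(self, key_commitment, price):
        self.key_commitment, self.price = key_commitment, price
        self.deposit, self.key = 0, None

    def accept(self, payment):
        # Buyer R locks the fixed price p in the contract.
        assert payment == self.price
        self.deposit = payment

    def reveal(self, key):
        # Seller S reveals the decryption key; the contract checks it
        # against the commitment, so R is guaranteed the committed key.
        assert h(key) == self.key_commitment
        self.key = key

    def finalize(self, complaint_proved=False):
        # If R proves the decrypted good was wrong, the judge refunds;
        # otherwise the deposit is paid out to S.
        return "refund buyer" if complaint_proved else "pay seller"

# Usage: commit to a key, lock the price, reveal, settle.
key = b"file-encryption-key"
judge = FairSwapJudge(h(key), price=10)
judge.accept(10)
judge.reveal(key)
print(judge.finalize())   # "pay seller" -- exchange completed fairly
```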

2018-05-02
Pass, Rafael, Shi, Elaine.  2017.  FruitChains: A Fair Blockchain. Proceedings of the ACM Symposium on Principles of Distributed Computing. :315–324.
Nakamoto's famous blockchain protocol enables achieving consensus in a so-called permissionless setting—anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents "sybil attacks" (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. "moderately hard functions") introduced by Dwork and Naor (Crypto'92). Recent work by Garay et al. (EuroCrypt'15) and Pass et al. (manuscript, 2016) demonstrates that this protocol provably achieves consistency and liveness assuming a) honest players control a majority of the computational power in the network, b) the puzzle-hardness is appropriately set as a function of the maximum network delay and the total computational power of the network, and c) the computational puzzle is modeled as a random oracle. Assuming honest participation, however, is a strong assumption, especially in a setting where honest players are expected to perform a lot of work (to solve the computational puzzles). In Nakamoto's Bitcoin application of the blockchain protocol, players are incentivized to solve these puzzles by receiving rewards for every "block" (of transactions) they contribute to the blockchain. An elegant work by Eyal and Sirer (FinancialCrypt'14), strengthening and formalizing an earlier attack discussed on the Bitcoin forum, demonstrates that a coalition controlling even a minority fraction of the computational power in the network can gain (close to) 2 times its "fair share" of the rewards (and transaction fees) by deviating from the protocol instructions. In contrast, in a fair protocol, one would expect players controlling a φ fraction of the computational resources to reap a φ fraction of the rewards. We present a new blockchain protocol—the FruitChain protocol—which satisfies the same consistency and liveness properties as Nakamoto's protocol (assuming an honest majority of the computing power), and additionally is δ-approximately fair: with overwhelming probability, any honest set of players controlling a φ fraction of computational power is guaranteed to get at least a fraction (1-δ)φ of the blocks (and thus rewards) in any Ω(κ/δ) length segment of the chain (where κ is the security parameter). Consequently, if this blockchain protocol is used as the ledger underlying a cryptocurrency system, where rewards and transaction fees are evenly distributed among the miners of blocks in a length κ segment of the chain, no coalition controlling less than a majority of the computing power can gain more than a factor (1+3δ) by deviating from the protocol (i.e., honest participation is an n/2-coalition-safe 3δ-Nash equilibrium). Finally, the FruitChain protocol enables decreasing the variance of mining rewards and as such significantly lessens (or even obliterates) the need for mining pools.
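The (1-δ)φ fairness bound can be sanity-checked with a toy simulation (mine, not the paper's): under honest mining, block winners are drawn i.i.d. in proportion to computational power, so over a long chain segment each miner's share of blocks stays close to its power fraction φ.

```python
import numpy as np

rng = np.random.default_rng(0)
power = np.array([0.30, 0.25, 0.25, 0.20])   # miners' power fractions (phi)
winners = rng.choice(len(power), size=10_000, p=power)   # a chain segment
shares = np.bincount(winners, minlength=len(power)) / len(winners)

delta = 0.05
print(np.round(shares, 3))                   # close to `power`
print((shares / power).min() >= 1 - delta)   # the (1-delta)*phi bound holds
```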
2018-03-05
Javadi, Elahe, Lai, Jianwei.  2017.  Attribution Apprehension, Automated Attribution, and Creative Integration. Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. :207–210.

Some online communities are better than others at standardizing and automating the attribution process. This study examines how automated attribution can alleviate attribution apprehension and thus facilitate creative integration in open communities. Attribution apprehension, i.e., a user's anxiety over proper attribution of reused artifacts, adversely impacts the tendency to engage in the integration process. Because open communities thrive on the basis of fairness, automated attribution features are essential in fostering creative integration. This study draws upon task-technology fit to craft a theoretical framework for explaining this phenomenon, reviews current tools for automated attribution in different communities, and describes the findings of a pilot survey on how those tools can encourage creative integration.

2017-10-03
Kumaresan, Ranjit, Bentov, Iddo.  2016.  Amortizing Secure Computation with Penalties. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :418–429.

Motivated by the impossibility of achieving fairness in secure computation [Cleve, STOC 1986], recent works study a model of fairness in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty to every other party that did not receive the output. These works show how to design protocols for secure computation with penalties that guarantee that either fairness holds or each honest party obtains a monetary penalty from the adversary. Protocols for this task are typically designed in a hybrid model where parties have access to a "claim-or-refund" transaction functionality denoted FCR*. In this work, we obtain improvements in the efficiency of these constructions by amortizing the cost over multiple executions of secure computation with penalties. More precisely, for computational security parameter λ, we design a protocol that implements ℓ = poly(λ) instances of secure computation with penalties where the total number of calls to FCR* is independent of ℓ.
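The claim-or-refund functionality FCR* that these protocols assume can be modeled as a conditional deposit: coins are locked against a predicate and a deadline, claimable with a valid witness or refunded afterward. The following is a toy, insecure model of the interface only; the predicate, amounts, and deadline are hypothetical.

```python
import hashlib

class ClaimOrRefund:
    """Toy, insecure model of the claim-or-refund interface FCR*."""
    def __init__(self, coins, predicate, deadline):
        self.coins, self.phi, self.deadline = coins, predicate, deadline
        self.claimed = False

    def claim(self, witness, now):
        # Receiver claims the deposit by revealing a witness in time.
        if now <= self.deadline and self.phi(witness):
            self.claimed = True
            return self.coins
        return 0

    def refund(self, now):
        # Sender recovers the deposit only after an unclaimed deadline.
        return self.coins if now > self.deadline and not self.claimed else 0

# Usage: a deposit that pays out for a SHA-256 preimage, claimed in time.
target = hashlib.sha256(b"secret").digest()
txn = ClaimOrRefund(5, lambda w: hashlib.sha256(w).digest() == target, deadline=10)
print(txn.claim(b"secret", now=7))   # 5 -- claimed before the deadline
print(txn.refund(now=11))            # 0 -- nothing left to refund
```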

Kumaresan, Ranjit, Vaikuntanathan, Vinod, Vasudevan, Prashant Nalini.  2016.  Improvements to Secure Computation with Penalties. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :406–417.

Motivated by the impossibility of achieving fairness in secure computation [Cleve, STOC 1986], recent works study a model of fairness in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty to every other party that did not receive the output. These works show how to design protocols for secure computation with penalties that tolerate an arbitrary number of corruptions. In this work, we improve the efficiency of protocols for secure computation with penalties in a hybrid model where parties have access to the "claim-or-refund" transaction functionality. Our first improvement is for the ladder protocol of Bentov and Kumaresan (Crypto 2014) where we improve the dependence of the script complexity of the protocol (which corresponds to miner verification load and also space on the blockchain) on the number of parties from quadratic to linear (and in particular, is completely independent of the underlying function). Our second improvement is for the see-saw protocol of Kumaresan et al. (CCS 2015) where we reduce the total number of claim-or-refund transactions and also the script complexity from quadratic to linear in the number of parties.

2017-02-23
Long, T., Yao, G..  2015.  Verification for Security-Relevant Properties and Hyperproperties. 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom). :490–497.

Privacy analysis is essential in society; data privacy preservation for access control and guaranteed service in wireless sensor networks are important parts of it. In program verification, we consider not only these kinds of safety and liveness properties but also security policies such as noninterference and observational determinism, which have been proposed as hyperproperties. Fairness is widely applied in verification for concurrent systems, wireless sensor networks, and embedded systems. This paper studies verification and analysis for proving security-relevant properties and hyperproperties by proposing deductive proof rules under fairness requirements (constraints).
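To make the hyperproperty notion concrete (my example, not the paper's): noninterference is a 2-safety hyperproperty, a property of pairs of executions rather than of single runs, so checking it means comparing runs that agree on public inputs but differ on secrets.

```python
from itertools import product

def program(public, secret):
    return public * 2                 # secure: output ignores the secret

def leaky(public, secret):
    return public * 2 + secret % 2    # insecure: output depends on the secret

def noninterferent(f, publics, secrets):
    # Check all *pairs* of runs that agree on the public input but may
    # differ on the secret: equal outputs means no interference.
    return all(f(p, s1) == f(p, s2)
               for p, s1, s2 in product(publics, secrets, secrets))

print(noninterferent(program, range(4), range(4)))  # True
print(noninterferent(leaky,   range(4), range(4)))  # False
```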