Bibliography
Fog computing is a new computing paradigm that uses numerous mutually cooperating terminal devices or network edge devices to provide computing, storage, and communication services. It extends cloud computing services to the edge of the network, compensating for cloud computing's deficiencies in location awareness, mobility support, and latency. However, fog nodes are often unwilling to execute tasks actively, and the fog nodes recruited by cloud service providers cannot provide stable, continuous resources, which limits the development of fog computing. When a cloud service provider uses the resources of fog nodes to serve users, both the provider and the fog nodes are selfish and seek to maximize their own payoffs, so a fog node can easily shirk, that is, execute its assigned task passively. Constrained by the low quality of the resources that fog nodes provide, the payoff of the cloud service provider suffers severely. To address this problem, an appropriate incentive mechanism must be established in the fog computing environment so that both the cloud service provider and the fog nodes can maximize their respective utilities while the desired incentive effect is achieved. This paper therefore proposes an incentive model based on a repeated game, designs a trigger strategy with a credible threat, and derives the condition for incentive compatibility. Under this condition, the deterrence of the trigger strategy compels a fog node to voluntarily choose to execute tasks actively, since being caught executing a task passively would forfeit all subsequent rewards. Evolutionary game theory is then used to analyze the stability of the trigger strategy and prove the dynamic validity of the incentive-compatibility condition.
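A minimal sketch of the repeated-game logic summarized above, under assumed stage payoffs: an active fog node earns r_active per round; shirking yields a one-shot gain r_shirk > r_active, after which the provider's trigger strategy cuts the node's per-round reward to r_punish < r_active forever. Under such a grim-trigger threat, staying active is a best response whenever the node's discount factor delta exceeds a critical threshold. The payoff names and numbers are illustrative assumptions, not values from the paper.

    # Sketch: grim-trigger incentive condition in an infinitely repeated game.
    # Payoff names and values are hypothetical, for illustration only.

    def critical_discount_factor(r_active, r_shirk, r_punish):
        """Smallest discount factor delta at which staying active is
        self-enforcing: r_active/(1-delta) >= r_shirk + delta*r_punish/(1-delta)."""
        return (r_shirk - r_active) / (r_shirk - r_punish)

    def discounted_payoff(per_round, delta):
        """Present value of receiving per_round in every round, discounted by delta."""
        return per_round / (1.0 - delta)

    if __name__ == "__main__":
        r_active, r_shirk, r_punish = 5.0, 8.0, 1.0   # illustrative payoffs
        delta_star = critical_discount_factor(r_active, r_shirk, r_punish)
        print(f"critical discount factor: {delta_star:.3f}")
        for delta in (0.3, delta_star, 0.9):
            stay = discounted_payoff(r_active, delta)
            # one-shot deviation gain now, punishment stream from next round on
            deviate = r_shirk + delta * discounted_payoff(r_punish, delta)
            print(f"delta={delta:.3f}  stay={stay:.2f}  deviate={deviate:.2f}  "
                  f"active is best response: {stay >= deviate}")

With these example numbers the threshold is 3/7: a sufficiently patient node (delta above the threshold) voluntarily stays active because the discounted loss from punishment outweighs the one-shot gain from shirking.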
The recent proliferation of human-carried mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource the collection of sensory data to a public crowd equipped with various mobile devices. A fundamental issue in such systems is how to effectively incentivize worker participation. However, the incentive mechanism is not an isolated module: it interacts with other components that may affect its performance, such as the data aggregation component that aggregates workers' data and the data perturbation component that protects workers' privacy. Therefore, departing from past literature, we capture these interactions and propose INCEPTION, a novel MCS system framework that integrates an incentive mechanism, a data aggregation mechanism, and a data perturbation mechanism. Specifically, the incentive mechanism selects workers who are more likely to provide reliable data and compensates their costs for both sensing and privacy leakage. The data aggregation mechanism also incorporates workers' reliability to generate highly accurate aggregated results, and the data perturbation mechanism ensures satisfactory protection of workers' privacy and desirable accuracy of the final perturbed results. We validate the desirable properties of INCEPTION through theoretical analysis as well as extensive simulations.
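As a rough illustration of how aggregation and perturbation components can interact, the sketch below computes a reliability-weighted average of workers' readings and then adds Laplace noise calibrated to a privacy budget epsilon. This is a generic weighted-aggregation-plus-Laplace-mechanism construction under an assumed sensitivity bound, not INCEPTION's actual design; all parameter names and values are hypothetical.

    import numpy as np

    # Sketch: reliability-weighted aggregation followed by Laplace perturbation.
    # A generic construction for illustration; not INCEPTION's actual mechanisms.

    def aggregate(readings, reliability):
        """Weight each worker's reading by its (normalized) reliability score."""
        w = np.asarray(reliability, dtype=float)
        w = w / w.sum()
        return float(np.dot(w, readings))

    def perturb(value, sensitivity, epsilon, rng):
        """Standard Laplace mechanism: add noise with scale sensitivity/epsilon."""
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        readings = [21.0, 20.5, 23.8, 21.2]    # hypothetical sensor values
        reliability = [0.9, 0.8, 0.2, 0.7]     # hypothetical reliability scores
        agg = aggregate(readings, reliability)
        out = perturb(agg, sensitivity=0.5, epsilon=1.0, rng=rng)
        print(f"aggregate={agg:.3f}  perturbed={out:.3f}")

The tension the paper points at is visible even in this toy version: a smaller epsilon means stronger privacy protection but a noisier perturbed result, so the perturbation budget directly affects the accuracy the aggregation component can deliver.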
We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information the data collector wishes to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy: each individual retains full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report data with privacy level ε; the higher ε is, the less private the reported data. We derive lower and upper bounds on the value of privacy that are asymptotically tight as the number of data subjects grows large. Specifically, the lower bound shows that it is impossible to buy ε units of privacy with a smaller payment, and the upper bound is attained by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment the data collector needs to achieve a given learning-accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.
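To make the notion of "ε units of privacy" concrete, the sketch below uses ε-randomized response, a standard local-privacy mechanism consistent with a setting where individuals perturb their own data before reporting to an untrusted collector: each individual reports her true bit with probability e^ε/(1+e^ε), and the collector debiases the empirical mean. The mechanism and all names are illustrative assumptions; the paper's specific payment construction is not reproduced here.

    import math
    import random

    # Sketch: epsilon-randomized response for self-perturbed binary reports.
    # Illustrative only; not the paper's payment mechanism.

    def randomized_response(bit, epsilon, rng):
        """Report the true bit with probability e^eps/(1+e^eps), else flip it."""
        p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
        return bit if rng.random() < p_truth else 1 - bit

    def debiased_mean(reports, epsilon):
        """Unbiased estimate of the true mean: since P(report=1) =
        (2p-1)*mean + (1-p) with p = e^eps/(1+e^eps), invert the map."""
        p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
        observed = sum(reports) / len(reports)
        return (observed - (1.0 - p)) / (2.0 * p - 1.0)

    if __name__ == "__main__":
        rng = random.Random(0)
        true_bits = [1 if rng.random() < 0.3 else 0 for _ in range(100_000)]
        for eps in (0.5, 1.0, 3.0):
            reports = [randomized_response(b, eps, rng) for b in true_bits]
            print(f"eps={eps}: estimated mean = {debiased_mean(reports, eps):.4f}")

This toy example mirrors the paper's trade-off: a higher ε means less private reports but a more accurate estimate of the underlying state, which is why the collector's total payment must grow with both the privacy level purchased and the learning-accuracy target.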