Biblio

Filters: Keyword is social network sites
2019-07-01
Díaz Ferreyra, N. E., Meis, R., Heisel, M..  2018.  At Your Own Risk: Shaping Privacy Heuristics for Online Self-Disclosure. 2018 16th Annual Conference on Privacy, Security and Trust (PST). :1–10.

Revealing private and sensitive information on Social Network Sites (SNSs) like Facebook is a common practice which sometimes results in unwanted incidents for the users. One approach to helping users avoid regrettable scenarios is through awareness mechanisms which inform them a priori about the potential privacy risks of a self-disclosure act. Privacy heuristics are instruments which describe recurrent regrettable scenarios and can support the generation of privacy awareness. One important component of a heuristic is the group of people who should not access specific private information under a certain privacy risk. However, specifying an exhaustive list of unwanted recipients for a given regrettable scenario can be a tedious task which necessarily demands the user's intervention. In this paper, we introduce an approach based on decision trees to instantiate the audience component of privacy heuristics with minor intervention from the users. We introduce Disclosure-Acceptance Trees, a data structure representative of the audience component of a heuristic, and describe a method for generating them from user-centred privacy preferences.
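A minimal sketch of the idea behind Disclosure-Acceptance Trees, using a standard decision-tree learner over hypothetical user-centred preference data; the feature names, the toy data set, and the use of scikit-learn are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: learning an audience policy (in the spirit of a
# Disclosure-Acceptance Tree) from user-centred privacy preferences with a
# standard decision tree. Features and labels are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes a contact: [is_family, is_coworker, tie_strength (0-2)]
X = np.array([
    [1, 0, 2],  # close family member
    [1, 0, 1],  # family member, medium tie
    [0, 1, 1],  # coworker, medium tie
    [0, 1, 0],  # coworker, weak tie
    [0, 0, 2],  # close friend
    [0, 0, 0],  # stranger
])
# Label: may this contact see a sensitive post (e.g., health-related)?
y = np.array([1, 1, 0, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["is_family", "is_coworker", "tie_strength"]))

# Contacts the tree classifies as 0 instantiate the audience component of a
# heuristic: the group who should not access the information under this risk.
print(tree.predict([[0, 1, 2]]))  # e.g., a coworker with a strong tie
```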

2017-06-27
Haimson, Oliver L., Brubaker, Jed R., Dombrowski, Lynn, Hayes, Gillian R..  2016.  Digital Footprints and Changing Networks During Online Identity Transitions. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :2895–2907.

Digital artifacts on social media can challenge individuals during identity transitions, particularly those who prefer to delete, separate from, or hide data that are representative of a past identity. This work investigates concerns and practices reported by transgender people who transitioned while active on Facebook. We analyze open-ended survey responses from 283 participants, highlighting the types of data considered problematic when separating oneself from a past identity, and the challenges people face and strategies they employ when managing personal data in a networked environment. We find that people shape their digital footprints in two ways: by editing the self-presentational data representative of a prior identity, and by managing the configuration of people who have access to that self-presentation. We outline the challenging interplay between shifting identities, social networks, and the data that suture them together. We apply these results to a discussion of the complexities of managing and forgetting the digital past.

2017-05-19
Zhang, Sixuan, Yu, Liang, Wakefield, Robin L., Leidner, Dorothy E..  2016.  Friend or Foe: Cyberbullying in Social Network Sites. SIGMIS Database. 47:51–71.

As the use of social media technologies proliferates in organizations, it is important to understand the nefarious behaviors, such as cyberbullying, that may accompany such technology use and how to discourage these behaviors. We draw from neutralization theory and the criminological theory of general deterrence to develop and empirically test a research model that explains why cyberbullying may occur and how the behavior may be discouraged. We created a research model of three second-order formative constructs to examine their predictive influence on intentions to cyberbully. We used PLS-SEM to analyze the responses of 174 Facebook users in two different cyberbullying scenarios. Our model suggests that neutralization techniques enable cyberbullying behavior and that, while sanction certainty is an important deterrent, sanction severity appears ineffective. We discuss the theoretical and practical implications of our model and results.
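A minimal numeric sketch of the model's structural claim on synthetic data, using ordinary least squares as a simplified stand-in for PLS-SEM (which would estimate the formative measurement model and structural model jointly); the construct scores, effect sizes, and variable names below are invented for illustration:

```python
# Hypothetical sketch: the structural part of the paper's model on synthetic
# data, with OLS as a simplified stand-in for PLS-SEM. Scores and effect
# sizes are invented; only the sign pattern mirrors the reported findings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 174  # sample size reported in the abstract

# Composite scores for the three predictor constructs (normally built from
# formative indicators in the PLS measurement model).
neutralization = rng.normal(size=n)
sanction_certainty = rng.normal(size=n)
sanction_severity = rng.normal(size=n)

# Synthetic intention consistent with the reported pattern: neutralization
# enables, sanction certainty deters, sanction severity has no reliable effect.
intention = (0.5 * neutralization
             - 0.4 * sanction_certainty
             + 0.0 * sanction_severity
             + rng.normal(scale=0.8, size=n))

X = sm.add_constant(np.column_stack(
    [neutralization, sanction_certainty, sanction_severity]))
model = sm.OLS(intention, X).fit()
print(model.summary(xname=["const", "neutralization", "certainty", "severity"]))
```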

2017-02-27
Chessa, M., Grossklags, J., Loiseau, P..  2015.  A Game-Theoretic Study on Non-monetary Incentives in Data Analytics Projects with Privacy Implications. 2015 IEEE 28th Computer Security Foundations Symposium. :90–104.

The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance, including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks to the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all. From the analyst's perspective, we investigate to what degree the analyst has flexibility to set requirements for data precision so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this tradeoff scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.
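A sketch of one plausible instantiation of this kind of game: each individual chooses a precision for their contribution, the estimation variance is the inverse of total precision, and best responses are iterated to an approximate Nash equilibrium. The cost function, parameter values, and the omission of an explicit opt-out decision are simplifying assumptions for illustration, not the paper's exact model:

```python
# Hypothetical sketch of the kind of game studied in the paper: each user i
# chooses a precision lam[i] for their contributed data; the analyst's
# estimation variance is 1/sum(lam); user i's cost is a private privacy cost
# c[i]*lam[i] plus the shared estimation variance. Functional forms and
# parameters are illustrative assumptions, not the paper's exact model.
import numpy as np

def best_response_equilibrium(c, lam_min=0.0, lam_max=5.0, iters=1000, tol=1e-10):
    """Iterate best responses until an (approximate) Nash equilibrium."""
    lam = np.full_like(c, lam_max / 2)
    for _ in range(iters):
        old = lam.copy()
        for i in range(len(c)):
            others = lam.sum() - lam[i]
            # Minimize c[i]*x + 1/(x + others): interior optimum at
            # x = 1/sqrt(c[i]) - others, then clip to the allowed range,
            # where lam_min is the analyst's lower bound on precision.
            lam[i] = np.clip(1.0 / np.sqrt(c[i]) - others, lam_min, lam_max)
        if np.abs(lam - old).max() < tol:
            break
    return lam

c = np.linspace(0.2, 1.0, 10)  # heterogeneous privacy costs

# Compare equilibria without and with the analyst's precision floor: the
# floor counters free-riding and lowers the estimation variance.
for lam_min in (0.0, 0.4):
    lam = best_response_equilibrium(c, lam_min=lam_min)
    print(f"lam_min={lam_min}: total precision={lam.sum():.3f}, "
          f"estimation variance={1.0 / lam.sum():.3f}")
```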