Biblio

Filters: Keyword is ethical aspects
2021-03-29
Distler, V., Lallemand, C., Koenig, V.  2020.  Making Encryption Feel Secure: Investigating how Descriptions of Encryption Impact Perceived Security. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :220–229.

When communication about security to end users is ineffective, people frequently misinterpret the protection offered by a system. The discrepancy between the security users perceive a system to have and the actual system state can lead to potentially risky behaviors. It is thus crucial to understand how security perceptions are shaped by interface elements such as text-based descriptions of encryption. This article addresses the question of how encryption should be described to non-experts in a way that enhances perceived security. We tested the following within-subject variables in an online experiment (N=309): a) how best to word encryption, b) whether encryption should be described with a focus on the process, the outcome, or both, c) whether the objective of encryption should be mentioned, d) when mentioning the objective of encryption, how best to describe it, and e) whether a hash should be displayed to the user. We also investigated the role of context (between subjects). The verbs "encrypt" and "secure" performed comparatively well at enhancing perceived security. Overall, participants stated that they felt more secure not knowing about the objective of encryption. When it is necessary to state the objective, positive wording of the objective of encryption worked best. We discuss the implications, including why using these results to design for a perceived lack of security might also be of interest. This leads us to discuss ethical concerns, and we give guidelines for the design of user interfaces where encryption must be communicated to end users.

2020-12-01
Poulsen, A., Burmeister, O. K., Tien, D.  2018.  Care Robot Transparency Isn't Enough for Trust. 2018 IEEE Region Ten Symposium (Tensymp). :293–297.

A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same kind of 'inspection of thoughts' isn't expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. However, the participants were found to desire transparency only when they didn't know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies other means of determining trustworthiness. The study reported here suggests that this means is social interaction and observation, signifying that trust is a social construct, and that these 'social determinants of trust' are the transparent elements. This socially determined behaviour draws on notions of virtue ethics: if a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue doing so. The same social determinants may apply to care robots, and thus it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robot ethical decision-making. It suggests that roboticists need to build sociable elements into care robots to help patients develop trust in the care robot's ethical decision-making.

2020-11-17
Abdelzaher, T., Ayanian, N., Basar, T., Diggavi, S., Diesner, J., Ganesan, D., Govindan, R., Jha, S., Lepoint, T., Marlin, B., et al.  2018.  Toward an Internet of Battlefield Things: A Resilience Perspective. Computer. 51:24–36.

The Internet of Battlefield Things (IoBT) might be one of the most expensive cyber-physical systems of the next decade, yet much research remains to be done to develop its fundamental enablers. A challenge that distinguishes the IoBT from its civilian counterparts is resilience to a much larger spectrum of threats.

2019-12-18
Shepherd, M. M., Klein, G.  2012.  Using Deterrence to Mitigate Employee Internet Abuse. 2012 45th Hawaii International Conference on System Sciences. :5261–5266.

This study looks at the question of how to reduce or eliminate employee Internet abuse. Companies have used acceptable use policies (AUPs) and technology in an attempt to mitigate employees' personal use of company resources. Research shows that AUPs do not do a good job at this but that technology does. Research also shows that while technology can be used to greatly restrict personal use of the Internet in the workplace, employee satisfaction with the workplace suffers when this is done. In this research experiment we used technology not to restrict employee use of company resources for personal purposes, but to make employees more aware of the current acceptable use policy, and we measured the decrease in employee Internet abuse. The results show that this method can reduce personal use of company networks from 27 to 21 percent.

2017-03-07
Rashid, A., Moore, K., May-Chahal, C., Chitchyan, R.  2015.  Managing Emergent Ethical Concerns for Software Engineering in Society. 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering. 2:523–526.

This paper presents an initial framework for managing emergent ethical concerns during software engineering in society projects. We argue that such emergent considerations can neither be framed as absolute rules about how to act in relation to fixed and measurable conditions, nor addressed by simply framing them as non-functional requirements to be satisficed. Instead, a continuous process is needed, one that accepts the 'messiness' of social life and social research, seeks to understand complexity (rather than seeking clarity), demands collective (not just individual) responsibility, and focuses on dialogue over solutions. The framework was derived from a retrospective analysis of ethical considerations in four software engineering in society projects across three different domains.

2015-05-05
Al Barghuthi, N. B., Said, H.  2014.  Ethics behind Cyber Warfare: A study of Arab citizens' awareness. 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering. :1–7.

Continuing to ignore the consequences of cyber warfare will bring severe concerns to all people. Hackers and governments alike should understand the boundaries within which their methods operate. Governments use cyber warfare to gain a tactical advantage over other countries, defend themselves from their enemies, or inflict damage upon their adversaries. Hackers use cyber warfare to gain personal information, commit crimes, or reveal sensitive and beneficial intelligence. Although both can put these methods to ethical use, the equivalent can be said of the other end of the spectrum. Knowing and comprehending these methods will not only strengthen the ability to detect such attacks and combat them but will also provide a means to expose despotic government plans, as the outcome of cyber warfare can be worse than the outcome of conventional warfare. The paper discusses the concept of ethics and the reasons that led to the use of information technology in military war, the effects of cyber war on civilians, the legality of cyber war, and ways of controlling the use of information technology that may be used against civilians. This research uses a survey methodology to examine the awareness of Arab citizens of the idea of cyber war and to provide findings and evidence on the ethics behind offensive cyber warfare. Detailed strategies and approaches should be developed in this area. The authors recommend urging scientific and technological research centers to improve security and develop defensive systems to prevent the use of technology in military war against civilians.