Biblio
When communication about security is ineffective, end users frequently misinterpret the protection a system offers. The discrepancy between the security users perceive a system to have and its actual state can lead to risky behavior. It is thus crucial to understand how security perceptions are shaped by interface elements such as text-based descriptions of encryption. This article addresses the question of how encryption should be described to non-experts in a way that enhances perceived security. We tested the following within-subject variables in an online experiment (N=309): a) how best to word encryption; b) whether encryption should be described with a focus on the process, the outcome, or both; c) whether the objective of encryption should be mentioned; d) when the objective is mentioned, how best to describe it; and e) whether a hash should be displayed to the user. We also investigated the role of context (between subjects). The verbs "encrypt" and "secure" performed comparatively well at enhancing perceived security. Overall, participants stated that they felt more secure when the objective of encryption was not mentioned. When stating the objective is necessary, positive wording worked best. We discuss the implications, including why these results might also be of interest for designing for a perceived lack of security. This leads us to discuss ethical concerns, and we give guidelines for designing user interfaces that communicate encryption to end users.
A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same kind of `inspection of thoughts' is not expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. However, the participants were found to desire transparency only when they did not know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies that other means of determining trustworthiness exist. The study reported here suggests that this means is social interaction and observation, signifying that trust is a social construct, and that these `social determinants of trust' are the transparent elements. Such socially determined behaviour draws on notions of virtue ethics: if a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue doing so. The same social determinants may apply to care robots, so it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robots' ethical decision-making, and that roboticists need to build sociable elements into care robots to help patients develop trust in a care robot's ethical decision-making.
The Internet of Battlefield Things (IoBT) might be one of the most expensive cyber-physical systems of the next decade, yet much research remains to develop its fundamental enablers. A challenge that distinguishes the IoBT from its civilian counterparts is resilience to a much larger spectrum of threats.
This paper presents an initial framework for managing emergent ethical concerns during software engineering in society projects. We argue that such emergent considerations can neither be framed as absolute rules about how to act in relation to fixed and measurable conditions, nor addressed by simply recasting them as non-functional requirements to be satisficed. Instead, a continuous process is needed, one that accepts the 'messiness' of social life and social research, seeks to understand complexity (rather than seeking clarity), demands collective (not just individual) responsibility, and focuses on dialogue over solutions. The framework has been derived through retrospective analysis of ethical considerations in four software engineering in society projects across three different domains.
Continuing to ignore the consequences of cyber warfare will raise severe concerns for everyone. Hackers and governments alike should understand the boundaries within which their methods operate. Governments use cyber warfare to gain a tactical advantage over other countries, to defend themselves from their enemies, or to inflict damage on their adversaries. Hackers use cyber warfare to obtain personal information, commit crimes, or reveal sensitive intelligence. Although both can serve ethical purposes, the same can be said of the opposite end of the spectrum. Knowing and comprehending these methods will not only strengthen the ability to detect such attacks and combat them but will also provide means to expose despotic government plans, as the outcome of cyber warfare can be worse than that of conventional warfare. This paper discusses the concept of ethics, the reasons that led to the use of information technology in military conflict, the effects of cyber war on civilians, the legality of cyber war, and ways of controlling the use of information technology against civilians. The research uses a survey methodology to assess the awareness of Arab citizens of the idea of cyber war and to provide findings and evidence concerning the ethics of offensive cyber warfare. Detailed strategies and approaches should be developed in this area. The author recommends urging scientific and technological research centers to improve security and develop defensive systems to prevent the use of technology against civilians in military conflict.