VR, Deepfakes and Epistemic Security

Title: VR, Deepfakes and Epistemic Security
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Aliman, Nadisha-Marie; Kester, Leon
Conference Name: 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
Keywords: AIVR Ethics, composability, Cyber physical system, cyber physical systems, Deepfakes, Epistemic Security, Ethics, Human Behavior, human factors, immersive systems, privacy, pubcrawl, resilience, Resiliency, Safety, threat modeling, Training, Veins, virtual reality, VR
Abstract: In recent years, technological advancements in the AI and VR fields have increasingly often been paired with considerations on ethics and safety aimed at mitigating unintentional design failures. However, cybersecurity-oriented AI and VR safety research has emphasized the need to additionally appraise instantiations of intentional malice exhibited by unethical actors at pre- and post-deployment stages. On top of that, in view of ongoing malicious deepfake developments that can represent a threat to the epistemic security of a society, security-aware AI and VR design strategies require an epistemically-sensitive stance. In this vein, this paper provides a theoretical basis for two novel AIVR safety research directions: 1) VR as immersive testbed for a VR-deepfake-aided epistemic security training and 2) AI as catalyst within a deepfake-aided so-called cyborgnetic creativity augmentation facilitating an epistemically-sensitive threat modelling. For illustration, we focus our use case on deepfake text - an underestimated deepfake modality. In the main, the two proposed transdisciplinary lines of research exemplify how AIVR safety to defend against unethical actors could naturally converge toward AIVR ethics whilst counteracting epistemic security threats.
Notes: ISSN 2771-7453
DOI: 10.1109/AIVR56993.2022.00019
Citation Key: aliman_vr_2022