Biblio

Filters: Author is Yan, Chen
2022-05-10
Ji, Xiaoyu, Cheng, Yushi, Zhang, Yuepeng, Wang, Kai, Yan, Chen, Xu, Wenyuan, Fu, Kevin.  2021.  Poltergeist: Acoustic Adversarial Machine Learning against Cameras and Computer Vision. 2021 IEEE Symposium on Security and Privacy (SP). :160–175.
Autonomous vehicles increasingly exploit computer-vision-based object detection systems to perceive environments and make critical driving decisions. To increase the quality of images, image stabilizers with inertial sensors are added to alleviate image blurring caused by camera jitters. However, such a trend opens a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of the emerging image stabilizer hardware susceptible to acoustic manipulation and the object detection algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even if the camera is stable. The blurred images can then induce object misclassification affecting safety-critical decision making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks, i.e., hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against four academic object detectors (YOLO V3/V4/V5 and Fast R-CNN), and one commercial detector (Apollo). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.
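A minimal sketch of the blur mechanism the abstract describes, not code from the paper: a spoofed inertial reading makes the stabilizer compensate for motion that never happened, which is roughly equivalent to convolving the frame with a linear motion-blur kernel whose length scales with the injected angular rate. The function names, the horizontal blur direction, and the focal-length and exposure defaults below are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available for image filtering

def motion_blur_kernel(length_px: float) -> np.ndarray:
    """Horizontal linear motion-blur kernel; the direction is a simplification,
    a real attack would steer it via the spoofed gyro/accelerometer axis."""
    length_px = max(int(length_px), 1)
    k = np.zeros((length_px, length_px), dtype=np.float32)
    k[length_px // 2, :] = 1.0
    return k / k.sum()

def simulate_acoustic_blur(frame, spoofed_rate_rad_s, exposure_s=1 / 30, focal_px=1000.0):
    """Blur a frame as if the stabilizer compensated for an injected angular rate.
    Smear length in pixels ~ focal length * angular rate * exposure time."""
    length_px = focal_px * spoofed_rate_rad_s * exposure_s
    return cv2.filter2D(frame, -1, motion_blur_kernel(length_px))

# Hypothetical usage: blur a frame, then feed it to any object detector
# to check whether detections are hidden, created, or altered.
# frame = cv2.imread("road_scene.jpg")
# blurred = simulate_acoustic_blur(frame, spoofed_rate_rad_s=0.5)
```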
2020-03-18
Zhou, Xinyan, Ji, Xiaoyu, Yan, Chen, Deng, Jiangyi, Xu, Wenyuan.  2019.  NAuth: Secure Face-to-Face Device Authentication via Nonlinearity. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. :2080–2088.
With the increasing prevalence of mobile devices, face-to-face device-to-device (D2D) communication has been applied to a variety of daily scenarios such as mobile payment and short distance file transfer. In D2D communications, a critical security problem is verifying the legitimacy of devices when they share no secrets in advance. Previous research addressed the problem with device authentication and pairing schemes based on user intervention or on exploiting physical properties of the radio or acoustic channels. However, a remaining challenge is to secure face-to-face D2D communication even in the middle of a crowd, within which an attacker may hide. In this paper, we present NAuth, a nonlinearity-enhanced, location-sensitive authentication mechanism for such communication. In particular, we target secure authentication within a limited range such as 20 cm, which is the common case for face-to-face scenarios. NAuth contains a verification scheme based on the nonlinear distortion of speaker-microphone systems and a location-based validation model. The verification scheme guarantees device authentication consistency by extracting acoustic nonlinearity patterns (ANP), while the validation model ensures device legitimacy by measuring the time difference of arrival (TDOA) at two microphones. We analyze the security of NAuth theoretically and evaluate its performance experimentally. Results show that NAuth can verify the device legitimacy in the presence of nearby attackers.
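A minimal sketch of the TDOA-based validation step, assuming two microphone recordings sampled at a common rate: estimate the arrival-time difference via cross-correlation and accept only if it is consistent with a transmitter directly in front of the device. The function names, microphone spacing, and tolerance are illustrative assumptions, and the ANP verification step is not shown.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, fs: int) -> float:
    """Estimate the time difference of arrival (seconds) of mic_a relative
    to mic_b using full cross-correlation."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag_samples = np.argmax(corr) - (len(mic_b) - 1)
    return lag_samples / fs

def within_face_to_face_range(mic_a, mic_b, fs, mic_spacing_m=0.15, tolerance_s=50e-6):
    """Illustrative acceptance rule: the measured TDOA must not exceed the
    geometric bound set by the spacing of the two microphones."""
    tdoa = estimate_tdoa(mic_a, mic_b, fs)
    max_tdoa = mic_spacing_m / SPEED_OF_SOUND
    return abs(tdoa) <= max_tdoa + tolerance_s

# Hypothetical usage with two synchronized recordings at 48 kHz:
# accepted = within_face_to_face_range(top_mic_signal, bottom_mic_signal, fs=48_000)
```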
2018-02-27
Zhang, Guoming, Yan, Chen, Ji, Xiaoyu, Zhang, Tianchen, Zhang, Taimin, Xu, Wenyuan.  2017.  DolphinAttack: Inaudible Voice Commands. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :103–117.
Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though "hidden", are nonetheless audible. In this work, we design a totally inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and more importantly interpreted by the speech recognition systems. We validated DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions, and suggest re-designing voice controllable systems to be resilient to inaudible voice command attacks.
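A minimal sketch of the signal path the abstract describes, under a simple square-law model of microphone nonlinearity: amplitude-modulate a baseband "command" onto an ultrasonic carrier, then show that a quadratic term followed by a low-pass stage recovers the baseband. The carrier frequency, modulation depth, nonlinearity coefficients, and filter design are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                       # sample rate high enough to represent a 30 kHz carrier
t = np.arange(0, 0.5, 1 / fs)

baseband = np.sin(2 * np.pi * 400 * t)                # stand-in for a voice command
carrier_hz = 30_000                                    # ultrasonic (> 20 kHz), inaudible
modulated = (1 + 0.8 * baseband) * np.cos(2 * np.pi * carrier_hz * t)  # AM, depth 0.8

# Square-law model of the microphone circuit: output ~ a1*x + a2*x^2.
# The x^2 term of the AM signal contains a copy of the baseband command.
nonlinear = 0.5 * modulated + 0.1 * modulated ** 2

# The microphone/ADC chain low-pass filters away the ultrasonic components,
# leaving the demodulated command that the speech recognizer receives.
b, a = butter(6, 8_000, btype="low", fs=fs)
recovered = filtfilt(b, a, nonlinear)

# High correlation with the original command illustrates why the inaudible
# signal becomes an intelligible command inside the circuit.
recovered_ac = recovered - recovered.mean()
print(f"correlation with original command: {np.corrcoef(recovered_ac, baseband)[0, 1]:.2f}")
```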