Biblio

Filters: Author is Saxena, Nitesh
Prakash, Jay, Yu, Clarice Chua Qing, Thombre, Tanvi Ravindra, Bytes, Andrei, Jubur, Mohammed, Saxena, Nitesh, Blessing, Lucienne, Zhou, Jianying, Quek, Tony Q.S.  2021.  Countering Concurrent Login Attacks in “Just Tap” Push-based Authentication: A Redesign and Usability Evaluations. 2021 IEEE European Symposium on Security and Privacy (EuroS&P). :21–36.
In this paper, we highlight a fundamental vulnerability associated with the widely adopted “Just Tap” push-based authentication in the face of a concurrency attack, and propose REPLICATE, a redesign to counter this vulnerability. In the concurrency attack, the attacker launches a login session at the same time the user initiates a session, and the user may be fooled, with high likelihood, into accepting the push notification that corresponds to the attacker's session, thinking it is their own. The attack stems from the fact that, in the Just Tap approach, the login notification is not explicitly mapped to the login session running on the browser. REPLICATE attempts to address this fundamental flaw by having the user approve the login attempt by replicating the information presented in the browser session over to the login notification, such as by moving a key in a particular direction or choosing a particular shape. We report on the design and a systematic usability study of REPLICATE. Even without being aware of the vulnerability, participants generally rated multiple variants of REPLICATE as competitive with Just Tap and well above PIN-based authentication.
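The countermeasure hinges on binding each login session to its own random challenge that is rendered in the browser and must be echoed in the push approval. A minimal sketch of that server-side check follows; it is not the authors' implementation, and the in-memory session store, the shape-based challenge space, and the function names are illustrative assumptions.

```python
# Sketch of a REPLICATE-style check (illustrative only): each login session is bound
# to a random challenge shown in the browser, and the push approval is accepted only
# if the user reproduces that challenge, so an attacker's concurrent session (bound
# to a different challenge) is rejected with high probability.
import secrets

SHAPES = ["circle", "square", "triangle", "star"]   # assumed challenge space

class LoginServer:
    def __init__(self):
        self.sessions = {}   # session_id -> challenge displayed in the browser

    def start_login(self, username):
        session_id = secrets.token_hex(8)
        challenge = secrets.choice(SHAPES)
        self.sessions[session_id] = challenge
        # The browser shows `challenge`; the push notification asks the user to
        # pick the shape they see on their own screen.
        return session_id, challenge

    def approve_push(self, session_id, replicated_choice):
        expected = self.sessions.pop(session_id, None)
        return expected is not None and replicated_choice == expected

server = LoginServer()
sid, shown = server.start_login("alice")
print(server.approve_push(sid, shown))    # legitimate session: True
aid, _ = server.start_login("alice")      # attacker's concurrent session
print(server.approve_push(aid, shown))    # usually False: challenges rarely match
```

With a larger challenge space (directions, shapes, positions), the chance that the attacker's session happens to carry the same challenge shrinks accordingly.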
Shrestha, Prakash, Saxena, Nitesh, Shukla, Diksha, Phoha, Vir V..  2021.  Press $@$@$$ to Login: Strong Wearable Second Factor Authentication via Short Memorywise Effortless Typing Gestures. 2021 IEEE European Symposium on Security and Privacy (EuroS&P). :71–87.
The use of wearable devices (e.g., smartwatches) in two-factor authentication (2FA) is fast emerging, as wearables promise better usability compared to smartphones. Still, the current deployments of wearable 2FA have significant usability and security issues. Specifically, one-time PIN-based wearable 2FA (PIN-2FA) requires noticeable user effort to open the app and copy random PINs from the wearable to the login terminal's (desktop/laptop) browser. An alternative approach, based on one-tap approvals via push notifications (Tap-2FA), relies upon user decision making to thwart attacks and is prone to skip-through. Both approaches are also vulnerable to traditional phishing attacks. To address this security-usability tension, we introduce a fundamentally different design of wearable 2FA, called SG-2FA, involving wrist-movement “seamless gestures” captured near-transparently by the second-factor wearable device while the user types a very short special sequence on the browser during the login process. Typing the special sequence creates a wrist gesture that, when identified correctly, uniquely associates the login attempt with the device's owner. The special sequence can be fixed (e.g., “$@$@$$”), does not need to be a secret, and does not need to be memorized (it could simply be displayed on the browser). This design improves usability over PIN-2FA since only this short sequence has to be typed as part of the login process (no interaction with or diversion of attention to the wearable, and no copying of random PINs, is needed). It also greatly improves security compared to Tap-2FA since the attacker cannot succeed in logging in unless the user's wrist is undergoing the exact same gesture at the exact same time. Moreover, the approach is phishing-resistant and privacy-preserving (unlike behavioral biometrics). Our results show that SG-2FA incurs only minimal errors in both benign and adversarial settings based on appropriate parameterizations.
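The security argument rests on the keystrokes and the wrist motion being produced by the same hand at the same time. A rough sketch of that intuition follows, assuming hypothetical inputs (browser-reported keystroke timestamps and accelerometer magnitudes from the watch); the paper's actual system uses a learned gesture model rather than the simple time-alignment check shown here.

```python
# Illustrative check (not SG-2FA's classifier): accept a login only if most keystrokes
# of the short special sequence coincide with a wrist-acceleration peak on the watch.
import numpy as np

def gesture_matches(key_times, acc_times, acc_magnitude,
                    window=0.15, peak_threshold=1.5, min_fraction=0.8):
    """Return True if most keystrokes align with a nearby acceleration peak."""
    hits = 0
    for t in key_times:
        nearby = acc_magnitude[(acc_times >= t - window) & (acc_times <= t + window)]
        if nearby.size and nearby.max() >= peak_threshold:
            hits += 1
    return hits / len(key_times) >= min_fraction

# Toy data: six keystrokes and a 100 Hz accelerometer trace with impulses near them.
key_times = np.array([0.5, 0.8, 1.1, 1.4, 1.7, 2.0])
acc_times = np.arange(0, 3, 0.01)
acc_magnitude = np.ones_like(acc_times)
for t in key_times:
    acc_magnitude[np.abs(acc_times - t) < 0.03] = 2.0   # simulated wrist impulses
print(gesture_matches(key_times, acc_times, acc_magnitude))   # True
```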
Shrestha, Babins, Mohamed, Manar, Saxena, Nitesh.  2019.  ZEMFA: Zero-Effort Multi-Factor Authentication based on Multi-Modal Gait Biometrics. 2019 17th International Conference on Privacy, Security and Trust (PST). :1–10.
In this paper, we consider the problem of transparently authenticating a user to a local terminal (e.g., a desktop computer) as she approaches the terminal. Given its appealing usability, such zero-effort authentication has already been deployed in the real world, where a computer terminal or a vehicle can be unlocked by the mere proximity of an authentication token (e.g., a smartphone). However, existing systems based on a single authentication factor contain one major security weakness: unauthorized physical access to the token, e.g., during lunchtime or upon theft, allows the attacker to have unfettered access to the terminal. We introduce ZEMFA, a zero-effort multi-factor authentication system based on multiple authentication tokens and multi-modal behavioral biometrics. Specifically, ZEMFA utilizes two types of authentication tokens, a smartphone and a smartwatch (or a bracelet), and two types of gait patterns captured by these tokens: mid/lower-body movements measured by the phone and wrist/arm movements captured by the watch. Since a user's walking or gait pattern is believed to be unique, only that user (no impostor) would be able to gain access to the terminal, even when the impostor is given access to both authentication tokens. We present the design and implementation of ZEMFA. We demonstrate that ZEMFA offers a high degree of detection accuracy, based on multi-sensor and multi-device fusion. We also show that ZEMFA can resist active attacks that attempt to mimic a user's walking pattern, especially when multiple devices are used.
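One simple way to picture the multi-device fusion is score-level fusion of per-device gait classifiers. The weighted-sum rule, weights, and threshold below are illustrative assumptions rather than the paper's actual fusion method.

```python
# Illustrative score-level fusion: each device's gait classifier emits a similarity
# score in [0, 1] for the claimed user, and the terminal unlocks only if the weighted
# combination of phone and watch scores clears a threshold.
def fuse_gait_scores(phone_score, watch_score, w_phone=0.6, w_watch=0.4, threshold=0.7):
    fused = w_phone * phone_score + w_watch * watch_score
    return fused >= threshold, fused

print(fuse_gait_scores(phone_score=0.82, watch_score=0.75))   # (True, ~0.79): genuine walker
print(fuse_gait_scores(phone_score=0.40, watch_score=0.35))   # (False, ~0.38): impostor holding both tokens
```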
Shrestha, Prakash, Saxena, Nitesh.  2018.  Listening Watch: Wearable Two-Factor Authentication Using Speech Signals Resilient to Near-Far Attacks. Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :99–110.
Reducing the level of user effort involved in traditional two-factor authentication (TFA) constitutes an important research topic. A recent effort in this direction leverages ambient sounds to detect the proximity between the second-factor device (phone) and the login terminal (browser), and eliminates the need for the user to transfer PIN codes. This approach is highly usable, but is completely vulnerable against far-near attackers, i.e., ones who are remotely located and can guess the victim's audio environment or make the phone create predictable sounds (e.g., ringers), and those who are in physical proximity of the user. In this paper, we propose Listening-Watch, a new TFA mechanism based on a wearable device (watch/bracelet) and active browser-generated random speech sounds. As the user attempts to log in, the browser plays a short random code encoded into speech, and the login succeeds if the watch's audio recording contains this code (decoded using speech recognition) and is similar enough to the browser's audio recording. The remote attacker, who has guessed the user's environment or created predictable phone/watch sounds, will be defeated since authentication success relies upon the presence of the random code in the watch's recordings. The proximity attacker will also be defeated unless they are extremely close to the watch, since wearable microphones are usually designed to be capable of picking up only nearby sounds (e.g., voice commands). Furthermore, due to the use of a wearable second-factor device, Listening-Watch naturally enables two-factor security even when logging in from a mobile phone. Our contributions are three-fold. First, we introduce the idea of strong and low-effort TFA based on wearable devices, active speech sounds and speech recognition, giving rise to the Listening-Watch system that is secure against both remote and proximity attackers. Second, we design and implement Listening-Watch for an Android smartwatch (and companion smartphone) and the Chrome browser, without the need for any browser plugins. Third, we evaluate Listening-Watch for authentication errors in both benign and adversarial settings. Our results show that Listening-Watch results in minimal errors in both settings, given appropriate thresholds and speaker volume levels.
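The login decision described above combines two independent checks: the random code must be recognized in the watch's recording, and that recording must be acoustically similar to what the browser played. The sketch below illustrates that structure under assumed interfaces; `decoded_code` stands for the output of a speech recognizer run on the watch's audio, and plain normalized correlation stands in for the paper's audio-similarity measure.

```python
# Illustrative two-part verification (not the Listening-Watch implementation).
import numpy as np

def audio_similarity(browser_audio, watch_audio):
    """Normalized correlation of the two recordings (a stand-in similarity metric)."""
    a = (browser_audio - browser_audio.mean()) / (browser_audio.std() + 1e-9)
    b = (watch_audio - watch_audio.mean()) / (watch_audio.std() + 1e-9)
    n = min(len(a), len(b))
    return float(np.dot(a[:n], b[:n]) / n)

def verify_login(spoken_code, decoded_code, browser_audio, watch_audio, sim_threshold=0.6):
    # Accept only if the random code was heard by the watch AND the watch's recording
    # resembles the speech the browser actually played.
    return decoded_code == spoken_code and \
           audio_similarity(browser_audio, watch_audio) >= sim_threshold

# Toy example: the "watch" hears the browser's audio plus a little noise.
t = np.linspace(0, 1, 8000)
played = np.sin(2 * np.pi * 440 * t)              # audio the browser plays
heard = played + 0.1 * np.random.randn(len(t))    # what a nearby watch records
print(verify_login("4 7 2 9", "4 7 2 9", played, heard))   # True
```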
Liu, Jian, Wang, Chen, Chen, Yingying, Saxena, Nitesh.  2017.  VibWrite: Towards Finger-input Authentication on Ubiquitous Surfaces via Physical Vibration. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :73–87.

The goal of this work is to enable user authentication via finger inputs on ubiquitous surfaces leveraging low-cost physical vibration. We propose VibWrite, which extends finger-input authentication beyond touch screens to any solid surface for smart access systems (e.g., access to apartments, vehicles or smart appliances). It integrates passcode, behavioral and physiological characteristics, and surface dependency together to provide a low-cost, tangible and enhanced security solution. VibWrite builds upon a touch-sensing technique with vibration signals that can operate on surfaces constructed from a broad range of materials. It is significantly different from traditional password-based approaches, which only authenticate the password itself rather than the legitimate user, and from behavioral-biometrics-based solutions, which usually involve specific or expensive hardware (e.g., a touch screen or fingerprint reader), incur privacy concerns, and suffer from smudge attacks. VibWrite is based on new algorithms to discriminate fine-grained finger inputs and supports three independent passcode secrets (PINs, lock patterns, and simple gestures) by extracting unique features in the frequency domain to capture both behavioral and physiological characteristics such as contact area and touching force. VibWrite is implemented using a single pair of a low-cost vibration motor and a receiver that can be easily attached to any surface (e.g., a door panel, a desk or an appliance). Our extensive experiments demonstrate that VibWrite can authenticate users with high accuracy (e.g., over 95% within two trials) and a low false positive rate (e.g., less than 3%), and is robust to various types of attacks.
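As a rough illustration of the frequency-domain feature idea mentioned above, the sketch below extracts per-band energies from a received vibration signal and compares them against an enrolled template. The band layout, distance metric, and threshold are assumptions for illustration, not VibWrite's actual pipeline.

```python
# Illustrative frequency-domain matching for a vibration-based touch signal.
import numpy as np

def band_energy_features(signal, n_bands=16):
    """Normalized energy in n_bands frequency bands of the received signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([b.sum() for b in bands])
    return feats / (feats.sum() + 1e-12)

def matches_template(signal, template_feats, threshold=0.15):
    """Accept if the signal's band-energy profile is close to the enrolled template."""
    return np.linalg.norm(band_energy_features(signal) - template_feats) <= threshold

# Enrollment: average the features over a few legitimate touch recordings
# (placeholder noise is used here in place of real vibration captures).
enroll = [np.random.randn(8000) for _ in range(3)]
template = np.mean([band_energy_features(s) for s in enroll], axis=0)
print(matches_template(enroll[0], template))   # True for an enrolled-style signal
```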

Mohamed, Manar, Saxena, Nitesh.  2016.  Gametrics: Towards Attack-resilient Behavioral Authentication with Simple Cognitive Games. Proceedings of the 32Nd Annual Conference on Computer Security Applications. :277–288.

Authenticating a user based on her unique behavioral biometric traits has been extensively researched over the past few years. The most researched behavioral biometric techniques are based on keystroke and mouse dynamics. These schemes, however, have been shown to be vulnerable to human-based and robotic attacks that attempt to mimic the user's behavioral pattern to impersonate the user. In this paper, we aim to verify the user's identity through the use of active, cognition-based user interaction in the authentication process. Such interaction promises two key advantages. First, it may enhance the security of the authentication process, as multiple rounds of active interaction serve as a mechanism to protect against several types of attacks, including zero-effort attacks, expert trained attackers, and automated attacks. Second, it may enhance the usability of the authentication process by actively engaging the user. We explore the cognitive authentication paradigm through very simple interactive challenges, called Dynamic Cognitive Games, which involve objects floating around within the images, where the user's task is to match the objects with their respective target(s) and drag/drop them to the target location(s). Specifically, we introduce, build and study Gametrics ("Game-based biometrics"), an authentication mechanism based on the unique way the user solves such simple challenges, captured by multiple features related to her cognitive abilities and mouse dynamics. Based on a comprehensive data set collected in both online and lab settings, we show that Gametrics can identify users with high accuracy (false negative rates, FNR, as low as 0.02) while rejecting zero-effort attackers (false positive rates, FPR, as low as 0.02). Moreover, Gametrics shows promising results in defending against expert attackers that try to learn and later mimic the user's pattern of solving the challenges (FPR for expert human attackers as low as 0.03). Furthermore, we argue that the proposed biometric is hard to replay or spoof by automated means, such as robots or malware attacks.
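To make the feature idea concrete, the sketch below derives a few per-challenge mouse-dynamics statistics from the cursor trace of one drag-and-drop round and verifies them against an enrolled profile with a simple z-score rule. The feature set and the verifier are illustrative assumptions, not Gametrics' actual features or classifier.

```python
# Illustrative per-round features and a toy verifier for game-based behavioral biometrics.
import numpy as np

def round_features(events):
    """events: array of (t, x, y) cursor samples recorded while dragging one object."""
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    step = np.hypot(np.diff(x), np.diff(y))
    speed = step / np.maximum(np.diff(t), 1e-6)
    return np.array([
        t[-1] - t[0],    # time to complete the drag
        step.sum(),      # total path length
        speed.mean(),    # average cursor speed
        speed.std(),     # speed variability
    ])

def verify(user_mean, user_std, events, z_limit=3.0):
    """Accept the round if every feature lies within z_limit of the user's profile."""
    z = np.abs(round_features(events) - user_mean) / np.maximum(user_std, 1e-6)
    return bool(np.all(z <= z_limit))

# Toy enrollment: five simulated drags from the same "user", then verify one of them.
rng = np.random.default_rng(0)
enroll = [np.column_stack([np.linspace(0, 1.2, 60),
                           np.cumsum(rng.normal(2.0, 0.3, 60)),
                           np.cumsum(rng.normal(1.0, 0.3, 60))]) for _ in range(5)]
profile = np.array([round_features(e) for e in enroll])
print(verify(profile.mean(axis=0), profile.std(axis=0), enroll[0]))   # True
```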

Mohamed, Manar, Shrestha, Babins, Saxena, Nitesh.  2016.  SMASheD: Sniffing and Manipulating Android Sensor Data. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :152–159.

The current Android sensor security model either allows only restrictive read access to sensitive sensors (e.g., an app can only read its own touch data) or requires special install-time permissions (e.g., to read the microphone, camera or GPS). Moreover, Android does not allow write access to any of the sensors. Sensing-based security applications therefore crucially rely upon the sanity of the Android sensor security model. In this paper, we show that such a model can be effectively circumvented. Specifically, we build SMASheD, a legitimate framework under the current Android ecosystem that can be used to stealthily sniff as well as manipulate many of Android's restricted sensors (even touch input). SMASheD exploits Android Debug Bridge (ADB) functionality and enables a malicious app with only the INTERNET permission to read, and write to, multiple different sensor data files at will. SMASheD is the first framework, to our knowledge, that can sniff and manipulate protected sensors on unrooted Android devices, without user awareness, without a constant device-PC connection, and without the need to infect the PC. The primary contributions of this work are two-fold. First, we design and develop the SMASheD framework. Second, as an offensive implication of the SMASheD framework, we introduce a wide array of potentially devastating attacks. Our attacks against the touch sensor range from accurately logging touchscreen input (TouchLogger) to injecting touch events for accessing restricted sensors and resources, installing and granting special permissions to other malicious apps, accessing user accounts, and authenticating on behalf of the user, essentially doing almost anything the device user can do (secretively). Our attacks against various physical sensors (motion, position and environmental) can subvert the functionality provided by numerous existing sensing-based security applications, including those used for (continuous) authentication and authorization.
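For context on the channel being abused, the sketch below only illustrates the documented ADB interface the framework builds on: with USB debugging enabled, `adb shell getevent` exposes raw, timestamped input events to a connected machine. The device node is an assumption (it varies by device), and this is not SMASheD itself, which notably removes the need for a constant device-PC connection.

```python
# Illustration of the ADB input-event interface (requires adb on PATH and a
# USB-debugging-enabled device); prints labeled touch coordinates as they occur.
import subprocess

def stream_touch_events(device_node="/dev/input/event2"):   # node is device-specific
    proc = subprocess.Popen(
        ["adb", "shell", "getevent", "-lt", device_node],
        stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            if "ABS_MT_POSITION" in line:   # x/y coordinates of a touch contact
                print(line.strip())
    finally:
        proc.terminate()

# stream_touch_events()   # uncomment to run against a connected device
```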