Systems like Amazon's Alexa, Google Home, and Apple's Siri allow users to issue voice commands and pose questions to personal digital assistants. Since these systems often have access to sensitive data and can perform tasks with serious impact (e.g., spending money to make a purchase), attacks against them could have significant consequences. Unfortunately, recent research has shown that attacks against such voice-based interfaces are feasible. This project is exploring methods of securing the voice interfaces of smartphones and other devices to ensure that commands are accepted only from the devices' owners. The researchers recently introduced the notion of hidden voice commands: audio that is constructed to be interpreted as voice commands by a computer speech recognition system, yet is incomprehensible to human listeners. In theory, attackers could use hidden voice commands to surreptitiously control victims' smartphones and other electronic devices. Is this a realistic threat in practice? This project is studying both the practicality of the threat and approaches to protection. First, the researchers are investigating whether an attacker could construct hidden voice commands efficiently and covertly, and whether such commands could operate under realistic conditions and circumvent previously proposed defenses. Second, they are developing scalable detection techniques and defenses that reliably and efficiently prevent such attacks.
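
To make the "commands accepted only from the device's owner" goal concrete, the following is a minimal sketch of a speaker check, not the project's actual defense: it assumes a crude voice signature built from averaged MFCC features (real systems would use a trained speaker-verification model), and the file names (owner_1.wav, incoming_command.wav) and the similarity threshold are hypothetical placeholders.

    # Illustrative sketch only: a crude owner-voice check based on averaged MFCCs.
    # Production speaker verification uses trained embedding models; the file
    # paths and threshold below are hypothetical.
    import numpy as np
    import librosa


    def mfcc_embedding(path: str, sr: int = 16000, n_mfcc: int = 20) -> np.ndarray:
        """Load an audio file and summarize it as the mean of its MFCC frames."""
        y, _ = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
        return mfcc.mean(axis=1)


    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


    def enroll_owner(sample_paths: list[str]) -> np.ndarray:
        """Build the owner's reference embedding from a few enrollment recordings."""
        return np.mean([mfcc_embedding(p) for p in sample_paths], axis=0)


    def accept_command(command_path: str, owner_embedding: np.ndarray,
                       threshold: float = 0.85) -> bool:
        """Accept the command only if it is sufficiently similar to the owner's voice."""
        similarity = cosine_similarity(mfcc_embedding(command_path), owner_embedding)
        return similarity >= threshold


    if __name__ == "__main__":
        # Hypothetical enrollment and command recordings.
        owner = enroll_owner(["owner_1.wav", "owner_2.wav", "owner_3.wav"])
        if accept_command("incoming_command.wav", owner):
            print("Command accepted: voice matches the enrolled owner.")
        else:
            print("Command rejected: voice does not match the enrolled owner.")

The design illustrated here is simply thresholded similarity against an enrolled reference; a deployed defense would also need to contend with recordings or synthesis of the owner's voice, and a similarity check alone says nothing about whether the audio is comprehensible to humans, which is why detecting hidden voice commands themselves is a separate part of the project's defensive work.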