Voice assistants such as Amazon Echo and Google Home have become increasingly popular with home users for home automation, entertainment, and convenience. These devices process a user's speech commands to execute actions such as playing music, making online purchases, or triggering home automation such as lights or security locks. The mapping of speech input to a text command is performed using a machine learning model. In this study, we explore how voice assistants could be exploited: genuine audio commands are manipulated so that an attacker can trigger alternative responses from the voice assistant. We present a small-scale study examining misinterpretations made by voice assistants. We also study how secure users perceive their voice devices to be, and their approach to security and privacy.
McCarthy, A., Gaster, B., & Legg, P. (2020, June). Shouting through letterboxes: A study on attack susceptibility of voice assistants. Paper presented at the IEEE International Conference on Cyber Security and the Protection of Digital Services (Cyber Science 2020).