Infrasonic scene fingerprinting for authenticating speaker location

Title: Infrasonic scene fingerprinting for authenticating speaker location
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Aono, K., Chakrabartty, S., Yamasaki, T.
Conference Name: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Keywords: acceleration-based cepstral features, Acoustic Filtering, Acoustic Fingerprints, ambient infrasound, authenticating speaker location, authentication, Cepstral analysis, classifier, composability, feature extraction, feature set analysis, fingerprint identification, frequency ranges, human auditory range, Human Behavior, infrasonic region, infrasonic scene fingerprinting, infrasonic signatures, Infrasound, localization, mobile computing, mobile devices, Mobile handsets, pubcrawl, Resiliency, robust navigation cues, scene recognition rates, smart phones, smartphone recordings, smartphones, speaker recognition, standard smartphone recording, Training, ultra-low frequency cues
Abstract: Ambient infrasound, with frequency ranges well below 20 Hz, is known to carry robust navigation cues that can be exploited to authenticate the location of a speaker. Unfortunately, many mobile devices, such as smartphones, have been optimized to work in the human auditory range, thereby suppressing information in the infrasonic region. In this paper, we show that these ultra-low-frequency cues can still be extracted from a standard smartphone recording by using acceleration-based cepstral features. To validate our claim, we collected smartphone recordings from more than 30 different scenes and used the cues for scene fingerprinting. We report scene recognition rates in excess of 90%, and a feature-set analysis reveals the importance of the infrasonic signatures in achieving state-of-the-art recognition performance.
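The record does not spell out how the "acceleration-based cepstral features" are computed; a common reading of "acceleration" in cepstral analysis is the second-order delta (delta-delta) of frame-wise cepstral coefficients. The sketch below illustrates that interpretation with plain NumPy; the frame length, hop size, coefficient count, and regression width are illustrative assumptions, not values from the paper.

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one frame: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-10)  # small floor avoids log(0)
    return np.fft.irfft(log_mag, n=len(frame))

def delta(coeffs, width=2):
    """Regression-based delta over time (axis 0 = frames), HTK-style."""
    denom = 2 * sum(k * k for k in range(1, width + 1))
    padded = np.pad(coeffs, ((width, width), (0, 0)), mode="edge")
    out = np.zeros_like(coeffs)
    n = len(coeffs)
    for k in range(1, width + 1):
        out += k * (padded[width + k:width + k + n]
                    - padded[width - k:width - k + n])
    return out / denom

def acceleration_cepstra(signal, frame_len=1024, hop=512, n_coeffs=20):
    """Frame the signal, keep low-quefrency cepstral coefficients,
    then apply the delta operator twice ('acceleration')."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    ceps = np.array([real_cepstrum(f)[:n_coeffs] for f in frames])
    return delta(delta(ceps))
```

In practice, the low-quefrency coefficients emphasize spectral envelope information, including the sub-20 Hz band the paper targets, while the double delta captures its frame-to-frame dynamics before the features are passed to a classifier.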
DOI: 10.1109/ICASSP.2017.7952178
Citation Key: aono_infrasonic_2017