Self-supervised Moving Vehicle Tracking with Stereo Sound


Chuang Gan      Hang Zhao      Peihao Chen      David Cox      Antonio Torralba



Abstract:


Humans are able to localize objects in the environment using both visual and auditory cues, integrating information from multiple modalities into a common reference frame. We introduce a system that can leverage unlabeled audiovisual data to learn to localize objects (moving vehicles) in a visual reference frame, purely using stereo sound at inference time. Since it is labor-intensive to manually annotate the correspondences between audio and object bounding boxes, we achieve this goal by using the co-occurrence of visual and audio streams in unlabeled videos as a form of self-supervision, without resorting to the collection of ground truth annotations. In particular, we propose a framework that consists of a vision "teacher" network and a stereo-sound "student" network. During training, knowledge embodied in a well-established visual vehicle detection model is transferred to the audio domain using unlabeled videos as a bridge. At test time, the stereo-sound student network can work independently to perform object localization using just stereo audio and camera meta-data, without any visual input. Experimental results on a newly collected Auditory Vehicle Tracking dataset verify that our proposed approach outperforms several baseline approaches. We also demonstrate that our cross-modal auditory localization approach can assist in the visual localization of moving vehicles under poor lighting conditions.
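To make the teacher-student idea above concrete, here is a minimal PyTorch sketch of one cross-modal training step. It assumes the visual teacher is a frozen, off-the-shelf vehicle detector whose outputs are rasterized into a coarse localization map, and simplifies the student's objective to heatmap regression; the module names, tensor shapes, and loss are illustrative assumptions, not the implementation from the paper.

```python
# Sketch of cross-modal knowledge transfer: a stereo-sound "student"
# learns to mimic localization targets produced by a visual "teacher"
# on unlabeled video. All specifics here are illustrative assumptions.
import torch
import torch.nn as nn

class StereoSoundStudent(nn.Module):
    """Maps a 2-channel audio spectrogram to a coarse localization heatmap."""
    def __init__(self, out_h=7, out_w=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((out_h, out_w)),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, spec):                    # spec: (B, 2, freq, time)
        return self.encoder(spec).squeeze(1)    # (B, out_h, out_w)

def pseudo_labels_from_teacher(frames):
    """Placeholder for the frozen visual teacher. In practice this would run
    a pretrained vehicle detector on the video frames and convert its boxes
    into a localization map; random values stand in here for illustration."""
    return torch.rand(frames.shape[0], 7, 7)

student = StereoSoundStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
criterion = nn.MSELoss()

# One self-supervised step: the unlabeled video supplies paired frames and
# stereo audio; the teacher's output on the frames supervises the student.
frames = torch.rand(4, 3, 224, 224)         # sampled video frames
spectrograms = torch.rand(4, 2, 257, 200)   # stereo audio spectrograms
with torch.no_grad():
    targets = pseudo_labels_from_teacher(frames)
loss = criterion(student(spectrograms), targets)
loss.backward()
optimizer.step()
```

At test time only the student runs: it takes the stereo spectrogram (plus camera meta-data for mapping into the image plane) and predicts vehicle locations without any visual input.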


Video:




Paper:


Self-supervised Moving Vehicle Tracking with Stereo Sound
Chuang Gan, Hang Zhao, Peihao Chen, David Cox, Antonio Torralba
ICCV 2019
[PDF]


Dataset:


Coming soon.