“Looking at the Right Stuff” – Guided Semantic-Gaze for Autonomous Driving

Abstract

In recent years, predicting a driver’s focus of attention has been a very active area of research in the autonomous driving community. Unfortunately, existing state-of-the-art techniques achieve this by relying only on human gaze information, thereby ignoring scene semantics. We propose a novel Semantics Augmented GazE (SAGE) detection approach that captures driving-specific contextual information in addition to the raw gaze. Such a combined attention mechanism serves as a powerful tool for focusing on the relevant regions of an image frame in order to make driving both safe and efficient. Building on this, we design a complete saliency prediction framework – SAGE-Net, which refines the initial SAGE prediction by taking into account vital aspects such as distance to objects (depth), ego-vehicle speed, and pedestrian crossing intent. Exhaustive experiments conducted with four popular saliency algorithms show that in 49 of 56 (87.5%) cases – considering both the overall dataset and crucial driving scenarios – SAGE outperforms existing techniques without any additional computational overhead during training. The augmented dataset, along with the relevant code, is available as part of the supplementary material.
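The core idea behind SAGE is to augment raw human-gaze saliency with driving-relevant scene semantics before any model is trained on it. The snippet below is a minimal sketch of that fusion step, assuming gaze maps and per-object segmentation masks (e.g., from an off-the-shelf instance-segmentation model) are already available as NumPy arrays; the function name build_sage_map and the pixel-wise max fusion are illustrative assumptions, not the exact formulation from the paper.

import numpy as np

def build_sage_map(gaze_map, object_masks):
    """Fuse a raw gaze saliency map with semantic object masks.

    gaze_map: HxW float array in [0, 1], raw human-gaze saliency.
    object_masks: iterable of HxW binary masks for driving-relevant
        objects (vehicles, pedestrians, traffic signs, etc.).
    Returns an HxW float array in [0, 1] combining both cues.
    """
    semantic_map = np.zeros_like(gaze_map, dtype=np.float32)
    for mask in object_masks:
        # Union of all driving-relevant object regions.
        semantic_map = np.maximum(semantic_map, mask.astype(np.float32))
    # Pixel-wise max keeps a region salient if either the driver looked
    # at it or it contains a driving-relevant object.
    return np.maximum(gaze_map.astype(np.float32), semantic_map)

In the full SAGE-Net pipeline, this semantics-augmented map would then be further refined using depth, ego-vehicle speed, and pedestrian crossing intent, as described in the abstract above.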

Authors

Anwesan Pal
Contextual Robotics Institute,
UC San Diego
a2pal@eng.ucsd.edu

Sayan Mondal
Jacobs School of Engineering,
UC San Diego
samondal@eng.ucsd.edu

Henrik I. Christensen
Contextual Robotics Institute,
UC San Diego
hichristensen@eng.ucsd.edu

Paper

Download (PDF, 8.32MB)

Video

Code/ Data

Supplementary material, including the code and videos of the different experiments, is available at https://sites.google.com/eng.ucsd.edu/sage-net.

Acknowledgement

The authors would like to thank the Army Research Laboratory (ARL) Distributed and Collaborative Intelligent Systems and Technology (DCIST) Collaborative Technology Alliance, W911NF-10-2-0016, for supporting this research.

Citation

Copyright

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.
