HopInAndAction: a benchmark for action recognition in the cockpit of a self-driving car operating outdoors
Abstract
Self-driving cars (SDCs) are already operating in several cities around the world as part of robotaxi services, and self-driving capabilities are becoming increasingly common in high-end consumer vehicles. However, while several datasets exist to develop methods that enable a car to “see” its surroundings, few depict the actions of SDC occupants in outdoor driving conditions. This work proposes HopInAndAction, a public dataset for evaluating action classification methods on videos of people performing non-driving activities in the cockpit of an SDC. The dataset contains RGB recordings of 31 people performing 19 action classes involving 7 everyday objects (e.g., phone, newspaper, or tablet). Recordings are captured in a vehicle driving autonomously outdoors and cover a varied set of monitoring conditions, such as strong and varying illumination, as well as action occlusions caused by sub-optimal sensor placement or the subject's own body. Preliminary experiments show that HopInAndAction poses challenging monitoring conditions for the evaluated methods, which struggle to distinguish between actions with similar appearance and to identify the relevant person-object interaction in scenes containing multiple objects.