Classification of Traversable Surfaces for Navigation Combining Multiple Cues Toward a Visual-Based Learning System
Abstract
This paper describes a vision-based ground-plane classification system for autonomous indoor mobile robots that takes advantage of the synergy gained by combining multiple visual cues. A priori knowledge of the environment is important in many biological systems, operating in parallel with their mutually beneficial reactive systems. As such, a learning-model approach is taken here for the classification of the ground/object space, initialised through a new Distributed-Fusion (D-Fusion) method that captures colour and texture data using superpixels. A Markov Random Field (MRF) network is then used to classify, regularise, apply a priori constraints, and merge additional ground/object information provided by other visual cues (such as motion) to improve the resulting classification. The developed system classifies indoor ground-plane surfaces with average true-positive and false-positive rates of 90.92% and 7.78% respectively on test-set data. The system has been designed with the fusion of a variety of visual cues in mind; consequently, it can be customised to suit different situations and/or sensory architectures.
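As a rough illustration of the superpixel colour/texture feature step summarised above, the following Python sketch over-segments an image and describes each superpixel by its mean colour and a texture histogram. It is a minimal sketch only, not the paper's D-Fusion method: SLIC is assumed as a stand-in superpixel algorithm, L*a*b* means and local binary patterns are illustrative descriptors, and all function and parameter names are hypothetical.

```python
# A minimal sketch, assuming scikit-image and NumPy are available; the
# superpixel method (SLIC) and the colour/texture descriptors are
# stand-ins chosen for illustration, not the authors' implementation.
import numpy as np
from skimage import color, img_as_ubyte
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern


def superpixel_colour_texture_features(rgb_image, n_segments=200):
    """Over-segment an RGB image and summarise each superpixel by its
    mean L*a*b* colour plus a local-binary-pattern texture histogram."""
    segments = slic(rgb_image, n_segments=n_segments, compactness=10)

    lab = color.rgb2lab(rgb_image)                  # colour cue
    gray = img_as_ubyte(color.rgb2gray(rgb_image))  # texture cue source
    P, R = 8, 1
    lbp = local_binary_pattern(gray, P=P, R=R, method="uniform")
    n_bins = P + 2                                  # uniform-LBP bin count

    labels = np.unique(segments)
    features = np.zeros((labels.size, 3 + n_bins))
    for i, label in enumerate(labels):
        mask = segments == label
        features[i, :3] = lab[mask].mean(axis=0)    # mean colour
        hist, _ = np.histogram(lbp[mask], bins=n_bins,
                               range=(0, n_bins), density=True)
        features[i, 3:] = hist                      # texture histogram
    return segments, features
```

In the pipeline the abstract describes, per-superpixel features of this kind would seed an initial ground/object labelling, which the MRF stage then regularises and merges with additional cues such as motion.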