OBJECT MODELLING, DETECTION AND LOCALISATION IN MOBILE VIDEO: A STATE-OF-THE-ART
Abstract
This report is part of the state-of-the-art deliverable of the ITEA2 project SPY “Surveillance imProved System: Intelligent situation awareness”, whose purpose is to develop new urban surveillance systems using video cameras embedded in mobile security vehicles. This report is dedicated to the problem of finding objects of interest in a video. “Object” is understood in its familiar (i.e. semantic) sense, e.g. car, tree, human, road, and the system is expected to automatically find the location of such objects in the captured video. To be consistent with the project's technological level, we exclude “developmental” approaches, in which the system does not know the objects in advance and incrementally constructs its own internal representation. We therefore assume that the system operates with a provided representation of the objects and of its environment, one that has been constructed (learned) off-line and that may evolve on-line. Such a representation includes a set of object classes that the system is then expected to recognise and localise in every image, either by assigning a label to every location in the image (a task referred to as “semantic segmentation”), or by localising, more or less precisely, instances of each class in the video and tagging every image accordingly (referred to as “semantic indexing”). We thus present a state of the art of video analysis methods for object and environment modelling and for semantic indexing or segmentation with respect to the corresponding model.
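To make the distinction between the two output formats concrete, the following is a minimal illustrative sketch, not taken from the report itself: it assumes a simple NumPy-based representation in which a segmentation result assigns a class label to every pixel, while an indexing result lists per-frame tags with approximate localisation (here, hypothetical bounding boxes).

```python
# Illustrative sketch only (assumed representation, not the report's method).
import numpy as np

CLASSES = ["background", "car", "tree", "human", "road"]  # example label set

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder video frame

# Semantic segmentation: one class label per pixel location of the frame.
segmentation = np.zeros(frame.shape[:2], dtype=np.int64)  # values index CLASSES

# Semantic indexing: per-frame tags with (more or less precise) localisation,
# approximated here by bounding boxes given as (x, y, width, height).
indexing = [
    {"label": "car", "box": (120, 200, 90, 60)},
    {"label": "human", "box": (400, 180, 40, 110)},
]
```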