YOLO-based Multi-Modal Analysis of Vineyards using RGB-D Detections
Abstract
Agricultural robotics is a rapidly growing research area, driven by the need for new practices that are more environmentally responsible. It involves a range of technologies, including autonomous vehicles, drones, and robotic arms. These systems can be equipped with sensors and cameras to gather data and perform tasks autonomously or with minimal human intervention. For robot navigation and manipulation, as well as for plant monitoring and analysis, perception is of prime importance and remains a challenging task today. For instance, visual perception using color images alone for disease detection in vineyards, such as mildew, whose symptoms manifest as small spots on or beneath the leaves, remains difficult and does not yield high detection accuracy. To extract more representative features and improve detection accuracy, other modalities must be used in addition to the Red, Green, and Blue (RGB) information of color images. In this paper, we first present the multi-modal acquisition system that we developed. It is composed of a multi-spectral (MS) camera and an RGB-D camera mounted on a mobile robot for data acquisition in a vineyard. Next, we describe the multi-modal dataset that we built from the data acquired with our system in a commercial vineyard. Finally, we implemented an early RGB and depth data fusion technique together with the YOLOv5m deep learning network to detect the main parts of the vine (leaves, branches, and grapes) on our dataset. Compared to the results obtained with RGB images alone and the same YOLOv5m architecture, our results demonstrate the benefits of adding multi-modal data fusion to the object detection pipeline. These results are encouraging and show that multi-sensor data fusion is a technique worth considering, as it can help improve grapevine disease recognition technologies.
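The abstract does not detail how the early fusion is implemented. Below is a minimal sketch in PyTorch, assuming that "early fusion" means stacking a normalized depth map onto the RGB image as a fourth input channel and widening the detector's first convolution accordingly; the function names early_fusion and adapt_first_conv are hypothetical, and initializing the depth kernel as the mean of the pretrained RGB kernels is a common heuristic, not necessarily the authors' method.

import numpy as np
import torch
import torch.nn as nn

def early_fusion(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    # Normalize depth to [0, 1] and stack it as a fourth channel
    # alongside the RGB channels: (H, W, 3) + (H, W) -> (H, W, 4).
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return np.dstack([rgb.astype(np.float32) / 255.0, d])

def adapt_first_conv(conv3: nn.Conv2d) -> nn.Conv2d:
    # Rebuild the detector's 3-channel input convolution as a
    # 4-channel one, copying the pretrained RGB kernels and
    # initializing the depth kernel as their channel-wise mean.
    conv4 = nn.Conv2d(4, conv3.out_channels,
                      kernel_size=conv3.kernel_size,
                      stride=conv3.stride,
                      padding=conv3.padding,
                      bias=conv3.bias is not None)
    with torch.no_grad():
        conv4.weight[:, :3] = conv3.weight
        conv4.weight[:, 3:] = conv3.weight.mean(dim=1, keepdim=True)
        if conv3.bias is not None:
            conv4.bias.copy_(conv3.bias)
    return conv4

With this scheme, the rest of the YOLOv5m architecture is unchanged: only the first convolution sees the extra depth channel, which is the defining property of early (input-level) fusion as opposed to fusing features from two separate backbones later in the network.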