Conference paper, 2023

Comparing a Composite Model Versus Chained Models to Locate a Nearest Visual Object

Abstract

Extracting information from geographic images and text is crucial for autonomous vehicles to determine in advance the best cell stations to connect to along their future path. Multiple artificial neural network models can address this challenge; however, there is no definitive guidance on selecting an appropriate model for such use cases. We therefore experimented with two architectures for this task: a first architecture of chained models, where each model in the chain addresses a sub-task of the overall task, and a second architecture with a single composite model that addresses the whole task. Our results showed that the two architectures achieved the same level of performance, with root mean square errors (RMSE) of 0.055 and 0.056 respectively. The findings further revealed that when the task can be decomposed into sub-tasks, the chained architecture trains twelve times faster than the composite model. Nevertheless, the composite model significantly alleviates the burden of data labeling.
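
To make the contrast between the two architectures concrete, the following minimal PyTorch sketch places a chain of two models, each covering a labeled sub-task, next to a single composite model trained end to end. The paper does not publish code; all module names (SubTaskA, SubTaskB, Composite), layer sizes, and the intermediate coordinate representation below are assumptions for illustration only, not the authors' actual design.

import torch
import torch.nn as nn

class SubTaskA(nn.Module):
    # First link of the chain (hypothetical): maps an image embedding to
    # intermediate object coordinates. Training it requires labels for
    # this intermediate sub-task.
    def __init__(self, in_dim=128, n_objects=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_objects * 2))

    def forward(self, x):
        return self.net(x)

class SubTaskB(nn.Module):
    # Second link of the chain (hypothetical): maps the intermediate
    # coordinates to the location of the nearest object.
    def __init__(self, n_objects=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_objects * 2, 32), nn.ReLU(),
                                 nn.Linear(32, 2))

    def forward(self, coords):
        return self.net(coords)

class Composite(nn.Module):
    # Single end-to-end model covering the whole task; only the final
    # location needs to be labeled.
    def __init__(self, in_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 32), nn.ReLU(),
                                 nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

x = torch.randn(8, 128)                # batch of image embeddings (assumed shape)
chained = SubTaskB()(SubTaskA()(x))    # chained architecture: two models in sequence
composite = Composite()(x)             # composite architecture: one model
print(chained.shape, composite.shape)  # both: torch.Size([8, 2])

The sketch mirrors the trade-off reported in the abstract: the chain needs labels for the intermediate sub-task output but can train each link independently, whereas the composite model only needs labels for the final location.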
No file deposited

Dates and versions

hal-04460516, version 1 (15-02-2024)

Identifiers

  • HAL Id: hal-04460516, version 1

Cite

Antoine Le Borgne, Xavier Marjou, Fanny Parzysz, Tayeb Lemlouma. Comparing a Composite Model Versus Chained Models to Locate a Nearest Visual Object. 25th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Dec 2023, Taichung City, Taiwan. ⟨hal-04460516⟩