A Parallel Implementation of a 3D Reconstruction Algorithm for Real-Time Vision
Abstract
Artificial vision requires a large amount of computing power, especially when operating on the fly on digital video streams. For these applications, real-time processing is needed to allow the system to interact with its environment, as in robotic applications or man/machine interfaces. Two broad classes of solutions have been used to balance application needs against the constraints of real-time processing: degrading the algorithms or using dedicated hardware architectures such as FPGAs or GPUs. These strategies have been effective because of the specific properties of the images and the structure of the associated algorithms. However, the constant and rapid progression of general-purpose computer performance makes these specific solutions less and less attractive. Development time and cost now argue in favor of architectures based on standard components. During the last ten years, the use of such solutions has increased with the generalization of clusters built from off-the-shelf personal computers. But this type of solution has rarely been used for complex vision applications operating on the fly. This paper evaluates this opportunity by proposing a cluster architecture dedicated to real-time vision applications. We describe the hardware architecture of such a solution – justifying the technological choices against the application requirements and the current state of the art – and then the associated software architecture. The validity of the approach is demonstrated through the description and performance evaluation of a real-time 3D reconstruction application.