Conference paper, 2024

An approach for dataset extension for object detection in artworks using open-vocabulary models

Abstract

When studying objects depicted in paintings, art historians identify their significance, symbolic meaning, and historical context. Analyzing large artistic collections is very time-consuming for these specialists, and the search could be eased by modern object detectors; however, object detectors perform poorly on artistic images. This problem can be addressed by fine-tuning them on specialized annotated datasets. In this paper, we explore the use of open-vocabulary foundation models for dataset annotation in a semi-automated manner. We propose an approach to annotating artistic datasets for the object detection task that starts from a small set of images annotated at image level and combines the Vision Transformer for Open-World Localization (OWL-ViT2) model, the YOLO object detector, and the Approximate Nearest Neighbors Oh Yeah (ANNOY) algorithm. We extend the existing DEArt dataset by 97.2% and introduce a way of adding new classes without exhaustive annotation. With the extended version of the dataset, we achieve a 12.2% average increase in the mAP@0.5 metric on the test data compared to the model trained on the original dataset.
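As a rough illustration of the kind of pipeline the abstract describes, the sketch below (not the authors' exact implementation) shows two of the ingredients mentioned: turning image-level labels into box proposals with an open-vocabulary detector (OWLv2 via the Hugging Face transformers library) and indexing crop embeddings with ANNOY so that similar detections can be retrieved and reviewed together. The checkpoint name, text queries, score threshold, and embedding dimension are illustrative assumptions, not the paper's settings.

import numpy as np
import torch
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection
from annoy import AnnoyIndex

# Open-vocabulary detection: image-level labels are used as text queries,
# and the model returns candidate boxes for each query.
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

image = Image.open("painting.jpg").convert("RGB")
queries = [["a halo", "a crown", "a sword"]]  # illustrative image-level labels

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to boxes in pixel coordinates; keep only confident proposals.
target_sizes = torch.tensor([image.size[::-1]])
detections = processor.post_process_object_detection(
    outputs, threshold=0.2, target_sizes=target_sizes)[0]
for box, score, label in zip(detections["boxes"], detections["scores"], detections["labels"]):
    print(queries[0][label], round(score.item(), 3), [round(v, 1) for v in box.tolist()])

# ANNOY index over per-crop embeddings for fast nearest-neighbour retrieval.
# Random vectors stand in here for real visual embeddings of the detected crops.
dim = 512
crop_embeddings = [np.random.rand(dim).tolist() for _ in range(100)]
index = AnnoyIndex(dim, "angular")
for i, emb in enumerate(crop_embeddings):
    index.add_item(i, emb)
index.build(10)                         # 10 trees; more trees improve recall at the cost of build time
similar = index.get_nns_by_item(0, 10)  # the 10 crops most similar to crop 0
print(similar)

In a semi-automated setting such as the one the abstract outlines, grouping visually similar crops this way lets an annotator accept or reject many candidate boxes at once instead of inspecting each image individually.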

Main file
ECCV_AI4DH2024_Yemelianenko.pdf (17.09 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04820558, version 1 (05-12-2024)

Identifiers

  • HAL Id: hal-04820558, version 1

Cite

Tetiana Yemelianenko, Iuliia Tkachenko, Tess Masclef, Mihaela Scuturici, Serge Miguet. An approach for dataset extension for object detection in artworks using open-vocabulary models. Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Sep 2024, Milan, Italy. ⟨hal-04820558⟩