Benchmarking losses for deep learning laxness detection - Archive ouverte HAL
Preprint / Working paper, Year: 2017

Benchmarking losses for deep learning laxness detection

Abstract

In object detection, the classical rule for accepting a match between a ground-truth area and a detection is a Jaccard ratio (intersection over union) of at least 0.5. But do users really care about this rule? In particular, can it be handled well by deep network training? And if not, would users accept some relaxation of the problem if it helps the training? In this paper, we benchmark several strategies for performing object detection with an end-to-end deep network when the metric is relaxed. Our preliminary results on several public datasets show that, under these relaxations, some strategies greatly facilitate training and yield networks that outperform the same network learned with classical strategies.
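To make the 0.5 Jaccard rule mentioned in the abstract concrete, the short Python sketch below (not taken from the paper; box coordinates, variable names, and the (x_min, y_min, x_max, y_max) box format are illustrative assumptions) computes the intersection over union of two boxes and checks it against the 0.5 acceptance threshold.

    def jaccard(box_a, box_b):
        """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
        ix_min = max(box_a[0], box_b[0])
        iy_min = max(box_a[1], box_b[1])
        ix_max = min(box_a[2], box_b[2])
        iy_max = min(box_a[3], box_b[3])
        inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    ground_truth = (10, 10, 60, 60)   # hypothetical ground-truth box
    detection = (30, 30, 80, 80)      # hypothetical detection
    print(jaccard(ground_truth, detection) >= 0.5)  # False: IoU is about 0.22, so the match is rejected

Relaxing the metric, as studied in the paper, amounts to loosening this acceptance criterion during training.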
Main file: raw.pdf (1.34 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01412086 , version 1 (07-12-2016)
hal-01412086 , version 2 (17-07-2017)
hal-01412086 , version 3 (31-07-2017)
hal-01412086 , version 4 (18-10-2017)
hal-01412086 , version 5 (14-12-2017)
hal-01412086 , version 6 (15-12-2017)

Identifiers

  • HAL Id: hal-01412086, version 3

Cite

Adrien Chan-Hon-Tong. Benchmarking losses for deep learning laxness detection. 2017. ⟨hal-01412086v3⟩
294 Views
399 Downloads
