Conference paper, 2024

CARTE: Pretraining and Transfer for Tabular Learning

Abstract

Pretrained deep-learning models are the go-to solution for images or text. However, for tabular data the standard is still to train tree-based models. Indeed, transfer learning on tables hits the challenge of data integration: finding correspondences in the entries (entity matching), where different words may denote the same entity, and correspondences across columns (schema matching), which may come in different orders, names, etc. We propose a neural architecture that does not need such correspondences. As a result, we can pretrain it on background data that has not been matched. The architecture (CARTE, for Context Aware Representation of Table Entries) uses a graph representation of tabular (or relational) data to process tables with different columns, string embeddings of entries and column names to model an open vocabulary, and a graph-attentional network to contextualize entries with column names and neighboring entries. An extensive benchmark shows that CARTE facilitates learning, outperforming a solid set of baselines including the best tree-based models. CARTE also enables joint learning across tables with unmatched columns, enhancing a small table with bigger ones. CARTE opens the door to large pretrained models for tabular data.
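The abstract sketches the core mechanism: each table row becomes a small graph whose nodes are the cell entries and whose edges carry column-name embeddings, and attention then contextualizes each entry with its column name and neighbors. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: string_embedding is a hash-based stand-in for the pretrained language embeddings CARTE builds on, row_to_graph and GraphAttentionLayer are hypothetical names, and the whole architecture is condensed here to a single attention pass.

```python
import torch
import torch.nn as nn

EMB_DIM = 32  # toy embedding size, chosen for illustration


def string_embedding(text: str, dim: int = EMB_DIM) -> torch.Tensor:
    """Hash-based character-trigram vector: a crude stand-in for the
    pretrained string embeddings the paper relies on."""
    vec = torch.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    return vec / (vec.norm() + 1e-8)


def row_to_graph(row: dict, dim: int = EMB_DIM):
    """Encode one table row as a star graph: a featureless center node
    plus one node per cell value; each edge carries the embedding of
    the corresponding column name, so the schema travels with the data."""
    nodes = [torch.zeros(dim)]                         # center node
    edges = []
    for col, val in row.items():
        nodes.append(string_embedding(str(val), dim))  # entry node
        edges.append(string_embedding(col, dim))       # column-name edge
    return torch.stack(nodes), torch.stack(edges)


class GraphAttentionLayer(nn.Module):
    """Single attention pass from the center node over the entry nodes,
    with keys and values conditioned on the column-name edges."""

    def __init__(self, dim: int = EMB_DIM):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(2 * dim, dim)  # node and edge features, concatenated
        self.v = nn.Linear(2 * dim, dim)

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        center, entries = nodes[:1], nodes[1:]
        kv_in = torch.cat([entries, edges], dim=-1)   # (n_cols, 2*dim)
        q, k, v = self.q(center), self.k(kv_in), self.v(kv_in)
        attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v                               # contextualized row vector


# Two rows with different schemas still map into the same vector space,
# which is what lets unmatched tables be processed by one model.
layer = GraphAttentionLayer()
for row in ({"Name": "Bordeaux", "Country": "France"},
            {"City": "Vienna", "Population": "1900000"}):
    nodes, edges = row_to_graph(row)
    print(layer(nodes, edges).shape)  # torch.Size([1, 32])
```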
Main file: kim24d.pdf (1.1 MB)
Origin: Files produced by the author(s)
Dates and versions

hal-04596816, version 1 (31-05-2024)
hal-04596816, version 2 (24-09-2024)

Identifiers

  • HAL Id: hal-04596816, version 2

Cite

Myung Jun Kim, Léo Grinsztajn, Gaël Varoquaux. CARTE: Pretraining and Transfer for Tabular Learning. Forty-first International Conference on Machine Learning, ICML 2024, Jul 2024, Vienna, Austria. ⟨hal-04596816v2⟩