BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model with Non-textual Features for CTR Prediction - Archive ouverte HAL
Conference paper Year: 2023

BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model with Non-textual Features for CTR Prediction

Dong Wang
Kavé Salamatian
Yunqing Xia
Weiwei Deng
Qi Zhang

Abstract

Although deep pre-trained language models have shown promising benefits in a wide range of industrial scenarios, including Click-Through-Rate (CTR) prediction, integrating pre-trained language models, which handle only textual signals, into a prediction pipeline with non-textual features remains challenging. Up to now, two directions have been explored to integrate multimodal inputs when fine-tuning pre-trained language models. The first fuses the output of the language model and the non-textual features through an aggregation layer, resulting in an ensemble framework in which the cross-information between textual and non-textual inputs is learned only in the aggregation layer. The second splits and transforms non-textual features into fine-grained tokens that are fed, along with textual tokens, directly into the transformer layers of the language model; however, by adding extra tokens, this approach increases the complexity of both learning and inference. In this paper we propose a novel framework, BERT4CTR, that addresses these limitations. The new framework leverages a Uni-Attention mechanism to benefit from the interactions between non-textual and textual features, while keeping training and inference time-costs low through dimensionality reduction. We demonstrate through comprehensive experiments on both public and commercial data that BERT4CTR significantly outperforms the state-of-the-art approaches for handling multimodal inputs and is applicable to CTR prediction. Compared with the ensemble framework, BERT4CTR brings more than 0.4% AUC gain on both tested data sets with only a 7% increase in latency.
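To make the abstract's description more concrete, the sketch below illustrates the general idea of a one-directional ("uni") attention step in which non-textual feature embeddings attend to the token representations produced by a pre-trained language model, followed by a dimensionality-reducing projection before the CTR head. This is a minimal illustration only: the class name, layer sizes, and exact wiring are assumptions and do not reproduce the actual BERT4CTR architecture described in the paper.

```python
import torch
import torch.nn as nn


class UniAttentionCTRSketch(nn.Module):
    """Illustrative sketch (not the paper's implementation): non-textual
    feature embeddings attend one-way to textual token representations from a
    pre-trained language model, then pass through a dimensionality-reduction
    projection and a CTR prediction head."""

    def __init__(self, text_dim=768, num_features=8, feat_dim=64, reduced_dim=128):
        super().__init__()
        # Project each non-textual feature embedding into the text hidden space.
        self.feat_proj = nn.Linear(feat_dim, text_dim)
        # Uni-attention: non-textual queries over textual keys/values only.
        self.uni_attn = nn.MultiheadAttention(text_dim, num_heads=8, batch_first=True)
        # Dimensionality reduction before the prediction head.
        self.reduce = nn.Linear(text_dim * num_features, reduced_dim)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(reduced_dim, 1))

    def forward(self, text_hidden, feat_emb):
        # text_hidden: (batch, seq_len, text_dim) hidden states from a language model
        # feat_emb:    (batch, num_features, feat_dim) non-textual feature embeddings
        q = self.feat_proj(feat_emb)
        # One-directional cross-attention: features query the textual tokens.
        attended, _ = self.uni_attn(q, text_hidden, text_hidden)
        flat = attended.flatten(start_dim=1)
        return torch.sigmoid(self.head(self.reduce(flat)))


# Toy usage with random tensors standing in for LM outputs and feature embeddings.
if __name__ == "__main__":
    model = UniAttentionCTRSketch()
    text_hidden = torch.randn(4, 32, 768)
    feat_emb = torch.randn(4, 8, 64)
    print(model(text_hidden, feat_emb).shape)  # torch.Size([4, 1])
```

Because the attention is one-directional, the textual tokens themselves are not re-encoded for every set of non-textual features, which is consistent with the abstract's claim of low training and inference overhead relative to feeding extra tokens through the transformer layers.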
Main file
3580305.3599780.pdf (1.34 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04219746 , version 1 (27-09-2023)

Identifiers

Cite

Dong Wang, Kavé Salamatian, Yunqing Xia, Weiwei Deng, Qi Zhang. BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model with Non-textual Features for CTR Prediction. KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Aug 2023, Long Beach, CA, USA. pp. 5039-5050, ⟨10.1145/3580305.3599780⟩. ⟨hal-04219746⟩

Collections

UNIV-SAVOIE LISTIC