Journal article in Neurocomputing, 2023

Alignment and stability of embeddings: Measurement and inference improvement

Abstract

Representation learning (RL) methods learn objects' latent embeddings in which information is preserved by pairwise distances. Since distances are invariant to certain linear transformations, one may obtain different embeddings that preserve the same information. In dynamic systems, a temporal difference between embeddings may therefore reflect either genuine (in)stability of the system or mere misalignment of the embeddings caused by such arbitrary transformations. In the literature, embedding alignment has not been defined formally, explored theoretically, or analyzed empirically. Here, we explore embedding alignment and its components, provide the first formal definitions, propose novel metrics to measure alignment and stability, and demonstrate their suitability through synthetic experiments. Real-world experiments show that both static and dynamic RL methods are prone to producing misaligned embeddings, and that such misalignment degrades the performance of dynamic network inference tasks. By ensuring alignment, prediction accuracy improves by up to 90% for static and by up to 40% for dynamic RL methods.
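The invariance described in the abstract is easy to demonstrate. The sketch below is a minimal illustration, assuming orthogonal transformations and a Procrustes-style correction; it shows how two embeddings that encode identical distances can nonetheless differ until explicitly aligned. The function `procrustes_align` and the toy arrays are hypothetical names for illustration, not the paper's actual metrics or procedure.

```python
import numpy as np

def procrustes_align(source, target):
    """Rotate/reflect `source` onto `target` (orthogonal Procrustes).

    Both arrays are (n, d) embeddings of the same n objects. The
    orthogonal map R minimizing ||source @ R - target||_F is
    R = U @ Vt, where U, _, Vt = svd(source.T @ target). Being
    orthogonal, R preserves all pairwise distances within `source`.
    """
    u, _, vt = np.linalg.svd(source.T @ target)
    return source @ (u @ vt)

# Toy check: an orthogonally transformed copy of an embedding carries
# identical distance information but looks different until aligned.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 16))                  # "reference" embedding
q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
y = x @ q                                       # misaligned copy of x
print(np.allclose(procrustes_align(y, x), x))   # True
```

In a dynamic setting, an alignment step of this kind would be applied before comparing embeddings across time steps, so that any residual difference can be attributed to the system itself rather than to an arbitrary transformation.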

Dates and versions

hal-04156184, version 1 (07-07-2023)

Identifiers

HAL Id: hal-04156184
DOI: 10.1016/j.neucom.2023.126517

Cite

Furkan Gürsoy, Mounir Haddad, Cécile Bothorel. Alignment and stability of embeddings: Measurement and inference improvement. Neurocomputing, 2023, 126517. ⟨10.1016/j.neucom.2023.126517⟩. ⟨hal-04156184⟩