Predicting Spotify audio features from Last.fm tags
Abstract
Music information retrieval (MIR) is an interdisciplinary research field that focuses on the extraction, processing, and knowledge discovery of information contained in music. While previous studies have utilized Spotify audio features and Last.fm tags as input values for classification tasks, such as music genre recognition, their potential as target values has remained unexplored. In this article, we address this notable gap in the research landscape by proposing a novel approach to predict Spotify audio features based on a set of Last.fm tags. By predicting audio features, we aim to explore the relationship between subjective perception and concrete musical features, shedding light on patterns and hidden correlations between how music is perceived, consumed, and discovered. Additionally, the predicted audio features can be leveraged in recommendation systems to provide users with explainable recommendations, bridging the gap between algorithmic suggestions and user understanding. Our experiments involve training models such as GPT-2, XGBRegressor, and a Bayesian Ridge regressor to predict Spotify audio features from Last.fm tags. Through our findings, we contribute to the advancement of MIR research by demonstrating that Spotify audio features can serve as target values predicted from Last.fm tags, paving the way for future research on the connection between subjective and objective music characterization. Our approach holds promise for both listeners and researchers, offering new insights into the intricate relationship between perception and the audio signal in music.
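As an illustration only (not the authors' implementation), the sketch below shows how such a regression could be set up in scikit-learn: Last.fm tags are binarized into a tag-indicator matrix and a Bayesian Ridge regressor is fit against a single audio feature. The tag lists and the "danceability" values are made-up placeholders; the paper also considers GPT-2 and XGBRegressor, which are not shown here.

```python
# Minimal sketch, assuming a bag-of-tags encoding and one numeric Spotify
# audio feature as the regression target. All data below is placeholder.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.linear_model import BayesianRidge

# Hypothetical Last.fm tag sets for four tracks.
tags = [
    ["electronic", "dance", "house", "upbeat"],
    ["acoustic", "folk", "mellow"],
    ["pop", "dance", "party"],
    ["ambient", "instrumental", "calm"],
]
# Hypothetical target values for one audio feature (e.g. danceability in [0, 1]).
danceability = [0.85, 0.30, 0.80, 0.15]

# Encode each track as a binary tag-indicator vector.
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(tags)

# Fit a Bayesian Ridge regressor mapping tag vectors to the audio feature.
reg = BayesianRidge()
reg.fit(X, danceability)

# Predict the feature for a new track described only by its tags.
X_new = mlb.transform([["pop", "dance", "upbeat"]])
print(reg.predict(X_new))
```

In practice one regressor per audio feature (or a multi-output wrapper) would be trained on a much larger tag vocabulary, but the overall pipeline of tag encoding followed by regression stays the same.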
Domains
Computer Science [cs]
Main file
Predicting_Music_Sound_Features_from_Last_fm_Tags_preprint.pdf (471.01 KB)
Origin: Files produced by the author(s)