The Chordinator: Chord progression modeling and generation using transformers
Abstract
This paper presents a transformer model trained on a large dataset of chord sequences. The dataset spans several styles, including jazz, rock, pop, blues, and film music. We apply three successive tokenization/encoding strategies: 1) all chords are treated as unique tokens; 2) chords are dynamically formatted as tuples describing root, nature, extensions, and slash bass; 3) an extension of the second strategy that adds a style token and extends the positional embedding layer of the transformer architecture. We analyze the generated sequences by comparing them with the training dataset using trigrams, which reveals common chord progressions and duplications of source material. We also compare the generated sequences from a musical perspective, rating their plausibility with respect to the training data. The third strategy yielded lower validation loss and better musical consistency in the generated progressions.
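To make the tuple-based encoding (strategy 2) and the trigram comparison concrete, the sketch below shows one plausible way to implement them in Python. The regular expression, field names, and example progressions are our own illustrative assumptions, not the paper's actual code or vocabulary.

```python
# Minimal sketch (assumptions noted) of a tuple-based chord encoding and a
# trigram comparison between generated and training chord sequences.
import re
from collections import Counter
from typing import List, Tuple


def encode_chord(symbol: str) -> Tuple[str, str, str, str]:
    """Split a chord symbol such as 'Cmaj7/E' into (root, nature, extension, slash bass).

    The regular expression and field layout are illustrative, not the paper's exact scheme.
    """
    match = re.match(r"^([A-G][#b]?)([^/\d]*)([\d()#b+]*)(?:/([A-G][#b]?))?$", symbol)
    if match is None:
        raise ValueError(f"Unrecognized chord symbol: {symbol}")
    root, nature, extension, bass = match.groups()
    return (root, nature, extension, bass or "")


def trigrams(sequence: List[str]) -> Counter:
    """Count chord trigrams, as used to compare generated and training sequences."""
    return Counter(zip(sequence, sequence[1:], sequence[2:]))


if __name__ == "__main__":
    training = ["Dm7", "G7", "Cmaj7", "A7/C#", "Dm7", "G7", "Cmaj7"]
    print([encode_chord(c) for c in training])

    # Overlap between generated and training trigrams reveals common
    # progressions and potential duplications of source material.
    generated = ["Dm7", "G7", "Cmaj7", "Fmaj7", "Dm7", "G7", "Cmaj7"]
    shared = trigrams(training) & trigrams(generated)
    print(shared.most_common(3))
```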