JamendoMaxCaps: A Large Scale Music-caption Dataset with Imputed Metadata

Title: JamendoMaxCaps: A Large Scale Music-caption Dataset with Imputed Metadata
Publication Type: Conference Paper
Year of Publication: 2025
Authors: Roy A., Liu R., Lu T., Herremans D.
Conference Name: Proceedings of IJCNN, Rome, Italy
Other Numbers: arXiv:2502.07461
Abstract

We introduce JamendoMaxCaps, a large-scale music-caption dataset featuring over 200,000 freely licensed instrumental tracks from the renowned Jamendo platform. The dataset includes captions generated by a state-of-the-art captioning model, enhanced with imputed metadata. We also introduce a retrieval system that leverages both musical features and metadata to identify similar songs, which are then used to fill in missing metadata using a local large language model (LLLM). This approach allows us to provide a more comprehensive and informative dataset for researchers working on music-language understanding tasks. We validate this approach quantitatively with five different measurements. By making the JamendoMaxCaps dataset publicly available, we provide a high-quality resource to advance research in music-language understanding tasks such as music retrieval, multimodal representation learning, and generative music models.

URL: https://arxiv.org/abs/2502.07461
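
The abstract describes a two-step pipeline: retrieve similar songs by combining musical features with existing metadata, then ask a local LLM to impute the missing metadata fields from those neighbours. The sketch below illustrates that general idea only; it is not the paper's implementation, and every name in it (combined_similarity, impute_metadata, llm_generate, the alpha weight) is a hypothetical placeholder, not something taken from the paper or its code.

```python
import numpy as np

# Hypothetical pre-computed inputs: one audio-feature embedding per track
# (e.g. from a music encoder) and a simple numeric metadata vector per track.
# All names here are illustrative placeholders, not from the paper.

def combined_similarity(query_audio, query_meta, audio_bank, meta_bank, alpha=0.5):
    """Score candidate tracks by a weighted mix of audio and metadata cosine similarity."""
    a = audio_bank @ query_audio / (
        np.linalg.norm(audio_bank, axis=1) * np.linalg.norm(query_audio) + 1e-9
    )
    m = meta_bank @ query_meta / (
        np.linalg.norm(meta_bank, axis=1) * np.linalg.norm(query_meta) + 1e-9
    )
    return alpha * a + (1 - alpha) * m

def impute_metadata(track, neighbours, llm_generate):
    """Prompt a local LLM to suggest values for a track's missing metadata fields,
    using the metadata of its retrieved neighbours as context."""
    missing = [k for k, v in track["metadata"].items() if v in (None, "")]
    context = "\n".join(str(n["metadata"]) for n in neighbours)
    prompt = (
        "Given these similar tracks:\n" + context +
        f"\nSuggest plausible values for the missing fields {missing} of:\n" +
        str(track["metadata"])
    )
    # llm_generate is a stand-in for whatever local LLM interface is used.
    return llm_generate(prompt)
```

In a usage scenario, one would score all candidate tracks with combined_similarity, take the top-k as neighbours, and pass them to impute_metadata; the weighting between audio and metadata similarity (alpha above) is an assumed design knob, as the paper's actual retrieval formulation is not reproduced here.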