EmoMV: Affective Music-Video Correspondence Learning Datasets for Classification and Retrieval

Title: EmoMV: Affective Music-Video Correspondence Learning Datasets for Classification and Retrieval
Publication Type: Journal Article
Year of Publication: 2022
Authors: Pham Q.-H., Herremans D., Roig G.
Journal: Information Fusion
Abstract

Studies in affective audio-visual correspondence learning require ground-truth data to train, validate, and test models. However, the number of available datasets with accompanying benchmarks is still limited. In this paper, we create a collection of three datasets (called EmoMV) for affective correspondence learning between the music and video modalities. The first two datasets (called EmoMV-A and EmoMV-B, respectively) are constructed from music video segments drawn from other available datasets. The third, called EmoMV-C, is created from music videos that we collected ourselves from YouTube. The music-video pairs in our datasets are annotated as matched or mismatched in terms of the emotions they convey. The emotions are annotated by humans in the EmoMV-A dataset, while in the EmoMV-B and EmoMV-C datasets they are predicted using a pretrained deep neural network. A user study is carried out to evaluate the accuracy of the “matched” and “mismatched” labels in the EmoMV dataset collection. In addition to creating the three new datasets, we also propose a benchmark deep neural network model for binary affective music-video correspondence classification. This benchmark model is then modified for affective music-video retrieval. Extensive experiments are carried out on all three datasets of the EmoMV collection. Experimental results demonstrate that our proposed model outperforms state-of-the-art approaches on both the binary classification and retrieval tasks. We envision that our newly created dataset collection, together with the proposed benchmark models, will facilitate advances in affective computing research.
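
To make the binary correspondence task concrete, the following is a minimal, illustrative sketch of a two-stream classifier over precomputed music and video features. The feature dimensions, encoder sizes, and fusion-by-concatenation choice are assumptions for demonstration only; they do not reproduce the benchmark architecture proposed in the paper.

```python
# Illustrative sketch only (PyTorch): a generic two-stream binary
# correspondence classifier. Dimensions and fusion strategy are assumptions,
# not the paper's benchmark model.
import torch
import torch.nn as nn

class CorrespondenceClassifier(nn.Module):
    def __init__(self, music_dim=128, video_dim=512, embed_dim=256):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.music_encoder = nn.Sequential(
            nn.Linear(music_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        self.video_encoder = nn.Sequential(
            nn.Linear(video_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        # Fuse the two embeddings and predict matched (1) vs. mismatched (0).
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, music_feat, video_feat):
        m = self.music_encoder(music_feat)
        v = self.video_encoder(video_feat)
        logits = self.classifier(torch.cat([m, v], dim=-1))
        return logits.squeeze(-1)  # raw logits; apply sigmoid for probabilities

if __name__ == "__main__":
    model = CorrespondenceClassifier()
    music = torch.randn(4, 128)              # batch of hypothetical music features
    video = torch.randn(4, 512)              # batch of hypothetical video features
    labels = torch.tensor([1., 0., 1., 0.])  # matched / mismatched ground truth
    loss = nn.BCEWithLogitsLoss()(model(music, video), labels)
    print(loss.item())
```

For the retrieval variant described in the abstract, the same two encoders could instead be trained so that matched music-video pairs lie close in the shared embedding space, with ranking done by embedding similarity; the exact adaptation used by the authors is described in the paper itself.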

Dataset available at: https://zenodo.org/record/7011072#.YzpFoqTmgzZ
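
As a convenience, the record's contents can be listed programmatically. This is a minimal sketch assuming the public Zenodo REST API (`https://zenodo.org/api/records/<record_id>`); the file names and layout inside the EmoMV record are not described here, so the script only prints whatever the record exposes.

```python
# Minimal sketch: list the files of the EmoMV Zenodo record via the Zenodo
# REST API. The response structure ("files", "key", "links.self") follows the
# documented Zenodo records API; adjust if the API schema differs.
import requests

RECORD_ID = "7011072"  # EmoMV record from the link above

resp = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
resp.raise_for_status()
record = resp.json()

for f in record.get("files", []):
    # Each entry typically provides a file name ("key") and a download link.
    print(f.get("key"), f.get("links", {}).get("self"))
```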

URL: https://www.sciencedirect.com/science/article/abs/pii/S1566253522001725
DOI: 10.1016/j.inffus.2022.10.002