Leveraging LLM Embeddings for Cross Dataset Label Alignment and Zero Shot Music Emotion Prediction

Do you listen to music when you are down? Emotion and music are intrinsically connected. Yet we still struggle to model this. Why?

One of the reasons is that we only have a handful of small datasets, each using a different set of emotion labels. The AMAAI Lab set out to overcome this by developing a zero-shot alignment method that can merge different datasets using LLM embeddings.

Paper: https://arxiv.org/abs/2410.11522
Github: https://github.com/AMAAI-Lab/cross-dataset-emotion-alignment

Authors: Renhang Liu, Abhinaba Roy, Ph.D., Dorien Herremans

Title: Leveraging LLM Embeddings for Cross Dataset Label Alignment and Zero Shot Music Emotion Prediction

Abstract:
In this work, we present a novel method for music emotion recognition that leverages Large Language Model (LLM) embeddings for label alignment across multiple datasets and zero-shot prediction on novel categories. First, we compute LLM embeddings for emotion labels and apply non-parametric clustering to group similar labels across multiple datasets with disjoint label sets. We then use these cluster centers to map music features (MERT) into the LLM embedding space. To further strengthen the model, we introduce an alignment regularization that encourages the dissociation of MERT embeddings belonging to different clusters, which improves the model's ability to adapt to unseen datasets. We demonstrate the effectiveness of our approach by performing zero-shot inference on a new dataset, showcasing its ability to generalize to unseen labels without additional training.
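
For intuition, here is a minimal sketch of the pipeline described in the abstract. It uses a sentence-transformer as a stand-in for the LLM embedder, MeanShift as one possible non-parametric clustering choice, a random placeholder in place of real MERT features, and an untrained projection head. All of these choices and names are illustrative assumptions, not the authors' exact implementation; see the GitHub repo for the real code.

```python
# Illustrative sketch only: embed emotion labels, cluster them, and do
# zero-shot prediction by matching a projected audio feature against the
# embeddings of unseen labels.
import torch
import torch.nn as nn
from sklearn.cluster import MeanShift
from sentence_transformers import SentenceTransformer

# 1) Embed emotion labels from two datasets with disjoint vocabularies.
#    (A sentence-transformer stands in for the paper's LLM embedder.)
labels_a = ["happy", "sad", "angry", "relaxed"]
labels_b = ["joyful", "melancholic", "furious", "calm"]
text_model = SentenceTransformer("all-MiniLM-L6-v2")
label_emb = text_model.encode(labels_a + labels_b)   # (n_labels, d_text)

# 2) Non-parametric clustering groups semantically similar labels,
#    e.g. "happy" and "joyful" should land in the same cluster.
clusterer = MeanShift()
cluster_ids = clusterer.fit_predict(label_emb)
centers = clusterer.cluster_centers_                 # (n_clusters, d_text)

# 3) A projection head maps MERT audio features into the label embedding
#    space; in the paper this is trained so each track's projected feature
#    lands near its label's cluster center (plus an alignment regularizer).
d_mert, d_text = 768, label_emb.shape[1]
projector = nn.Sequential(nn.Linear(d_mert, 512), nn.ReLU(),
                          nn.Linear(512, d_text))

# 4) Zero-shot inference: embed an unseen label set and pick the label
#    whose embedding is closest to the projected audio feature.
unseen_labels = ["tender", "aggressive"]
unseen_emb = torch.tensor(text_model.encode(unseen_labels))
mert_feat = torch.randn(1, d_mert)                   # placeholder MERT feature
z = projector(mert_feat)
sims = nn.functional.cosine_similarity(z, unseen_emb)
print("predicted:", unseen_labels[int(sims.argmax())])
```

Because both the labels and the audio end up in the same text embedding space, swapping in a new label vocabulary only requires embedding the new labels; no retraining of the projection is needed for zero-shot prediction.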