Presenting MidiCaps and MIRFLEX at ISMIR in San Francisco

Exciting news from the AMAAI Lab at this year’s ISMIR conference in San Francisco! We were thrilled to showcase some of our research:

MidiCaps
Presented by Jan Melechovsky and Abhinaba Roy, MidiCaps is the first large-scale open MIDI dataset with text captions. This resource will enable the development of the very first text-to-MIDI models (stay tuned -- our lab's model is coming soon!).

- Dataset: https://huggingface.co/datasets/amaai-lab/MidiCaps
- Paper: https://arxiv.org/abs/2406.02255
- GitHub: https://github.com/AMAAI-Lab/MidiCaps

MIRFLEX
Abhinaba Roy and Anuradha Chopra also presented MIRFLEX, a collaborative library for music feature extraction. The tool is designed to aid researchers and developers working in music information retrieval (MIR).

- Paper: https://arxiv.org/abs/2411.00469
- GitHub: https://github.com/AMAAI-Lab/mirflex
