IMMA - An online system for visualising audio- and score-based features

IMMA is an interactive multimodal music analysis framework developed by Prof. dr. Ching-Hua Chuan and Prof. dr. Dorien Herremans. It was first presented at the IEEE International Conference on Semantic Computing (ICSC) in San Diego, and an extension module for valence/arousal emotion display was presented at the Audio Mostly conference in London.

The current version implements modules for both tension and arousal/valence data; additional modules are expected to be implemented soon.

Try out a demo for tension or arousal/valence, and see examples of alignment from the papers:

Herremans D., Chuan C.-H. 2017. A multi-modal platform for semantic music analysis: visualizing audio- and score-based tension. 11th IEEE International Conference on Semantic Computing (ICSC 2017). San Diego, January 2017.

and

Herremans D., Yang S., Chuan C.-H., Barthet M., Chew E. 2017. IMMA-Emo: A Multimodal Interface for Visualising Score- and Audio-synchronised Emotion Annotations. Audio Mostly 2017. ACM. London, UK, August 2017.

Contact me if you are interested in developing a new module for IMMA.

Abstract:

Musicologists, music cognition scientists and others have long studied music in all of its facets. During the last few decades, research in both score and audio technology has opened the doors for automated, or (in many cases) semi-automated analysis. There remains a big gap, however, between the field of audio (performance) and score-based systems. In this research, we propose a web-based Interactive system for Multi-modal Music Analysis (IMMA) that provides musicologists with an intuitive interface for a joint analysis of performance and score. As an initial use case, we implemented a tension analysis module in the system. Tension is a semantic characteristic of music that directly shapes the music experience and thus forms a crucial topic for researchers in musicology and music cognition. The module includes methods for calculating tonal tension (from the score) and timbral tension (from the performance). An audio-to-score alignment algorithm based on dynamic time warping was implemented to automate the synchronization between the audio and score analysis. The resulting system was tested on three performances (violin, flute, and guitar) of Paganini’s Caprice No. 24 and four piano performances of Beethoven’s Moonlight Sonata. We statistically analyzed the results of tonal and timbral tension and found correlations between them. A clustering algorithm was implemented to find segments of music (both within and between performances) with similar shapes in their tension curves. These similar segments are visualized in IMMA. By displaying selected audio and score characteristics together with musical score following in sync with the performance playback, IMMA offers a user-friendly, intuitive interface to bridge the gap between audio and score analysis.
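
To give a flavour of the alignment step, here is a minimal audio-to-score alignment sketch using chroma features and dynamic time warping. This is not the exact IMMA implementation: the feature choice (chroma), the library (librosa), and the file names are illustrative assumptions, and the score is assumed to be available as synthesized audio (e.g. rendered from MIDI).

```python
# Minimal DTW-based audio-to-score alignment sketch (assumptions noted above).
import numpy as np
import librosa

HOP = 512  # analysis hop size in samples (assumption)

def chroma(path, sr=22050):
    """Load audio and compute a chroma feature matrix (12 x n_frames)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=HOP), sr

# Hypothetical file names; replace with a real performance recording
# and a score rendered to audio.
perf_chroma, sr = chroma("performance.wav")
score_chroma, _ = chroma("score_synth.wav")

# DTW over the cosine distance between chroma frames.
# D is the accumulated cost matrix; wp is the optimal warping path as
# (performance_frame, score_frame) index pairs, returned in reverse order.
D, wp = librosa.sequence.dtw(X=perf_chroma, Y=score_chroma, metric="cosine")
wp = wp[::-1]  # put the path in chronological order

# Convert matched frame indices to seconds, giving a lookup table from
# performance time to score time (usable to sync playback with the score).
perf_times = librosa.frames_to_time(wp[:, 0], sr=sr, hop_length=HOP)
score_times = librosa.frames_to_time(wp[:, 1], sr=sr, hop_length=HOP)
for p_t, s_t in list(zip(perf_times, score_times))[:5]:
    print(f"performance {p_t:6.2f}s  <->  score {s_t:6.2f}s")
```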
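The clustering of tension-curve segments can be sketched in a similar spirit. The paper states only that a clustering algorithm finds segments (within and between performances) with similar tension-curve shapes; the concrete choices below, fixed-length overlapping windows, z-normalization so that only contour shape matters, and k-means via scikit-learn, are assumptions for illustration, not the published algorithm.

```python
# Sketch: cluster segments of a tension curve by contour shape (assumptions above).
import numpy as np
from sklearn.cluster import KMeans

def shape_segments(curve, win=32, hop=8):
    """Cut a 1-D tension curve into overlapping windows and z-normalize
    each window so clustering compares shape rather than absolute level."""
    segs, starts = [], []
    for s in range(0, len(curve) - win + 1, hop):
        w = np.asarray(curve[s:s + win], dtype=float)
        std = w.std()
        if std > 1e-8:                 # skip flat (uninformative) segments
            segs.append((w - w.mean()) / std)
            starts.append(s)
    return np.array(segs), starts

# Hypothetical input: one tension value per analysis frame.
rng = np.random.default_rng(0)
tension = np.cumsum(rng.standard_normal(500))  # stand-in for a real tension curve

segments, starts = shape_segments(tension)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(segments)

# Segments sharing a label have similar tension contours and could be
# highlighted together in a visualization such as IMMA's.
for lbl in range(4):
    idx = [starts[i] for i in np.flatnonzero(labels == lbl)[:3]]
    print(f"cluster {lbl}: example segment starts at frames {idx}")
```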

Acknowledgement
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 658914.