Modeling Musical Context with Word2vec

This page provides the source code and audio examples used in the paper:

Herremans D., Chuan C.H. Modeling Musical Context with Word2vec. Proceedings of the International Workshop on Deep Learning and Music. Anchorage, Alaska. May 18-19, 2017.

Abstract:

We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of Beethoven's Moonlight Sonata was altered by replacing slices based on context similarity. The resulting music shows that slices selected based on word2vec context similarity also have a relatively short tonal distance from the original slices.
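To make the approach concrete, the sketch below shows how such a model could be set up with the gensim library: polyphonic slices are treated as word-like tokens, a skip-gram model with negative sampling is trained on them, the embeddings are projected to 2-D with t-SNE, and similar slices are queried for replacement. The toy corpus, token encoding, and hyperparameter values are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch: word2vec (skip-gram + negative sampling) over musical "slices",
# followed by a t-SNE projection of the learned embedding space.
# Assumptions: slices are encoded as strings of sounding pitch classes
# (e.g. "0_4_7" for a C major triad); corpus and hyperparameters are placeholders.

from gensim.models import Word2Vec
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Each "piece" is a sequence of slice tokens (hypothetical toy data).
corpus = [
    ["0_4_7", "0_4_7", "5_9_0", "7_11_2", "0_4_7"],
    ["9_0_4", "2_5_9", "4_8_11", "9_0_4"],
]

# Skip-gram architecture (sg=1) with negative sampling (negative > 0).
model = Word2Vec(
    sentences=corpus,
    vector_size=64,   # embedding dimensionality (assumed value)
    window=4,         # context window measured in slices (assumed value)
    sg=1,             # use the skip-gram architecture
    negative=10,      # number of negative samples per positive pair
    min_count=1,
)

# Project the embedding space to 2-D for visualization.
tokens = list(model.wv.index_to_key)
vectors = model.wv[tokens]
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), tok in zip(coords, tokens):
    plt.annotate(tok, (x, y))
plt.title("t-SNE projection of slice embeddings")
plt.show()

# Candidate replacement slices for a given slice can be queried by
# embedding similarity, e.g.:
print(model.wv.most_similar("0_4_7", topn=3))
```

In this sketch, replacing a slice with one of its nearest neighbors in the embedding space mirrors the context-similarity substitution described in the abstract, though the paper's actual slice encoding and selection procedure may differ.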

Original excerpt from the Moonlight Sonata:

Transformed excerpt generated with the word2vec model: