Talk on music generation at Jukedeck, London
Today I gave a Lunch & Learn Seminar at Jukedeck in London. Jukedeck is a London-based tech startup that uses artificial intelligence to revolutionise the way people and companies make and consume music. In my talk, I discussed the MorpheuS system and some deep learning models of music that I recently developed in collaboration with Prof. C.-H. Chuan.
Title:
MorpheuS: structured music generation with pattern detection and tension
Abstract:
Most music generation systems are based on statistical models and rules. A drawback of these systems is their inability to generate music with global structure or recurrent patterns. Music without long-term coherence will fail to hold the listener’s attention. In my current EU project, MorpheuS, optimisation algorithms are used to constrain the structure of generated music by incorporating pattern detection techniques. Music is then optimised to fit a specified level of tonal tension that changes dynamically throughout the piece. Deep learning methods are currently being incorporated into the system to further improve the sound quality. One of these models implements a word2vec approach on polyphonic music to capture semantic similarity through musical context. A second model uses a Tonnetz representation in combination with a convolutional autoencoder to construct an LSTM model. Both models can be used to calculate transition probabilities of generated music in the MorpheuS system.
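To give a feel for the first model: the word2vec approach treats a polyphonic piece as a sequence of slices of simultaneously sounding pitches and learns an embedding for each slice from its musical context. The snippet below is a minimal sketch of that idea using gensim's skip-gram implementation; the slice encoding and the toy two-piece corpus are invented for illustration and are not taken from the paper.

```python
# A minimal sketch (not the authors' code) of word2vec on polyphonic
# music: each "word" is a slice of sounding pitch classes, and skip-gram
# embeddings capture semantic similarity through musical context.
from gensim.models import Word2Vec

# Hypothetical toy corpus: each piece is a sequence of slices, each slice
# encoded as a string of its sounding pitch classes (0-11).
pieces = [
    ["0-4-7", "0-4-7", "5-9-0", "7-11-2", "0-4-7"],  # I I IV V I in C major
    ["2-6-9", "7-11-2", "2-6-9"],                    # I IV I in D major
]

# Skip-gram model: slices that occur in similar contexts end up with
# similar embeddings, a proxy for tonal/semantic similarity.
model = Word2Vec(
    sentences=pieces,
    vector_size=32,  # embedding dimensionality
    window=4,        # context window, in slices
    min_count=1,
    sg=1,            # skip-gram rather than CBOW
)

# Slices from related harmonies should lie close in the embedding space.
print(model.wv.similarity("0-4-7", "5-9-0"))  # I vs. IV in C major
```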
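The second model can be sketched along similar lines: pitch classes are laid out on a Tonnetz-style grid (fifths along one axis, major thirds along the other), a convolutional autoencoder compresses each grid into a short code, and an LSTM models the sequence of codes. The grid size, layer sizes, and training objective below are illustrative assumptions rather than the published architecture.

```python
# A minimal sketch, under assumed shapes, of the Tonnetz + convolutional
# autoencoder + LSTM pipeline described above.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

GRID = (12, 12, 1)  # assumed Tonnetz grid size per time slice

def tonnetz_grid(pitch_classes, rows=12, cols=12):
    """Place active pitch classes on a grid ordered by fifths (columns)
    and major thirds (rows); cells with a sounding pitch class are 1."""
    grid = np.zeros((rows, cols, 1), dtype="float32")
    for r in range(rows):
        for c in range(cols):
            pc = (c * 7 + r * 4) % 12  # fifths across, thirds down
            if pc in pitch_classes:
                grid[r, c, 0] = 1.0
    return grid

example = tonnetz_grid({0, 4, 7})  # a C major triad as one slice

# Convolutional autoencoder: the encoder compresses a 12x12 slice
# into a 16-dimensional code.
inp = keras.Input(shape=GRID)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
code = layers.Dense(16, activation="relu", name="code")(x)
x = layers.Dense(6 * 6 * 8, activation="relu")(code)
x = layers.Reshape((6, 6, 8))(x)
x = layers.UpSampling2D(2)(x)
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# LSTM over sequences of encoded slices, trained to predict the next
# slice's code; how well a generated piece fits this model can then be
# used as a score inside an optimisation loop such as MorpheuS.
seq_model = keras.Sequential([
    keras.Input(shape=(None, 16)),  # variable-length sequence of codes
    layers.LSTM(32),
    layers.Dense(16),
])
seq_model.compile(optimizer="adam", loss="mse")
```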
References:
Herremans D., Chuan C.-H. 2017. Modeling Musical Context with Word2vec. First International Workshop on Deep Learning and Music, Anchorage, US, May. 1:11-18.
Herremans D., Chew E. 2016. MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles. IEEE TENCON, Singapore, November.