MorpheuS

MorpheuS is an automatic music generation system that I am developing as part of my Marie Skłodowska-Curie individual postdoctoral fellowship, on which I am currently working with Elaine Chew. The full title of the project is: MorpheuS: Hybrid Machine Learning – Optimization Techniques To Generate Structured Music Through Morphing And Fusion.

MorpheuS currently implements pattern detection algorithms and a tonal tension model to constrain polyphonic music generation. The software for the tension model is available here. The details of the algorithm are described in:

Herremans, D., & Chew, E. (2016). MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles. In Proceedings of IEEE TENCON, Singapore. In press.
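
As a rough illustration of the idea (not the actual MorpheuS implementation), the sketch below scores a candidate piece by how closely its tension curve follows a template's and by how well it preserves repeated-pattern constraints. All names here (Piece, tension_profile, pattern_penalty) are hypothetical, and the crude pitch-spread "tension" measure is only a stand-in for the tonal tension model described in the paper.

```python
# Hypothetical sketch: scoring a candidate piece against a template's
# tension profile and a set of detected repeated-pattern constraints.
# These names and measures are illustrative placeholders, not MorpheuS code.

from dataclasses import dataclass
from typing import List, Tuple

Note = Tuple[int, float, float]  # (MIDI pitch, onset in beats, duration in beats)

@dataclass
class Piece:
    notes: List[Note]

def tension_profile(piece: Piece, window: float = 1.0) -> List[float]:
    """Very rough stand-in for a tonal tension model: pitch spread per
    time window. The real model uses proper tonal tension measures."""
    if not piece.notes:
        return []
    end = max(onset + dur for _, onset, dur in piece.notes)
    profile, t = [], 0.0
    while t < end:
        sounding = [p for p, onset, dur in piece.notes
                    if onset < t + window and onset + dur > t]
        profile.append(float(max(sounding) - min(sounding)) if sounding else 0.0)
        t += window
    return profile

def profile_distance(a: List[float], b: List[float]) -> float:
    """Mean absolute difference between two tension profiles."""
    n = min(len(a), len(b))
    return sum(abs(a[i] - b[i]) for i in range(n)) / n if n else 0.0

def pattern_penalty(piece: Piece, pattern_pairs: List[Tuple[slice, slice]]) -> float:
    """Penalty for note spans that the template says should repeat
    (detected patterns) but that differ in pitch in the candidate."""
    penalty = 0.0
    for first, second in pattern_pairs:
        a = [p for p, _, _ in piece.notes[first]]
        b = [p for p, _, _ in piece.notes[second]]
        penalty += sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    return penalty

def objective(candidate: Piece, template_profile: List[float],
              pattern_pairs: List[Tuple[slice, slice]],
              w_tension: float = 1.0, w_pattern: float = 10.0) -> float:
    """Lower is better: match the template's tension curve while keeping
    the repeated-pattern structure intact."""
    return (w_tension * profile_distance(tension_profile(candidate), template_profile)
            + w_pattern * pattern_penalty(candidate, pattern_pairs))
```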

The MorpheuS Haydn 110 piece was recently featured in the PRISM concert series at Stanford (see the announcement in The Examiner) and at the With/Without concert in London. The lecture given by Elaine Chew is available below (the MorpheuS segment starts at 23:00):

Video: "PRISM Series. Elaine Chew: Stolen", from Elaine Chew on Vimeo.

The full score of the generated Haydn 110 piece:

Score: MorpheuS Haydn 110

The MorpheuS Bach score:

Score: MorpheuS Bach

Project summary: State-of-the-art music generation systems (Continuator, OMax, Mimi) produce music that sounds good on a note-to-note level but lacks the structure and direction necessary for long-term coherence. To tackle this problem, we propose to generate compositions based on structural templates at varying hierarchical levels. Our novel approach deploys machine-learning methods in an optimization context to morph existing pieces into new ones and to fuse different styles. We aim to develop a framework that combines machine learning techniques for learning musical style with a powerful optimization method, the variable neighbourhood search (VNS) algorithm, to generate music (see the sketch below). This approach allows the learned model to incorporate a wide variety of constraints, including those for preserving long-term coherence and structure. It promises to effect a step-change in automatic music generation by moving the field in the new direction of generating structured music using hybrid machine learning–optimization techniques. The applicant is an operations researcher and musician, making her ideally suited to this work. A first step combines her VNS music generation algorithm with machine learning methods to ensure proper style evaluation. In previous work, the applicant has shown that VNS outperforms genetic algorithms when generating counterpoint with a rule-based objective function. In a preliminary study, the applicant has demonstrated the effectiveness of using machine learning techniques as evaluation metrics for optimization methods. The applicant has extensive web development experience; to reach the widest possible audience, the resulting system will be made available on an interactive website where users can morph and fuse musical pieces. This work is situated in the area of digital media, which had a European consumer expenditure of over €33 billion in 2011 and is projected to grow. Music generation has direct applications in game music, interactive arts, and stock music for advertising and video.
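
To make the optimization side of the summary concrete, here is a minimal, hypothetical sketch of a variable neighbourhood search loop driven by an objective such as the one sketched earlier. The neighbourhood moves (change_one_pitch, swap_two_pitches) are illustrative placeholders that assume the Piece structure from the previous sketch; they are not the project's actual code.

```python
# Hypothetical sketch of a variable neighbourhood search (VNS) loop,
# the optimization method named in the project summary.

import copy
import random

def change_one_pitch(piece):
    """Neighbourhood 1: re-draw the pitch of a single note."""
    new = copy.deepcopy(piece)
    i = random.randrange(len(new.notes))
    _, onset, dur = new.notes[i]
    new.notes[i] = (random.randint(48, 84), onset, dur)
    return new

def swap_two_pitches(piece):
    """Neighbourhood 2: swap the pitches of two notes."""
    new = copy.deepcopy(piece)
    i, j = random.sample(range(len(new.notes)), 2)
    (pi, oi, di), (pj, oj, dj) = new.notes[i], new.notes[j]
    new.notes[i], new.notes[j] = (pj, oi, di), (pi, oj, dj)
    return new

def vns(initial, objective, neighbourhoods=(change_one_pitch, swap_two_pitches),
        max_iters=10_000):
    """Basic VNS: perturb with the current neighbourhood, keep the move if it
    improves the objective, otherwise switch to the next neighbourhood."""
    best, best_score = initial, objective(initial)
    k = 0
    for _ in range(max_iters):
        candidate = neighbourhoods[k](best)
        score = objective(candidate)
        if score < best_score:
            best, best_score = candidate, score
            k = 0                              # improvement: restart from first neighbourhood
        else:
            k = (k + 1) % len(neighbourhoods)  # no improvement: try a different move type
    return best, best_score
```

In the actual project, the objective would combine a learned style model with the structural constraints, and the neighbourhoods would operate on richer musical moves than single-pitch changes; the sketch only shows the overall search structure.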