MorpheuS is an automatic music generation system developed jointly by Prof. Dorien Herremans and Prof. Elaine Chew as part of Dr. Herremans' Marie Skłodowska-Curie individual postdoctoral fellowship. The full title of the project is: MorpheuS: Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion.
MorpheuS morphs a template piece into a newly generated piece with long-term structure and predefined emotional content. Full details are available in the following scientific papers:
A short video on how it works:
Project summary: State-of-the-art music generation systems (Continuator, OMax, Mimi) produce music that sounds good on a note-to-note level but lacks the critical structure and direction necessary for long-term coherence. To tackle this problem, we propose to generate compositions based on structural templates at varying hierarchical levels. Our novel approach deploys machine-learning methods in an optimization context to morph existing pieces into new ones and to fuse different styles. We aim to develop a framework that combines machine learning techniques that learn style with a powerful optimization method, the variable neighbourhood search (VNS) algorithm, for generating music. This approach allows the learned model to incorporate a wide variety of constraints, including those for preserving long-term coherence and structure. It promises to effect a step change in automatic music generation by moving the field in the new direction of generating structured music using hybrid machine learning-optimization techniques.

The applicant is an operations researcher and musician, making her ideally suited to this work. A first step combines her VNS music generation algorithm with machine learning methods to ensure proper style evaluation. In previous work, the applicant has shown that VNS outperforms genetic algorithms when generating counterpoint with a rule-based objective function. In a preliminary study, the applicant has demonstrated the effectiveness of using machine learning techniques as evaluation metrics for optimization methods. The applicant also has extensive web development experience; to reach the widest possible audience, the resulting system will be made available on an interactive website where users can morph and fuse musical pieces. This work is situated in the area of digital media, with a European consumer expenditure of over €33 billion in 2011, projected to increase.
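The core idea above — an optimization loop that searches over candidate pieces while a learned model scores them — can be sketched in a few lines. The sketch below is a minimal, generic variable neighbourhood search, not the MorpheuS implementation: the `leap_penalty` objective is a toy stand-in for the learned style model, and the melody representation (a list of MIDI pitches) and the two neighbourhood moves are illustrative assumptions.

```python
import random

def vns(initial, neighborhoods, objective, max_iters=500, seed=0):
    """Minimal variable neighbourhood search: try a move from the current
    neighbourhood; on improvement restart from the first neighbourhood,
    otherwise advance to the next one."""
    rng = random.Random(seed)
    best, best_cost = list(initial), objective(initial)
    k = 0
    for _ in range(max_iters):
        candidate = neighborhoods[k](best, rng)
        cost = objective(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
            k = 0  # improvement: go back to the first neighbourhood
        else:
            k = (k + 1) % len(neighborhoods)
    return best, best_cost

# Toy objective (NOT the learned style model): penalise melodic leaps
# larger than a whole tone between consecutive notes.
def leap_penalty(melody):
    return sum(max(0, abs(a - b) - 2) for a, b in zip(melody, melody[1:]))

# Two illustrative neighbourhood moves on a list of MIDI pitches.
def change_one(melody, rng):
    m = list(melody)
    m[rng.randrange(len(m))] = rng.randrange(60, 72)  # random pitch in one octave
    return m

def swap_two(melody, rng):
    m = list(melody)
    i, j = rng.randrange(len(m)), rng.randrange(len(m))
    m[i], m[j] = m[j], m[i]
    return m

melody = [60, 71, 62, 69, 64, 67]           # a leapy starting melody
better, cost = vns(melody, [change_one, swap_two], leap_penalty)
```

In MorpheuS itself the objective is far richer — it combines learned style statistics with constraints that pin the piece to the template's long-term structure — but the search skeleton is the same: propose local changes, score, and keep improvements.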
Music generation in digital media has direct applications in game music, interactive arts, and stock music for advertising and videos.
A selection of short pieces played by Prof. Elaine Chew. See the full playlist on YouTube here.
This set of 3+3 pieces by MorpheuS was presented for the first time at the Partnerships concert, 23 May 2017, curated by Bob Sturm and Oded Ben-Tal. The music was recorded live by Ebenezer Acquah at St Dunstan and All Saints Church in Stepney, London and performed by Elaine Chew:
The MorpheuS Haydn 110 piece was recently featured in the PRISM concert series at Stanford (see the announcement in The Examiner) and at the With/Without concert in London. The accompanying lecture talk given by Prof. Elaine Chew is available below (the MorpheuS segment starts at 23:00):
MorpheuS was featured in the documentary 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around 17 minutes in), followed by an interview about my research (21 minutes in).
Algorithms are a new digital species in our world, streamlining every aspect of our lives. But we are only just beginning to question their control - is it too late to rage against the machine?
About the show:
There are quiet codes woven into the fabric of modern life, silently crunching mountains of big data and helping us solve problems based on the results they derive. They are known as algorithms - complex webs of code that can determine anything, from whether you should watch this show to telling police departments whether you're going to commit a crime, and where. But can the math, and the computers that calculate it, be wrong?
Dorien Herremans is an Assistant Professor at the Singapore University of Technology and Design, with a joint appointment at the Institute of High Performance Computing, A*STAR. Before that she was a Marie Skłodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London, where she worked on the project "MorpheuS: Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion". Dr. Herremans received her Ph.D. in Applied Economics on the topic of computer generation and classification of music through operations research methods. She graduated as a commercial engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. Dr. Herremans' research interests include novel applications at the intersection of machine learning, optimization, AI, and music. She was co-organizer of the First International Workshop on Deep Learning for Music, held jointly with IJCNN.
Elaine Chew is Professor of Digital Media in the School of Electronic Engineering and Computer Science at Queen Mary University of London (QMUL), where she is affiliated with the Centre for Digital Music. Prior to joining QMUL in 2011, she was a tenured associate professor at the University of Southern California, where she was the inaugural holder of the Viterbi Early Career Chair. She was a recipient of the US Presidential Early Career Award in Science and Engineering and the NSF CAREER Award, and was the Edward, Frances, and Shirley B. Daniels Fellow at the Radcliffe Institute for Advanced Study. She is also an alum of the NAS Kavli Frontiers of Science and NAE Frontiers of Engineering Symposia. Her research centers on the mathematical and computational modeling of music structure, musical prosody, music cognition, and ensemble interaction. She is the author of over 100 peer-reviewed chapters and articles, and author and editor of 8 books and journal special issues on music and computing. She has served as program and general chair of the International Conference on Music Information Retrieval (2008) and of Mathematics and Computation in Music (2009, 2015), and was invited convenor of the Mathemusical Conversations international workshop in 2015. She was awarded PhD and SM degrees in operations research at the Massachusetts Institute of Technology, and a BAS in mathematical and computational sciences (hon) and music (distinction) at Stanford University.