Highlights/Upcoming events

First workshop on deep learning and music in Anchorage (Proceedings available online)

This week (May 18-19), I co-organized the workshop on deep learning for music with Prof. Ching-Hua Chuan in Anchorage, Alaska. The workshop was part of the International Joint Conference on Neural Networks (IJCNN) and featured invited speakers from Google Brain, A*STAR and Pandora.

Over 50 people participated in the workshop and there were some really interesting discussions on this exciting new field. The full Proceedings are available online.

Seminar at Singapore University of Technology and Design

This week I am giving a seminar at Singapore University of Technology and Design about some of my latest research, entitled: 'Machine Learning and Optimization for Cutting Edge Applications in Digital Music'.
27 April 2017 @ 10:30 am - 11:30 am
SUTD Think Tank 20 (Building 2, Level 3)
8 Somapah Road, Singapore 487372
Category: Seminar Series ISTD


An online system for visualising audio- and score-based features

A few weeks ago, Prof. Ching-Hua Chuan (University of North Florida) presented a system that we have developed together called IMMA at the IEEE International Conference on Semantic Computing in San Diego. IMMA is a multi-modal interface that displays audio and score-based features of a performance and score in sync. The current version implements a module for tension; additional modules are expected to follow soon. Check out a demo or read more.

Workshop on Deep Learning and Music

International Workshop on Deep Learning for Music
In conjunction with the 2017 International Joint Conference on Neural Networks
(IJCNN 2017)

14-19 May (1 day), Anchorage
Read more

There has been tremendous interest in deep learning across many fields of study. Recently, these techniques have gained popularity in the field of music. Projects such as Magenta (Google Brain's music generation project), Jukedeck and others testify to their potential.

Research visit to Singapore at A*STAR and IEEE TENCON

Last week I was in Singapore to present the MorpheuS research I have performed on my Marie-Curie grant at IEEE TENCON. The title of my talk was "MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles". I was also invited to give a seminar on "Machine learning and optimization applied to digital music" at the High Performance Computing Institute at A*STAR in Singapore. A short abstract of the talk at A*STAR:

PhD thesis: Compose=Compute - Computer Generation And Classification Of Music Through Operations Research Methods

In 2014 I defended my PhD thesis, Compose=Compute - Computer Generation And Classification Of Music Through Operations Research Methods, at the University of Antwerp under the supervision of Prof. dr. Kenneth Sörensen. In this research, I developed automatic music generation systems using metaheuristic optimization techniques, combined with rules and machine learning. Other topics included composer classification, an Android app that generates music in the style of a chosen (or mixed) composer, and an app that performs dance hit prediction based on an audio file.


ZiF Workshop From Computational Creativity to Creativity Science

This week I am attending the ZiF Workshop in Bielefeld, Germany. This interesting workshop brings together computer scientists and cognitive scientists to develop a joint view of, and approach to, computational creativity.

From Computational Creativity to Creativity Science

Date: 19 - 22 September 2016
Convenors: Kai-Uwe Kühnberger (Osnabrück, GER), Emilios Cambouropoulos (Thessaloniki, GRE), Oliver Kutz (Bozen, ITA)

Tension models in music generation at ICMPC

This week, I'm in San Francisco for the ICMPC conference. There are many interesting topics addressed in this biennial conference on music perception and cognition, including cognitive models of music and music & health. My talk presented the model for tonal tension and the resulting music generation system MorpheuS that I have developed together with Elaine Chew at the Centre for Digital Music in London.

Talk and performance of MorpheuS Haydn at TENOR, Cambridge (UK)

Today I gave a talk at TENOR, The Second International Conference on Technologies for Music Notation and Representation, in Cambridge (UK). There were many interesting sessions on the latest music representation technologies. My presentation focused on the tonal tension model, which I have developed together with Elaine Chew, and how it is implemented in the polyphonic music generation algorithm called MorpheuS.

Invited seminar at IRCAM, Paris about the MorpheuS music generation project

I will be in Paris next week to give an invited seminar on the MorpheuS music generation project at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris. Read the full announcement on Ircam's website.

Title: Morphing Music According to a Long-term Tension Profile and Detected Patterns

When: Wednesday 20th April, 2016 at 12h
Where: Ircam, salle Stravinsky

Talk at ORBEL30 At the Catholic University of Louvain (UCL)

I'll be giving a talk on "Music generation with structural constraints: an operations research approach" at ORBEL30, the annual conference of SOGESCI-BVWB, the Belgian Operational Research (OR) Society. SOGESCI-BVWB is a member of EURO, the association of European OR societies, and the Belgian representative of IFORS, the International Federation of OR Societies. The conference is hosted by the Catholic University of Louvain (UCL) at Louvain-la-Neuve on January 27th and 28th.

Vice.com reports on dance hit prediction model

The magazine Vice.com reported on the research I conducted with David Martens and Kenneth Sörensen at the University of Antwerp on dance hit prediction.

"Hit songs are getting so predictable. No, literally. The recipe for what makes a pop or dance song a hit has apparently become so formulaic, a computer algorithm can predict with above-average accuracy the likelihood that a song will top the charts."

Seminar at Universidad Carlos III de Madrid

I'll be giving a seminar for PhD students at the Department of Computer Science (Research Group SCALAB) of the Universidad Carlos III de Madrid next Friday. The topic will be how to combine music and operations research. From their website:

Title: Music and operations research: applications in automatic music generation and dance hit prediction.

Presenter: Dorien Herremans (Queen Mary University of London)


Talk at Imperial College London on generating structured music with local search optimisation and machine learning

I'll be giving a talk on generating structured music with local search optimisation and machine learning. The seminar is organised by the Department of Computing, at Imperial College London and will take place on the 14th of October at 15:30.

Seminar talk
Department of Computing, Imperial College London

Generating structured music with local search optimisation and machine learning
Dorien Herremans, PhD (MSCA Fellow) Queen Mary University of London


Talk at Harvey Mudd College

On my way through Southern California, I was invited by Prof. Robert Keller, who works on Impro-Visor, to give a talk at Harvey Mudd College in Claremont, CA on July 23rd. The talk focused on my Marie Curie project (MorpheuS): "An automatic composition system for structured music based on optimisation and machine learning".