- Research assistant / postdoc jobs in Music/Audio and AI
- New article in Sensors: Deep Neural Network-Based Respiratory Pathology Classification Using Cough Sounds
- aiSTROM -- A roadmap for developing a successful AI strategy
- Book chapter on Musical stylometry: Characterisation of music
- Joint internship with Sounders Music
- Meet My Lab - podcast from Euraxess
- New roadmap paper on the role of music technology for health care and well-being
- Three IJCNN papers from the AMAAI lab this year!
- Keynote at DMRN on controllable music generation
- New paper on Underwater Acoustic Communication Receiver Using Deep Belief Network
I'm happy to announce that I just got certified as an official NVIDIA Deep Learning Institute instructor and ambassador.
"The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve the world’s most challenging problems with deep learning."
I'm pleased to announce the latest article I wrote together with Prof. Elaine Chew on MorpheuS, published in IEEE Transactions on Affective Computing. The paper explains the inner workings of MorpheuS, a music generation system that can generate pieces with a fixed pattern structure and a given tension profile.
Herremans D., Chew E. 2017. MorpheuS: Generating structured music with constrained patterns and tension. IEEE Transactions on Affective Computing, PP(99) (In Press).
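To give a rough flavour of the idea of generating music under hard pattern constraints while matching a target tension curve, here is a deliberately simplified toy sketch. It is not MorpheuS itself: the real system uses a variable neighbourhood search and a tonal tension model based on the spiral array, whereas this illustration uses a stand-in "tension" proxy (absolute melodic interval) and a plain first-improvement local search. All function names and parameters here are invented for illustration.

```python
import random

def toy_tension(seq):
    """Toy tension proxy: absolute melodic interval at each step.
    (A stand-in for a real tonal tension model; purely illustrative.)"""
    return [abs(b - a) for a, b in zip(seq, seq[1:])]

def cost(seq, target):
    """Squared distance between the sequence's tension curve and the target."""
    return sum((t - u) ** 2 for t, u in zip(toy_tension(seq), target))

def enforce_pattern(seq, pattern):
    """Hard pattern constraint: each index group must hold equal pitches,
    e.g. [(0, 4)] forces note 5 to repeat note 1."""
    seq = list(seq)
    for group in pattern:
        for i in group[1:]:
            seq[i] = seq[group[0]]
    return seq

def local_search(target, length, pattern, pitches=range(60, 73),
                 iters=2000, seed=0):
    """First-improvement local search: mutate one pitch, re-apply the
    pattern constraint, keep the change if the tension fit improves."""
    rng = random.Random(seed)
    notes = list(pitches)
    seq = enforce_pattern([rng.choice(notes) for _ in range(length)], pattern)
    best = cost(seq, target)
    for _ in range(iters):
        cand = list(seq)
        cand[rng.randrange(length)] = rng.choice(notes)
        cand = enforce_pattern(cand, pattern)
        c = cost(cand, target)
        if c < best:
            seq, best = cand, c
    return seq, best

target = [2, 2, 7, 7, 2, 2, 0]       # desired tension profile
pattern = [(0, 4), (1, 5), (2, 6)]   # notes 5-7 must repeat notes 1-3
seq, fit = local_search(target, 8, pattern)
```

Because the pattern constraint is re-applied after every mutation, any returned sequence satisfies it exactly, while the search only optimises the (soft) tension fit.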
One of the co-founders of my favourite music notation software (MuseScore) visited Singapore last week to give a COIL seminar entitled 'MuseScore: Inside A Successful Open Source Project For Musicians'. For those of you who missed it, you can see a recorded version of the talk here:
A new journal article that I wrote together with Prof. Ching-Hua Chuan and Prof. Elaine Chew has been published. The article surveys current music generation systems from a functional point of view, giving a clear overview of current challenges and opportunities in the field. It covers systems ranging from game music to real-time improvisation systems and emotional movie music generation systems.
Last week the National University of Singapore hosted the International Society for Music Information Retrieval conference in lovely Suzhou, China. It featured a ton of interesting presentations by established academics in the field, including Prof. Elaine Chew (who also talked about MorpheuS), Roger Dannenberg and others, as well as industry leaders such as Jeffrey C. Smith (Smule) and E. Humphrey (Spotify).
Update: while this particular grant has expired, there are some other opportunities available in my lab.
I am looking for a strong PhD candidate in music and machine learning at Singapore University of Technology and Design (SUTD). SUTD is a relatively new university, founded in collaboration with MIT, that has a strong interdisciplinary focus on design. The available PhD position is at the department of Information Systems Technology and Design (ISTD).
Last week, I started as an Assistant Professor at the Information Systems Technology and Design (ISTD) pillar of Singapore University of Technology and Design (SUTD), a university founded in collaboration with MIT.
"The Singapore University of Technology and Design (SUTD) is the fourth autonomous university to be established in Singapore. SUTD's mission is to advance knowledge and nurture technically grounded leaders and innovators to serve societal needs..."
I am pleased to announce that I have been elevated to the rank of Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). Only 8 percent of IEEE members have attained the level of Senior Member. Senior Membership is an honor bestowed only on those who have made significant contributions to the profession. They are recognised for their technical and professional excellence, achievements, publications and course development or technical direction in IEEE-designated fields. I am particularly active in the IEEE Computational Intelligence Society.
Today I gave a Lunch & Learn Seminar at Jukedeck in London. Jukedeck is a London-based tech startup that is using artificial intelligence to revolutionise the way people and companies make and consume music. In my talk, I discussed the MorpheuS system and some deep learning models for modeling music that I have recently developed in collaboration with Prof. Ching-Hua Chuan.
This week (May 18-19th), I co-organized the workshop on deep learning for music with Prof. Ching-Hua Chuan in Anchorage, Alaska. The workshop was part of the International Joint Conference on Neural Networks (IJCNN) and featured invited speakers from Google Brain, A*STAR and Pandora.
Over 50 people participated in the workshop and there were some really interesting discussions on this exciting new field. The full Proceedings can be found online, and include:
This week I am giving a seminar at Singapore University of Technology and Design about some of my latest research, entitled: 'Machine Learning and Optimization for Cutting Edge Applications in Digital Music'.
27 April 2017 @ 10:30 am - 11:30 am
SUTD Think Tank 20 (Building 2, Level 3)
8 Somapah Road, Singapore 487372
Category: Seminar Series ISTD
A few weeks ago, Prof. Ching-Hua Chuan (University of North Florida) presented a system that we have developed together, called IMMA, at the IEEE International Conference on Semantic Computing in San Diego. IMMA is a multi-modal interface that shows audio- and score-based features of a performance and its score in sync. The current version implements a module for tension; additional modules are expected to be implemented soon. Check out a demo or read more.
International Workshop on Deep Learning for Music
In conjunction with the 2017 International Joint Conference on Neural Networks
14-19 May 2017 (one-day workshop), Anchorage
There has been tremendous interest in deep learning across many fields of study. Recently, these techniques have gained popularity in the field of music. Projects such as Magenta (Google Brain's music generation project), Jukedeck and others testify to their potential.
Last week I was in Singapore to present, at IEEE TENCON, the MorpheuS research I have carried out on my Marie Curie grant. The title of my talk was "MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles". I was also invited to give a seminar on "Machine learning and optimization applied to digital music" at the High Performance Computing Institute at A*STAR in Singapore. A short abstract of the talk at A*STAR:
In 2014 I defended my PhD thesis, Compose=Compute - Computer Generation And Classification Of Music Through Operations Research Methods, at the University of Antwerp under the supervision of Prof. dr. Kenneth Sörensen. In this research, I developed automatic music generation systems using metaheuristic optimization techniques, combined with rules and machine learning. Other topics included composer classification, an Android app that generates music in the style of a chosen (or mixed) composer, and an app that performs dance hit prediction based on an audio file.
This week I am attending the ZiF Workshop in Bielefeld, Germany. This interesting workshop is bringing together computer scientists and cognitive scientists to develop a joint view of, and approach to, computational creativity.
From Computational Creativity to Creativity Science
Date: 19 - 22 September 2016
Convenors: Kai-Uwe Kühnberger (Osnabrück, GER), Emilios Cambouropoulos (Thessaloniki, GRE), Oliver Kutz (Bozen, ITA)
This week, I'm in San Francisco for the ICMPC conference. Many interesting topics are addressed in this biennial conference on music perception and cognition, including cognitive models of music and music & health. My talk presented the model for tonal tension and the resulting music generation system, MorpheuS, that I have developed together with Elaine Chew at the Centre for Digital Music in London.