Highlights/Upcoming events
New MOE Tier 2 grant on music generation with emotion that matches video
Posted by dorien on Tuesday, 26 February 2019. Having done my postdoc and PhD on music generation (see the MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant on 'aiMuVi: AI Music generated from Videos', with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD and Prof. Eran Gozy from MIT (creator of Guitar Hero), with Prof. Kat Agres from A*STAR/NUS as collaborator.
Interview for wearesutd: LEADING WOMEN IN TECH & DESIGN: At the cutting edge of music and AI
Posted by dorien on Wednesday, 16 January 2019. Are you a woman considering a career in technology? I talked to wearesutd about being a woman in the field of audio/music and AI. Read the full article
New paper on Singing Voice Estimation in Neural Computing and Applications (Springer)
Posted by dorien on Friday, 7 December 2018. Together with Edward Lin, Enyan Koh, and Dr. Balamurali BT from SUTD, and Dr. Simon Lui from Tencent Music (formerly SUTD), we published a paper on using an ideal binary mask with a CNN to separate the singing voice from its musical accompaniment:
Lin K.W.E., BT B., Koh E., Lui S., Herremans D. In Press. Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy. Neural Computing and Applications. DOI: 10.1007/s00521-018-3933-z.
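The ideal binary mask (IBM) used as the training target in the paper is simple to state: a time-frequency bin is assigned to the voice whenever the vocal magnitude exceeds the accompaniment magnitude. A minimal sketch of that idea (not the paper's implementation; function names are illustrative):

```python
def ideal_binary_mask(vocal_mag, accomp_mag):
    """IBM: 1 in each time-frequency bin where the vocal magnitude
    dominates the accompaniment, 0 otherwise."""
    return [[1.0 if v > a else 0.0 for v, a in zip(vrow, arow)]
            for vrow, arow in zip(vocal_mag, accomp_mag)]

def apply_mask(mixture_mag, mask):
    """Masking the mixture keeps only the bins labelled as vocal."""
    return [[x * m for x, m in zip(xrow, mrow)]
            for xrow, mrow in zip(mixture_mag, mask)]

# Toy magnitude spectrograms: rows = frequency bins, cols = time frames
vocal  = [[0.9, 0.1], [0.2, 0.8]]
accomp = [[0.3, 0.5], [0.6, 0.1]]
mask = ideal_binary_mask(vocal, accomp)   # [[1.0, 0.0], [0.0, 1.0]]
# In the paper's setup, a CNN is trained with cross-entropy loss to
# predict this mask from the mixture spectrogram alone; at test time
# the predicted mask is applied to the mixture to recover the voice.
```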
New publication on modeling music with word2vec in Springer's Neural Computing and Applications
Posted by dorien on Thursday, 29 November 2018. Together with Prof. Ching-Hua Chuan from the University of Miami and Prof. Kat Agres from IHPC, A*STAR, I've just published a new article, 'From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec', in Springer's Neural Computing and Applications (impact factor 4.213). The article describes how the popular word2vec embedding model can be used to model complex polyphonic pieces of music. The preprint is available here.
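To give an idea of the approach: word2vec operates on sequences of discrete 'words', so a polyphonic score must first be cut into equal-duration slices, each represented by the pitches sounding in it. A hypothetical sketch of such a tokenization step (the slicing details here are illustrative, not the paper's exact scheme):

```python
from collections import defaultdict

def slices_to_tokens(notes, slice_len=1.0):
    """Segment a note list of (onset, duration, pitch) triples into
    equal-duration slices; each slice's sounding pitch-class set
    becomes one 'word' for a word2vec-style model."""
    slices = defaultdict(set)
    for onset, dur, pitch in notes:
        start = int(onset // slice_len)
        end = int((onset + dur - 1e-9) // slice_len)  # last slice the note touches
        for i in range(start, end + 1):
            slices[i].add(pitch % 12)  # reduce to pitch classes
    n = max(slices) + 1 if slices else 0
    return ["_".join(map(str, sorted(slices[i]))) for i in range(n)]

# Toy piece: a C major triad in beat one, then a lone G in beat two
notes = [(0.0, 1.0, 60), (0.0, 1.0, 64), (0.0, 1.0, 67), (1.0, 1.0, 67)]
tokens = slices_to_tokens(notes)  # ['0_4_7', '7']
# The resulting token sequences can be fed to any word2vec trainer,
# which then places harmonically related slices near each other.
```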
New Frontiers in Psychology paper on A Novel Graphical Interface for the Analysis of Music Practice Behaviors
Posted by dorien on Thursday, 29 November 2018. The paper I wrote together with Janis Sokolovskis and Elaine Chew from QMUL, called A Novel Interface for the Graphical Analysis of Music Practice Behaviours, was just published in Frontiers in Psychology - Human-Media Interaction. Read the full article here or download the pdf.
Grant from MIT-SUTD IDC on "An intelligent system for understanding and matching perceived emotion from video with music"
Posted by dorien on Wednesday, 21 November 2018. A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Gozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.
How to make Halloween scarier with music...
Posted by dorien on Thursday, 1 November 2018. My colleague, Dr. Kat Agres from the Institute of High Performance Computing, A*STAR, explains how music can make us feel scared... Link to video.
New grant from MIT-SUTD IDC on an AR Game for Climate Change Awareness
Posted by dorien on Friday, 26 October 2018. Prof. Lucienne Blessing (co-PI, SUTD), Prof. Lynette Chea (co-PI, SUTD), Prof. Takehiko Nagakura (co-PI, MIT) and myself (PI) have just been awarded a grant from the International Design Center (MIT-SUTD). The grant funds an Augmented Reality game to stimulate climate change awareness and support Project P280 within SUTD. The game design will be done by the SUTD Game Lab.
Channel News Asia documentary on MorpheuS music generation algorithm
Posted by dorien on Thursday, 18 October 2018. My research was featured in a documentary called 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around 17 min in), followed by an interview about my research (21 min in).
Episode 1: Rage Against The Machine
Talk at One-North Festival on Music and AI: Generating Music and More!
Posted by dorien on Monday, 3 September 2018. I'll be giving an invited talk on Music and AI: Generating Music and More! this coming Friday at the one-north Festival. The one-north Festival is an annual celebration of research, innovation, creativity and enterprise. Jointly organised by the Agency for Science, Technology and Research (A*STAR) and JTC, and supported by Science Centre Singapore, this very exciting event welcomes the public to immerse themselves in science and technology through fun interactive displays, inspiring talks, workshops and tours for the whole family.
PhD defence of Natalie Angus - Real-Time Binaural Auralization - August 1st
Posted by dorien on Monday, 16 July 2018. I'm very happy to announce that Natalie Angus will be doing her PhD defence on Real-Time Binaural Auralization on August 1st. Natalie is supervised by myself and Dr. Simon Lui (Director at Tencent Music). Download and read her thesis here. Some information about the seminar:
Title: Real-Time Binaural Auralization
When: 15:00, August 1st, 2018.
Where: Think Tank 20 (Building 2, Level 3), SUTD
Postdoc & Eng. job opening in affective computing for video/music. MIT-SUTD IDC center Singapore
Posted by dorien on Friday, 8 June 2018. I am excited to announce the following job openings for a new SUTD-MIT project on emotion recognition in video and audio, with an immediate start date.
We are seeking a postdoctoral fellow and a research engineer to work on an affective computing research project related to emotion recognition for videos and music. The PIs of the project are Prof. Gemma Roig, Prof. Dorien Herremans, and Dr. Kat Agres. The position is for one year, extendable to two years.
Upcoming talk at the Women in Data Science (WiDS) event
Posted by dorien on Friday, 23 March 2018. I am excited to give a talk at the upcoming Women in Data Science (WiDS) 2018 event taking place on 11 April 2018 at the Singapore University of Technology & Design (SUTD). This is part of the growing global WiDS community and conference series, taking place in 150+ locations around the world, coinciding with WiDS at Stanford on 5 March 2018. #sheinnovates
Guest lecture at NUS on music modelling
Posted by dorien on Thursday, 22 March 2018. Today, Prof. Wang Ye, general chair of this year's ISMIR, was nice enough to invite me to give a guest lecture in his Sound and Music Computing class at NUS. The class opened with a lecture by Prof. Lonce Wyse, who talked about sound modelling with machine learning. In my talk, we focused on music modelling from a generative perspective. For those interested, you can download my slides here.
Video on MorpheuS - Music generation with patterns and tension profile
Posted by dorien on Monday, 5 March 2018. In this short video, Prof. Elaine Chew and I briefly talk about the EU Marie-Curie project called MorpheuS that I completed at Queen Mary University of London.
Talk at first music research symposium, Singapore
Posted by dorien on Sunday, 4 March 2018. In order to bring the music research community in Singapore together, the Music Cognition group at IHPC, A*STAR, led by Dr. Kat Agres, organized the first Singaporean Music Research Symposium on Feb. 2nd. With over 70 participants, the day was a true success.
How convolution works - Certified NVIDIA Deep Learning Institute Ambassador Instructor
Posted by dorien on Thursday, 14 December 2017. I'm happy to announce that I just got certified as an official NVIDIA Deep Learning Institute instructor and ambassador.
"The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve the world's most challenging problems with deep learning."
New article on the MorpheuS music generation system in IEEE Transactions on Affective Computing
Posted by dorien on Tuesday, 5 December 2017. I'm pleased to announce the latest article I wrote together with Prof. Elaine Chew on MorpheuS, published in IEEE Transactions on Affective Computing. The paper explains the inner workings of MorpheuS, a music generation system that is able to generate pieces with a fixed pattern structure and a given tension profile.
Herremans D., Chew E. 2017. MorpheuS: generating structured music with constrained patterns and tension. IEEE Transactions on Affective Computing (In Press).
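MorpheuS itself uses variable neighborhood search with spiral-array tension and pattern constraints; purely to illustrate the optimization objective, here is a much simpler hill-climbing sketch with made-up names, where a candidate piece is scored by how closely its tension curve matches a target profile:

```python
import random

def tension_distance(tension, target):
    """Sum of squared deviations between a piece's tension curve
    and the desired profile -- the quantity the search minimizes."""
    return sum((t - g) ** 2 for t, g in zip(tension, target))

def optimize_to_profile(piece, target, tension_of, steps=500, seed=0):
    """Toy local search: try random single-value changes and keep
    only those that bring the tension curve closer to the target."""
    rng = random.Random(seed)
    best = list(piece)
    best_cost = tension_distance(tension_of(best), target)
    for _ in range(steps):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.choice([-1, 1])  # perturb one note value
        cost = tension_distance(tension_of(cand), target)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Toy run: pretend each value directly IS its tension contribution
target = [0, 2, 4, 2, 0]
result = optimize_to_profile([0, 0, 0, 0, 0], target, tension_of=lambda p: p)
```

Since only improving moves are accepted, the final distance to the profile can never exceed the starting distance; MorpheuS additionally constrains which notes may change so that the detected pattern structure of the template piece is preserved.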
Nicolas Froment visited SUTD to talk about MuseScore (video)
Posted by dorien on Thursday, 23 November 2017. One of the co-founders of my favourite music notation software (MuseScore) visited Singapore last week to give a COIL seminar entitled 'MuseScore: Inside A Successful Open Source Project For Musicians'. For those of you who missed it, you can see a recorded version of the talk here: