Highlights/Upcoming events

Harmonic structure and altered states in trance music - new Oxford book chapter

Together with Dr. Kat Agres (NUS, Singapore) and Prof. Louis Bigo (University of Lille, France), I recently explored how harmonic structure influences altered states in uplifting trance music. "The Impact of Musical Structure on Enjoyment and Absorptive Listening States in Trance Music" is available as a chapter in Music and Consciousness II, a book edited by Ruth Herbert, Eric Clarke and David Clarke (Oxford University Press).

Talk on deep belief networks for Doppler-invariant demodulation - IEEE APWCS

PhD student Abigail Leon from the AMAAI lab presented a paper at the 16th IEEE Asia Pacific Wireless Communications Symposium (APWCS) on "Doppler Invariant Demodulation for Shallow Water Acoustic Communications Using Deep Belief Networks".

New paper on multimodal emotion prediction models from video and audio

I've just published a new article with my PhD student Thao Ha Thi Phuong and Prof. Gemma Roig on 'Multimodal Deep Models for Predicting Affective Responses Evoked by Movies'. The paper will appear in the proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement at ICCV, and will be presented by Thao in Seoul, South Korea. Anybody interested can download the preprint article here (link coming soon!). The source code of our model is available on GitHub.
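
As a rough illustration of the general approach (not the exact architecture from the paper; see the GitHub repository for that), here is a minimal PyTorch sketch of a late-fusion model that combines precomputed audio and video features to predict valence and arousal. All feature dimensions and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class LateFusionAffectModel(nn.Module):
    """Toy late-fusion model: separate audio/video branches, fused head."""
    def __init__(self, audio_dim=128, video_dim=512, hidden=64):
        super().__init__()
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.video_net = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)  # predicts (valence, arousal)

    def forward(self, audio_feats, video_feats):
        fused = torch.cat([self.audio_net(audio_feats),
                           self.video_net(video_feats)], dim=-1)
        return self.head(fused)

model = LateFusionAffectModel()
out = model(torch.randn(4, 128), torch.randn(4, 512))  # batch of 4 segments
print(out.shape)  # torch.Size([4, 2])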

IEEE Conference on Games - talk on a music game for cognitive and physical wellbeing for the elderly

Today I gave a talk at the IEEE Conference on Games at Queen Mary University of London. The prototype game was developed as part of a UROP project led by Prof. Kat Agres (NUS), Prof. Simon Lui (Tencent), and myself (SUTD). Credit for the bulk of the development goes to Xuexuan Zhou!

The full game is described in our proceedings paper, and the slides are available here.

Talk at Cognitive Science Conference in Montreal

The Cognitive Science Conference (CogSci) was held in Montreal, Canada this year. I presented a publication-based talk on 'Towards emotion based music generation: A tonal tension model based on the spiral array', which builds on much of the work done during my postdoc fellowship with Prof. Elaine Chew at QMUL (download the short paper, or see the original full papers).
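
For readers curious about the underlying model: the spiral array maps each pitch to a point on a helix in 3D space, and tonal tension is quantified from the geometry of the resulting 'clouds' of points. Below is a minimal Python sketch of one of the tension measures, the cloud diameter; the parameter values and pitch indexing are illustrative assumptions, and the full model (cloud diameter, cloud momentum and tensile strain) is defined in the papers.

import itertools
import math

R = 1.0                # radius of the spiral (illustrative value)
H = math.sqrt(2 / 15)  # vertical rise per fifth (illustrative value)

def spiral_position(k):
    """3D position of the k-th pitch class along the line of fifths
    (e.g. C = 0, G = 1, D = 2, ...)."""
    return (R * math.sin(k * math.pi / 2),
            R * math.cos(k * math.pi / 2),
            k * H)

def cloud_diameter(fifth_indices):
    """Largest pairwise distance between the sounding pitches: a simple
    proxy for the tonal tension of one time slice."""
    points = [spiral_position(k) for k in fifth_indices]
    return max((math.dist(a, b)
                for a, b in itertools.combinations(points, 2)), default=0.0)

# A C major triad (C, G, E -> 0, 1, 4) forms a compact cloud, while a
# dissonant cluster such as C, B, F# (0, 5, 6) spreads much wider.
print(cloud_diameter([0, 1, 4]))  # lower tension
print(cloud_diameter([0, 5, 6]))  # higher tension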

Postdoc fellow on music generation with emotion - opening

Our team at the Singapore University of Technology and Design (SUTD) is looking for a postdoc fellow in automatic music generation. You will join our music/audio/vision AI team, supervised by Prof. Dorien Herremans, Prof. Gemma Roig and Prof. Kat Agres. More information on the music/audio team is available here.

Editorial for Springer's Deep Learning for Music and Audio special issue

Prof. Ching-Hua Chuan and I recently edited a special issue of Springer's Neural Computing and Applications (IF: 4.213). The idea for the issue grew out of the 1st International Workshop on Deep Learning for Music, which we organized in Anchorage, USA, as part of IJCNN in 2017. We received a nice collection of very interesting articles from scholars all over the world. The issue is set to come out soon (stay tuned).

New MOE Tier 2 grant on music generation with emotion that matches video

Having done my postdoc and PhD on music generation (see the MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant, 'aiMuVi: AI Music generated from Videos', with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD and Prof. Eran Egozy from MIT (creator of Guitar Hero), with Prof. Kat Agres from A*STAR/NUS as collaborator.

Abstract of the proposal:

New paper on Singing Voice Estimation in Neural Computing and Applications (Springer)

Together with Edward Lin, Enyan Koh and Dr. Balamurali BT from SUTD, and Dr. Simon Lui from Tencent Music (formerly SUTD), I published a paper on using an ideal binary mask with a CNN to separate the singing voice from its musical accompaniment:

Lin K.W.E., Balamurali B.T., Koh E., Lui S., Herremans D. (in press). Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy. Neural Computing and Applications. DOI: 10.1007/s00521-018-3933-z.
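
For context, the ideal binary mask assigns each time-frequency bin of the mixture spectrogram to the voice or the accompaniment depending on which source dominates it; the CNN in the paper is trained to predict such a mask from the mixture alone. Here is a minimal Python sketch of the masking idea itself (variable names and shapes are illustrative assumptions, not the paper's code):

import numpy as np

def ideal_binary_mask(vocal_mag, accomp_mag):
    """1 where the vocal dominates a time-frequency bin, else 0."""
    return (vocal_mag > accomp_mag).astype(np.float32)

def apply_mask(mixture_stft, mask):
    """Estimate the vocal signal by keeping only vocal-dominated bins."""
    return mixture_stft * mask

# Toy example with random "spectrograms" of shape (freq_bins, frames):
rng = np.random.default_rng(0)
vocal, accomp = rng.random((513, 100)), rng.random((513, 100))
mask = ideal_binary_mask(vocal, accomp)
estimated_vocal = apply_mask(vocal + accomp, mask)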

New publication on modeling music with word2vec in Springer's Neural Computing and Applications

Together with Prof. Ching-Hua Chuan from the University of Miami and Prof. Kat Agres from IHPC, A*STAR, I've just published a new article, 'From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec', in Springer's Neural Computing and Applications (impact factor 4.213). The article describes how the popular word2vec embedding model can be used to capture semantic relationships in complex polyphonic pieces of music. The preprint is available here.
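
The core trick is to treat short polyphonic slices of a piece as 'words' and their musical surroundings as the context, then train word2vec on these sequences. A minimal sketch with gensim is below; the slice encoding (pitch-class sets as strings) and the hyperparameters are simplified stand-ins for the representation used in the actual paper.

from gensim.models import Word2Vec

# Each piece becomes a sequence of slice tokens, here pitch-class sets.
pieces = [
    ["0-4-7", "0-4-7", "5-9-0", "7-11-2", "0-4-7"],  # I I IV V I in C major
    ["7-11-2", "0-4-7", "5-9-0", "7-11-2"],
]

model = Word2Vec(pieces, vector_size=16, window=2, min_count=1, sg=1, seed=1)

# Slices that occur in similar harmonic contexts end up close together in
# the embedding space, which is how semantic relationships emerge.
print(model.wv.similarity("0-4-7", "5-9-0"))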

New Frontiers in Psychology paper on A Novel Interface for the Graphical Analysis of Music Practice Behaviors

The paper I wrote together with Janis Sokolovskis and Elaine Chew from QMUL, called A Novel Interface for the Graphical Analysis of Music Practice Behaviors, was just published in Frontiers in Psychology - Human-Media Interaction. Read the full article here or download the pdf.

Grant from MIT-SUTD IDC on "An intelligent system for understanding and matching perceived emotion from video with music"

A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Egozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (the joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and marks the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.

New grant from MIT-SUTD IDC on an AR Game for Climate Change Awareness

Prof. Lucienne Blessing (co-PI, SUTD), Prof. Lynette Cheah (co-PI, SUTD), Prof. Takehiko Nagakura (co-PI, MIT) and myself (PI) have just been awarded a grant from the International Design Center (MIT-SUTD). The goal of the grant is to build an Augmented Reality game to stimulate climate change awareness and support Project P280 within SUTD. The game design will be done by the SUTD Game Lab.

Channel News Asia documentary on MorpheuS music generation algorithm

My research was featured in a documentary called 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet consisting of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around 17 minutes in), followed by an interview about my research (21 minutes in).


Episode 1: Rage Against The Machine

Talk at One-North Festival on Music and AI: Generating Music and More!

I'll be giving an invited talk on 'Music and AI: Generating Music and More!' this coming Friday at the one-north Festival. The one-north Festival is an annual celebration of research, innovation, creativity and enterprise. Jointly organised by the Agency for Science, Technology and Research (A*STAR) and JTC, and supported by Science Centre Singapore, this very exciting event welcomes the public to immerse themselves in science and technology through fun interactive displays, inspiring talks, workshops and tours for the whole family.
