Highlights/Upcoming events

Special issue published in Neural Computing and Applications

The special issue on Deep Learning for Music and Audio has been published in Neural Computing and Applications (2018 impact factor: 4.664). The guest editors for this special issue were Prof. Ching-Hua Chuan from the University of Miami and myself.

Congrats to PhD student Yin-Jyun for publishing a paper at ICASSP

PhD student Yin-Jyun Luo's paper on 'Singing voice conversion with disentangled representations of singer and vocal technique using variational autoencoders' was accepted for the upcoming ICASSP conference in Barcelona, Spain. You can read the preprint on arXiv.

Job opening for PhD students and Game Developers at SUTD Game Lab

Our team at SUTD Game Lab, directed by Prof. Dorien Herremans, is looking for:

PhD students in Game Research with a focus on AI or AR

Do you love gaming and want to make it your specialisation? We are a vibrant team at the Singapore University of Technology and Design that creates serious games for industry and academia. Our team consists of artists, game designers, and game developers. As a PhD student, you will work on your own research ideas, possibly combined with one of the team's projects, on topics such as:

Best student paper for multimodal emotion prediction paper

PhD student Thao Phuong's paper on "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" was awarded best student paper at the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV in Seoul, South Korea. The paper explores how models based on video and audio can predict the emotions evoked by movies.

New paper on Multimodal Deep Models for Predicting Affective Responses Evoked by Movies

Together with my PhD student Thao and Prof. Gemma Roig (MIT/Frankfurt University), I published a new paper on "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" in the Proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV in Seoul, South Korea, 2019. A preprint is available here.

Harmonic structure and altered states in trance music - new Oxford book chapter

Together with Dr. Kat Agres (NUS, Singapore) and Prof. Louis Bigo (University of Lille, France), I recently explored how harmonic structure influences altered states in uplifting trance music. "The Impact of Musical Structure on Enjoyment and Absorptive Listening States in Trance Music" is available as a chapter in Music and Consciousness II, a book edited by Ruth Herbert, Eric Clarke and David Clarke.

Talk on deep belief networks for doppler invariant demodulation - IEEE APWCS

PhD student Abigail Leon from the AMAAI lab presented a paper at the 16th IEEE Asia Pacific Wireless Communications Symposium (APWCS) on "Doppler Invariant Demodulation for Shallow Water Acoustic Communications Using Deep Belief Networks".

New paper on multimodal emotion prediction models from video and audio

Just published a new article with my PhD student Thao Ha Thi Phuong and Prof. Gemma Roig on 'Multimodal Deep Models for Predicting Affective Responses Evoked by Movies'. The paper will appear in the proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV, and will be presented by Thao in Seoul, South Korea. Anybody interested can download the preprint article here (link coming soon!). The source code of our model is available on GitHub.
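The general idea behind a multimodal model like this can be illustrated with a minimal late-fusion sketch: extract features from each modality, concatenate them, and regress affect dimensions such as valence and arousal. The feature sizes, random data, and plain least-squares regressor below are purely illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical late-fusion sketch (not the paper's model):
# concatenate per-clip audio and video features, then fit a
# linear regressor onto two affect targets (valence, arousal).
rng = np.random.default_rng(0)
audio_feat = rng.normal(size=(8, 4))   # 8 clips, 4 audio features each
video_feat = rng.normal(size=(8, 6))   # 8 clips, 6 video features each
fused = np.concatenate([audio_feat, video_feat], axis=1)  # shape (8, 10)

targets = rng.normal(size=(8, 2))      # valence/arousal labels per clip
w, *_ = np.linalg.lstsq(fused, targets, rcond=None)  # least-squares fit
pred = fused @ w                       # predicted affect, shape (8, 2)
```

In practice the fusion step and the regressor would be learned neural layers, but the shape bookkeeping is the same.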

IEEE Conference on Games - talk on music game for cognitive and physical wellbeing for elderly

Today I gave a talk at the IEEE Conference on Games at Queen Mary University of London. The prototype game was developed as part of a UROP project led by Prof. Kat Agres (NUS), Prof. Simon Lui (Tencent), and myself (SUTD). Credit for the bulk of the development goes to Xuexuan Zhou!

The full game is described in our proceedings paper and the slides are available here:

Talk at Cognitive Science Conference in Montreal

The Cognitive Science Conference (CogSci) was held in Montreal, Canada this year. I presented a publication-based talk on 'Towards emotion based music generation: A tonal tension model based on the spiral array', which builds on much of the work done during my postdoc fellowship with Prof. Elaine Chew at QMUL (download short paper, see original full papers).

Postdoc fellow on music generation with emotion - opening

Our team at Singapore University of Technology and Design (SUTD) is looking for a postdoc fellow in automatic music generation. You will be joining our team in music/audio/vision AI supervised by Prof. Dorien Herremans, Prof. Gemma Roig and Prof. Kat Agres. More information on the music/audio team here.

Editorial for Springer's Deep Learning for Music and Audio special issue

Prof. Ching-Hua Chuan and I recently edited a Special Issue for Springer's Neural Computing and Applications (IF: 4.213). The idea for the issue came out of the 1st International Workshop on Deep Learning for Music that we organized in Anchorage, US, as part of IJCNN in 2017. We received a nice collection of very interesting articles from scholars all over the world. The issue is set to come out soon (stay tuned).

New MOE Tier 2 grant on music generation with emotion that matches video

Having done my postdoc and PhD on music generation (see MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant on 'aiMuVi: AI Music generated from Videos' with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD, Prof. Eran Gozy from MIT (creator of Guitar Hero), and collaborator Prof. Kat Agres from A*STAR/NUS.

Abstract of the proposal:

New paper on Singing Voice Estimation in Neural Computing and Applications (Springer)

Together with Edward Lin, Enyan Koh, and Dr. Balamurali BT from SUTD, and Dr. Simon Lui from Tencent Music (formerly at SUTD), we published a paper on using an ideal binary mask with a CNN to separate the singing voice from its musical accompaniment:

Lin K.W.E., BT B., Koh E., Lui S., Herremans D. In Press. Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy. Neural Computing and Applications. DOI: 10.1007/s00521-018-3933-z.
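The ideal binary mask at the heart of this training target is simple to state: a time-frequency bin is assigned to the voice when the vocal magnitude exceeds the accompaniment magnitude there. The toy 2x2 spectrograms below are invented for illustration; the paper trains a CNN to predict such a mask, which this sketch does not cover.

```python
import numpy as np

def ideal_binary_mask(vocal_mag, accomp_mag):
    """Ideal binary mask: 1 where the vocal dominates a time-frequency bin."""
    return (vocal_mag > accomp_mag).astype(np.float32)

# Toy magnitude spectrograms (rows: frequency bins, columns: time frames).
vocal = np.array([[0.9, 0.1],
                  [0.4, 0.8]])
accomp = np.array([[0.2, 0.5],
                   [0.6, 0.3]])

mask = ideal_binary_mask(vocal, accomp)
# Applying the mask to the mixture keeps only the vocal-dominated bins.
mixture = vocal + accomp
est_vocal = mask * mixture
```

With real audio, the magnitudes would come from an STFT of the isolated stems, and the estimated vocal would be resynthesised with the mixture phase.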

New publication on modeling music with word2vec in Springer's Neural Computing and Applications

Together with Prof. Ching-Hua Chuan from the University of Miami and Prof. Kat Agres from IHPC, A*STAR, I've just published a new article, 'From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec', in Springer's Neural Computing and Applications (impact factor 4.213). The article describes how word2vec embeddings can be used to model complex polyphonic pieces of music. The preprint is available here.
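The core move in applying word2vec to music is to treat short polyphonic slices as the "words" of a piece, so that co-occurring slices end up close in embedding space. The sketch below only shows the first step, turning a sequence of slice tokens into skip-gram (target, context) pairs; the chord-style labels are a hypothetical illustration, not the paper's actual slice encoding.

```python
# Hypothetical slice tokens for one piece (not the paper's encoding).
piece = ["C:maj", "F:maj", "G:7", "C:maj", "A:min", "F:maj"]

def skipgram_pairs(tokens, window=2):
    """Enumerate (target, context) pairs as word2vec's skip-gram model would."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

pairs = skipgram_pairs(piece, window=1)
```

These pairs would then be fed to a word2vec trainer (e.g. a skip-gram model with negative sampling) to learn one vector per slice.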
