Together with Dr. Kat Agres (NUS, Singapore) and Prof. Louis Bigo (University of Lille, France), I recently explored how harmonic structure influences altered states in uplifting trance music. "The Impact of Musical Structure on Enjoyment and Absorptive Listening States in Trance Music" is available as a chapter in Music and Consciousness II, a book edited by Ruth Herbert, Eric Clarke and David Clarke.
- Postdoctoral Researcher -- Neural Reasoning [job opening]
- Research assistant in Music and AI
- nnAudio, our on-the-fly GPU spectrogram extraction toolbox published in IEEE Access
- Congratulations Thao on passing your Preliminary exam on multimodal emotion prediction models
- Congrats to Abigail on finishing her PhD on deep learning for underwater communication
- Exclusive PhD studentships in AI and audio/music/finance/emotion
- Research Assistant in NLP/sequential models for finance [job opening]
- Congrats Hieu on successfully defending his PhD!
- PyTorch GPU-based audio processing toolkit: nnAudio
- New paper on perceptionGAN - real-world image construction through perceptual understanding
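For readers curious what nnAudio actually speeds up: a spectrogram is just a windowed signal multiplied by a DFT basis, and nnAudio's trick is to implement that basis as PyTorch convolution kernels so the whole computation runs on the GPU, on the fly, during training. The sketch below shows the underlying computation in plain NumPy; the function name and parameters are my own illustration, not nnAudio's API.

```python
import numpy as np

def stft_magnitude(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via framing + a precomputed DFT basis.

    Illustrative only: nnAudio expresses this same computation as 1-D
    convolutions in PyTorch so the kernels live on the GPU and the
    spectrogram is extracted on the fly during training.
    """
    window = np.hanning(n_fft)
    # Real-input DFT basis: one row per frequency bin (0 .. n_fft//2).
    k = np.arange(n_fft // 2 + 1)[:, None]
    n = np.arange(n_fft)[None, :]
    basis = np.exp(-2j * np.pi * k * n / n_fft)
    frames = np.stack([signal[i:i + n_fft] * window
                       for i in range(0, len(signal) - n_fft + 1, hop)])
    return np.abs(frames @ basis.T)  # shape: (n_frames, n_fft//2 + 1)

# A 440 Hz sine at 16 kHz should peak near bin 440/16000*512, i.e. ~14.
sr = 16000
t = np.arange(sr) / sr
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))
print(spec.shape, spec.mean(axis=0).argmax())
```

Because the basis multiplication is just a linear operation, swapping NumPy for PyTorch tensors makes it differentiable and GPU-resident, which is exactly what makes on-the-fly extraction practical in a training loop.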
Andrew Ng, one of the most accessible machine learning instructors out there, has just released a new book. I found it an interesting and smooth read, focused on a few important issues that anyone in data science / AI will encounter. Highly recommended. More info on the book and how to get it.
Just published a new article with my PhD student Thao Ha Thi Phuong and Prof. Gemma Roig on 'Multimodal Deep Models for Predicting Affective Responses Evoked by Movies'. The paper will appear in the proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement at ICCV, and will be presented by Thao in Seoul, South Korea. Anybody interested can download the preprint article here (link coming soon!). The source code of our model is available on GitHub.
Today I gave a talk at the IEEE Conference on Games at Queen Mary University of London. The prototype game was developed as part of a UROP project led by Prof. Kat Agres (NUS), Prof. Simon Lui (Tencent), and myself (SUTD). Credit for the bulk of the development goes to Xuexuan Zhou!
The full game is described in our proceedings paper and the slides are available here:
The Cognitive Science Conference (CogSci) was held in Montreal, Canada this year. I presented a publication-based talk on 'Towards emotion based music generation: A tonal tension model based on the spiral array', based on work done during my postdoc fellowship with Prof. Elaine Chew at QMUL (download the short paper, see the original full papers).
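For background: the spiral array represents pitches as points on a helix ordered by fifths, and tension measures such as the cloud diameter are simply distances in that 3D space. Here is a rough NumPy sketch of the idea; the parameter values (r = 1, h = sqrt(2/15)) are my assumption of the standard calibration, and the function names are my own.

```python
import numpy as np
from itertools import combinations

def spiral_position(fifths_index, r=1.0, h=np.sqrt(2 / 15)):
    """A pitch in the spiral array: pitches ascend a helix in fifths,
    one quarter turn per step (r and h assumed, not from the paper)."""
    k = fifths_index
    return np.array([r * np.sin(k * np.pi / 2),
                     r * np.cos(k * np.pi / 2),
                     k * h])

def cloud_diameter(fifths_indices):
    """Cloud diameter: the largest distance between any two pitches of
    a chord in the spiral array -- one measure of tonal tension."""
    pts = [spiral_position(k) for k in fifths_indices]
    return max(np.linalg.norm(a - b) for a, b in combinations(pts, 2))

# Indices along the line of fifths from C: 0=C, 1=G, 3=A, 4=E, 6=F#.
# A consonant C major triad (C, G, E) spans less of the spiral than a
# more dissonant cluster (C, A, F#), so it gets a smaller diameter.
print(round(cloud_diameter([0, 1, 4]), 3))
print(round(cloud_diameter([0, 3, 6]), 3))
```

The appeal of the representation is that "tense" and "relaxed" sonorities separate geometrically: dissonant pitch sets scatter widely along the helix while consonant ones cluster.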
Ever wondered if popular techniques from NLP can be ported to music? In the article below on Towards Data Science, I elaborate on my recent paper in Neural Computing and Applications with Prof. Ching-Hua Chuan and Prof. Kat Agres on using word2vec for polyphonic music. What do you think? Leave your comments on Medium!
Our team at Singapore University of Technology and Design (SUTD) is looking for a postdoc fellow in automatic music generation. You will be joining our team in music/audio/vision AI supervised by Prof. Dorien Herremans, Prof. Gemma Roig and Prof. Kat Agres. More information on the music/audio team here.
Prof. Ching-Hua Chuan and I recently edited a Special Issue for Springer's Neural Computing and Applications (IF: 4.213). The idea for the issue came out of the 1st International Workshop on Deep Learning for Music that we organized in Anchorage, US, as part of IJCNN in 2017. We received a nice collection of very interesting articles from scholars all over the world. The issue is set to come out soon (stay tuned).
Having done my postdoc and PhD on music generation (see MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant on 'aiMuVi: AI Music generated from Videos' with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD, Prof. Eran Gozy from MIT (creator of Guitar Hero), and collaborator Prof. Kat Agres from A*STAR/NUS.
Abstract of the proposal:
Together with Edward Lin, Enyan Koh, Dr. Balamurali BT from SUTD, and Dr. Simon Lui from Tencent Music (formerly SUTD), I published a paper on using an ideal binary mask with a CNN to separate the singing voice from its musical accompaniment:
Lin K.W.E., BT B, Koh E., Lui S., Herremans D.. In Press. Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy. Neural Computing and Applications. DOI: 10.1007/s00521-018-3933-z.
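For readers wondering what an "ideal binary mask" is: it simply marks each time-frequency bin of the spectrogram by whichever source is louder. The paper trains a CNN to predict this mask from the mixture; the sketch below only shows the training target itself, with toy values I made up for illustration.

```python
import numpy as np

def ideal_binary_mask(vocal_mag, accomp_mag):
    """Ideal binary mask: 1 where the singing voice dominates a
    time-frequency bin, 0 where the accompaniment dominates."""
    return (vocal_mag > accomp_mag).astype(np.float32)

# Toy magnitude spectrograms (freq bins x frames), hypothetical values.
vocals = np.array([[3.0, 0.1],
                   [0.5, 2.0]])
accomp = np.array([[1.0, 0.4],
                   [0.9, 0.3]])

mask = ideal_binary_mask(vocals, accomp)
mixture = vocals + accomp
estimated_vocals = mask * mixture  # apply the mask to recover the voice
print(mask)
```

At test time the ground-truth sources are of course unavailable, which is why the network learns to estimate the mask from the mixture spectrogram alone.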
Together with Prof Ching-Hua Chuan from the University of Miami and Prof. Kat Agres from IHPC, A*STAR, I've just published a new article on 'From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec', in Springer's Neural Computing and Applications (impact factor 4.213). The article describes how we can use word2vec to model complex polyphonic pieces of music using the popular embeddings model. The preprint is available here.
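To give a flavour of the approach: treat slices of polyphonic music as "words" and learn embeddings from the contexts they appear in. The sketch below uses a co-occurrence matrix plus SVD as a simple linear stand-in for word2vec's skip-gram training, not the paper's actual pipeline; the chord labels and window size are made up for illustration.

```python
import numpy as np

# Toy corpus: each "word" is a slice of polyphonic music, written here
# as a chord label (the paper uses fixed-length pitch slices).
corpus = ["C:maj G:maj A:min F:maj".split(),
          "C:maj F:maj G:maj C:maj".split(),
          "A:min F:maj C:maj G:maj".split()]

vocab = sorted({w for seq in corpus for w in seq})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a symmetric context window -- the same
# notion of "context" that the skip-gram objective is trained on.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for seq in corpus:
    for i, w in enumerate(seq):
        for j in range(max(0, i - window), min(len(seq), i + window + 1)):
            if j != i:
                counts[idx[w], idx[seq[j]]] += 1

# A low-rank factorization of the count matrix yields dense vectors,
# a linear-algebra stand-in for neural skip-gram training.
u, s, _ = np.linalg.svd(counts, full_matrices=False)
embeddings = u[:, :2] * s[:2]

def similarity(a, b):
    """Cosine similarity between two slice embeddings."""
    va, vb = embeddings[idx[a]], embeddings[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("C:maj", "G:maj"))
```

The interesting finding of the paper is that such context-driven embeddings pick up genuinely musical relationships (e.g. tonal proximity) without any hand-coded music theory.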
A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Gozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (the joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.
Prof. Lucienne Blessing (co-PI, SUTD), Prof. Lynette Chea (co-PI, SUTD), Prof. Takehiko Nagakura (co-PI, MIT) and myself (PI) have just been awarded a grant from the International Design Center (MIT-SUTD). The grant will fund an Augmented Reality Game to stimulate Climate Change Awareness and support Project P280 within SUTD. The game design will be done by the SUTD Game Lab.
My research was featured in a documentary called 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around 17 min in), followed by an interview about my research (21 min in).
I'll be giving an invited talk on Music and AI: Generating Music and More! this coming Friday at the one-north Festival. The one-north Festival is an annual celebration of research, innovation, creativity and enterprise. Jointly organised by the Agency for Science, Technology and Research (A*STAR) and JTC, and supported by Science Centre Singapore, this exciting event welcomes the public to immerse themselves in science and technology through fun interactive displays, inspiring talks, workshops and tours for the whole family.