The Cognitive Science Conference (CogSci) was held in Montreal, Canada this year. I presented a publication-based talk on 'Towards emotion based music generation: A tonal tension model based on the spiral array', which builds on much of the work from my postdoc fellowship with Prof. Elaine Chew at QMUL (download the short paper, see the original full papers).
- aiSTROM -- A roadmap for developing a successful AI strategy
- Book chapter on Musical stylometry: Characterisation of music
- Joint internship with Sounders Music
- Meet My Lab - podcast from Euraxess
- New roadmap paper on the role of music technology for health care and well-being
- Three IJCNN papers from the AMAAI lab this year!
- Research assistant jobs in Music/Audio and AI
- Keynote at DMRN on controllable music generation
- New paper on Underwater Acoustic Communication Receiver Using Deep Belief Network
- CM-RNN: Hierarchical RNNs for structured music generation
Ever wondered if popular techniques from NLP can be ported to music? In the article below on Towards Data Science, I elaborate on my recent paper in Neural Computing and Applications with Prof. Ching-Hua Chuan and Prof. Kat Agres on using word2vec for polyphonic music. What do you think? Leave your comments on Medium!
Our team at Singapore University of Technology and Design (SUTD) is looking for a postdoc fellow in automatic music generation. You will be joining our team in music/audio/vision AI supervised by Prof. Dorien Herremans, Prof. Gemma Roig and Prof. Kat Agres. More information on the music/audio team here.
Prof. Ching-Hua Chuan and I recently edited a Special Issue for Springer's Neural Computing and Applications (IF: 4.213). The idea for the issue came out of the 1st International Workshop on Deep Learning for Music that we organized in Anchorage, US, as part of IJCNN in 2017. We received a nice collection of very interesting articles from scholars all over the world. The issue is set to come out soon (stay tuned).
Having done my postdoc and PhD on music generation (see MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant on 'aiMuVi: AI Music generated from Videos' with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD, Prof. Eran Gozy from MIT (creator of Guitar Hero), and collaborator Prof. Kat Agres from A*STAR/NUS.
Together with Edward Lin, Enyan Koh, and Dr. Balamurali BT from SUTD, and Dr. Simon Lui from Tencent Music (formerly SUTD), we published a paper on using an ideal binary mask with a CNN to separate the singing voice from its musical accompaniment:
Lin, K.W.E., Balamurali, B.T., Koh, E., Lui, S., & Herremans, D. (in press). Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy. Neural Computing and Applications. DOI: 10.1007/s00521-018-3933-z.
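For readers curious what an ideal binary mask looks like in practice, here is a minimal sketch (my own simplified illustration under assumed STFT settings, not the code or model from the paper): given isolated vocal and accompaniment tracks, the mask marks the time-frequency bins where the vocal dominates, and a network such as the CNN in the paper can then be trained to predict this mask from the mixture spectrogram.

```python
import numpy as np
import librosa

def ideal_binary_mask(vocal_path, accompaniment_path, n_fft=1024, hop_length=256):
    """Compute an ideal binary mask (IBM) from isolated vocal and accompaniment tracks.

    The mask is 1 where the vocal magnitude exceeds the accompaniment magnitude
    in a time-frequency bin, and 0 elsewhere.
    """
    vocal, sr = librosa.load(vocal_path, sr=None, mono=True)
    accomp, _ = librosa.load(accompaniment_path, sr=sr, mono=True)

    # Match lengths before taking the STFT so the spectrograms align
    n = min(len(vocal), len(accomp))
    V = np.abs(librosa.stft(vocal[:n], n_fft=n_fft, hop_length=hop_length))
    A = np.abs(librosa.stft(accomp[:n], n_fft=n_fft, hop_length=hop_length))

    return (V > A).astype(np.float32)  # binary training target

def apply_mask(mixture, mask, n_fft=1024, hop_length=256):
    """Apply a (predicted or ideal) mask to a mixture signal and resynthesise audio.

    The mask and the mixture STFT must have matching shapes.
    """
    M = librosa.stft(mixture, n_fft=n_fft, hop_length=hop_length)
    return librosa.istft(M * mask, hop_length=hop_length)
```

At inference time, of course, no isolated stems are available; the whole point of the trained network is to estimate a mask directly from the mixture.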
Together with Prof. Ching-Hua Chuan from the University of Miami and Prof. Kat Agres from IHPC, A*STAR, I've just published a new article, 'From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec', in Springer's Neural Computing and Applications (impact factor 4.213). The article describes how the popular word2vec embedding model can be used to capture semantic relationships in complex polyphonic pieces of music. The preprint is available here.
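As a rough illustration of the general idea (a toy sketch under my own assumptions, not the preprocessing pipeline from the paper), one can encode each slice of a polyphonic piece as a token of its sounding pitch classes and feed those token sequences to an off-the-shelf word2vec implementation such as gensim:

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus: each piece is a sequence of "slices", and each slice
# is encoded as a single token listing the pitch classes sounding in that slice
# (e.g. "0-4-7" for a C major triad). Treating slices as "words" lets word2vec
# learn embeddings from their musical context.
pieces = [
    ["0-4-7", "0-4-7", "5-9-0", "7-11-2", "0-4-7"],  # toy I-IV-V-I progression
    ["9-0-4", "5-9-0", "7-11-2", "0-4-7"],
]

# Skip-gram model over slice tokens; all hyperparameters are illustrative only.
model = Word2Vec(sentences=pieces, vector_size=32, window=2, sg=1,
                 min_count=1, epochs=200)

# Slices that occur in similar contexts end up close in the embedding space,
# e.g. chords that share a tonal function.
print(model.wv.most_similar("0-4-7"))
```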
A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Gozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (the joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.
Prof. Lucienne Blessing (co-PI, SUTD), Prof. Lynette Cheah (co-PI, SUTD), Prof. Takehiko Nagakura (co-PI, MIT) and myself (PI) have just been awarded a grant from the International Design Center (MIT-SUTD). The goal of the grant is to build an augmented reality game to raise climate change awareness and to support Project P280 within SUTD. The game design will be done by the SUTD Game Lab.
My research was featured in a documentary called 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around the 17-minute mark), followed by an interview about my research (from the 21-minute mark).
I'll be giving an invited talk on 'Music and AI: Generating Music and More!' this coming Friday at the one-north Festival. The one-north Festival is an annual celebration of research, innovation, creativity and enterprise. Jointly organised by the Agency for Science, Technology and Research (A*STAR) and JTC, and supported by Science Centre Singapore, this exciting event welcomes the public to immerse themselves in science and technology through fun interactive displays, inspiring talks, workshops and tours for the whole family.
I'm very happy to announce that Natalie Angus will be doing her PhD defence on Real-Time Binaural Auralization on August 1st. Natalie is supervised by myself and Dr. Simon Lui (Director of Tencent Music). Download and read her thesis here. Some information about the seminar:
Title: Real-Time Binaural Auralization
When: 15:00, August 1st, 2018.
Where: Think Tank 20 (Building 2, Level 3), SUTD
I am excited to announce the following job openings for a new SUTD-MIT project on emotion recognition in video and audio; the start date is as soon as possible.
We are seeking a postdoctoral fellow and a research engineer to work on an affective computing research project related to emotion recognition for videos and music. The PIs of the project include Prof. Gemma Roig, Prof. Dorien Herremans, and Dr. Kat Agres. The positions are for one year, extendable to two years.
I am excited to give a talk at the upcoming Women in Data Science (WiDS) 2018 event taking place on 11 April 2018 at the Singapore University of Technology & Design (SUTD). This is part of the growing global WiDS community and conference series, taking place in 150+ locations around the world and coinciding with WiDS Stanford on 5 March 2018. #sheinnovates
Today, Prof. Wang Ye, general chair of this year's ISMIR, was kind enough to invite me to give a guest lecture in his Sound and Music Computing class at NUS. The class opened with a lecture by Prof. Lonce Wyse, who talked about sound modelling with machine learning. In my talk, we focused on music modelling from a generative perspective. For those interested, you can download my slides here.