- Keynote at DMRN on controllable music generation
- New paper on Underwater Acoustic Communication Receiver Using Deep Belief Network
- CM-RNN: Hierarchical RNNs for structured music generation
- The Effect of Spectrogram Reconstructions on Automatic Music Transcription
- PhD scholarships for audio/music and AI
- Audio engineer - job opening (nnAudio)
- AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies
- VAE for music generation with tension control
- AMAAI lab presentations at ISMIR2020
- Postdoctoral Researcher -- Neural Reasoning [job opening]
Interested in doing MIR research in Singapore? Let me know: the EU has released the call for Marie Curie fellowships. Having been an MSCA fellow myself, I can highly recommend these! https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/op...
Over the last few years, there’s been a steady growth in revenue from digital music. In just six years, revenue from music streaming moved from zero to 40 percent of the overall global recorded music industry revenues, according to a report by IFPI. With revenues to the tune of 11.2 billion dollars a year, the digital model is only set to grow. So “is there still room for a traditional record company?”
Read more in the interview I gave to the SUTD Aspire newsletter about my recent keynote at TechHR.
The special issue on Deep Learning for Music and Audio has been published in Neural Computing and Applications (Impact factor 4.664 (2018)). The guest editors for this special issue were Prof. Ching-Hua Chuan from the University of Miami and myself.
PhD student Yin-Jyun Luo got his paper on 'Singing voice conversion with disentangled representations of singer and vocal technique using variational autoencoders' accepted for the upcoming ICASSP conference in Barcelona, Spain. You can read the preprint on arXiv.
Our team at SUTD Game Lab, directed by Prof. Dorien Herremans, is looking for:
PhD students in Game Research with focus on AI or AR
Do you love gaming and want to make it your specialisation? We are a vibrant team at Singapore University of Technology and Design that creates serious games for industry and academia. Our team consists of artists, game designers, and game developers. As a PhD student, you will work on your own research ideas, possibly combined with one of the team's projects, on topics such as:
PhD student Thao Phuong's paper on "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" was awarded Best Student Paper at the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV in Seoul, South Korea. The paper explores how models based on video and audio can predict the emotions evoked by movies:
Together with my PhD student Thao and Prof. Gemma Roig (MIT/Frankfurt University), I published a new paper, "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies", in the Proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV in Seoul, South Korea, 2019. A preprint is available here.
Together with Dr. Kat Agres (NUS, Singapore) and Prof. Louis Bigo (University of Lille, France), I recently explored how harmonic structure influences altered states in uplifting trance music. "The Impact of Musical Structure on Enjoyment and Absorptive Listening States in Trance Music" is available as a chapter in Music and Consciousness II, a book edited by Ruth Herbert, Eric Clarke and David Clarke.
One of the clearest machine learning instructors, Andrew Ng, has just released a new book. I found it an interesting and smooth read, with a focus on a few important issues that any data science / AI practitioner will encounter. I highly recommend it. More info on the book and how to get it.
Just published a new article with my PhD student Thao Ha Thi Phuong and Prof. Gemma Roig on 'Multimodal Deep Models for Predicting Affective Responses Evoked by Movies'. The paper will appear in the proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV, and will be presented by Thao in Seoul, South Korea. Anybody interested can download the preprint article here (link coming soon!). The source code of our model is available on GitHub.
Today I gave a talk at the IEEE Conference on Games at Queen Mary University of London. The prototype game was developed as part of a UROP project led by Prof. Kat Agres (NUS), Prof. Simon Lui (Tencent), and myself (SUTD). Credit for the bulk of the development goes to Xuexuan Zhou!
The full game is described in our proceedings paper and the slides are available here:
The Cognitive Science Conference (CogSci) was held in Montreal, Canada this year. I presented a publication-based talk on 'Towards emotion based music generation: A tonal tension model based on the spiral array', which builds on much of the work done during my postdoc fellowship with Prof. Elaine Chew at QMUL (download short paper, see original full papers).
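For readers unfamiliar with the spiral array: it places pitches along a helix so that tonally related pitches (e.g. fifths, major thirds) end up close together in 3D space, and tension can then be estimated from how spread out a chord's notes are, e.g. via the largest pairwise distance in the "cloud" of notes. Below is a minimal sketch of that idea in Python; the radius and pitch-height parameters, the pitch-class table, and the `cloud_diameter` helper are illustrative choices, not the exact settings of the model in the paper.

```python
import numpy as np
from itertools import combinations

# Line-of-fifths index for each pitch class (C = 0); illustrative subset.
FIFTHS = {"F": -1, "C": 0, "G": 1, "D": 2, "A": 3, "E": 4, "B": 5,
          "F#": 6, "C#": 7, "G#": 8, "D#": 9}

def spiral_position(k, r=1.0, h=np.sqrt(2.0 / 15.0)):
    """Position of line-of-fifths index k on the spiral array helix:
    a quarter turn per step along the line of fifths, rising by h."""
    return np.array([r * np.sin(k * np.pi / 2),
                     r * np.cos(k * np.pi / 2),
                     k * h])

def cloud_diameter(pitch_names):
    """Largest pairwise distance among a chord's notes in the spiral --
    a rough tension measure: dissonant clusters spread out more."""
    pts = [spiral_position(FIFTHS[p]) for p in pitch_names]
    return max(np.linalg.norm(a - b) for a, b in combinations(pts, 2))

# A consonant triad sits more compactly on the helix than a chromatic cluster.
triad = cloud_diameter(["C", "E", "G"])      # C major triad
cluster = cloud_diameter(["C", "C#", "D"])   # chromatic cluster
print(triad, cluster)
```

Because C, E and G are near each other on the line of fifths, the triad's cloud stays compact, while the chromatic cluster's notes are flung far apart along the helix, yielding a larger diameter.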
Ever wondered if popular techniques from NLP can be ported to music? In the article below on Towards Data Science, I elaborate on my recent paper in Neural Computing and Applications with Prof. Ching-Hua Chuan and Prof. Kat Agres on using word2vec for polyphonic music. What do you think? Leave your comments below on Medium!
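The core idea is to treat each polyphonic slice (the set of pitches sounding together) as a "word" and learn an embedding from the slices around it, just as word2vec does for words in a sentence. Here is a minimal skip-gram-with-negative-sampling sketch in NumPy on a made-up toy corpus; the tokens, hyperparameters, and `most_similar` helper are all illustrative, not the paper's actual setup.

```python
import numpy as np

# Toy corpus: each token is a polyphonic slice serialised as a string.
corpus = ["C-E-G", "F-A-C", "G-B-D", "C-E-G", "A-C-E",
          "F-A-C", "G-B-D", "C-E-G"] * 25

vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                      # vocabulary size, embedding dim
rng = np.random.default_rng(0)
W_in = rng.normal(0.0, 0.1, (V, D))       # target-slice embeddings
W_out = rng.normal(0.0, 0.1, (V, D))      # context-slice embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, n_neg = 0.05, 2, 3
for epoch in range(5):
    for t in range(len(corpus)):
        wi = idx[corpus[t]]
        for j in range(max(0, t - window), min(len(corpus), t + window + 1)):
            if j == t:
                continue
            # one true context pair plus a few random negative samples
            pairs = [(idx[corpus[j]], 1.0)]
            pairs += [(int(rng.integers(V)), 0.0) for _ in range(n_neg)]
            for ci, label in pairs:
                grad = sigmoid(W_in[wi] @ W_out[ci]) - label
                W_in[wi], W_out[ci] = (W_in[wi] - lr * grad * W_out[ci],
                                       W_out[ci] - lr * grad * W_in[wi])

def most_similar(token):
    """Nearest neighbour of a slice by cosine similarity of embeddings."""
    v = W_in[idx[token]]
    sims = (W_in @ v) / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v))
    return next(vocab[i] for i in np.argsort(-sims) if vocab[i] != token)
```

After training, slices that occur in similar harmonic contexts end up with nearby embeddings, which is exactly the property the paper probes for tonal structure.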
Our team at Singapore University of Technology and Design (SUTD) is looking for a postdoc fellow in automatic music generation. You will be joining our team in music/audio/vision AI supervised by Prof. Dorien Herremans, Prof. Gemma Roig and Prof. Kat Agres. More information on the music/audio team here.
Prof. Ching-Hua Chuan and I recently edited a Special Issue for Springer's Neural Computing and Applications (IF: 4.213). The idea for the issue came out of the 1st International Workshop on Deep Learning for Music that we organized in Anchorage, US, as part of IJCNN in 2017. We received a nice collection of very interesting articles from scholars all over the world. The issue is set to come out soon (stay tuned).
Having done my postdoc and PhD on music generation (see MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant on 'aiMuVi: AI Music generated from Videos' with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD, Prof. Eran Gozy from MIT (creator of Guitar Hero), and collaborator Prof. Kat Agres from A*STAR/NUS.
Abstract of the proposal: