- Audio engineer - job opening (nnAudio)
- AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies
- The Effect of Spectrogram Reconstructions on Automatic Music Transcription
- VAE for music generation with tension control
- AMAAI lab presentations at ISMIR2020
- PhD scholarships for audio/music and AI
- Postdoctoral Researcher -- Neural Reasoning [job opening]
- Research assistant in Music and AI
- nnAudio, our on-the-fly GPU spectrogram extraction toolbox published in IEEE Access
- Congratulations Thao on passing your Preliminary exam on multimodal emotion prediction models
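For readers curious what "on-the-fly spectrogram extraction" (as in the nnAudio announcement above) actually computes: the core operation is a framed, windowed FFT of the audio signal. The sketch below illustrates that computation in plain NumPy; nnAudio's real API differs (it exposes spectrogram extraction as trainable PyTorch layers that run on the GPU), and the function name and defaults here are illustrative only.

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=2048, hop_length=512):
    """Naive STFT magnitude spectrogram: frame, window, FFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop_length
    frames = np.stack([
        signal[i * hop_length : i * hop_length + n_fft] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-negative frequency bins: n_fft // 2 + 1
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, n_frames)

# one second of a 440 Hz sine at a 22050 Hz sampling rate
sr = 22050
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (1025, 40)
```

A toolbox like nnAudio expresses this same transform as matrix multiplications inside a neural network layer, so the spectrogram is computed batch-wise on the GPU during training instead of being precomputed on disk.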
A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Egozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (the joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and marks the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.
Prof. Lucienne Blessing (co-PI, SUTD), Prof. Lynette Cheah (co-PI, SUTD), Prof. Takehiko Nagakura (co-PI, MIT) and I (PI) have just been awarded a grant from the International Design Center (MIT-SUTD). The grant funds the development of an augmented reality game to stimulate climate change awareness and support Project P280 within SUTD. The game design will be done by the SUTD Game Lab.
My research was featured in a documentary called 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around the 17-minute mark), followed by an interview about my research (at 21 minutes).
I'll be giving an invited talk, 'Music and AI: Generating Music and More!', this coming Friday at the one-north Festival. The one-north Festival is an annual celebration of research, innovation, creativity and enterprise. Jointly organised by the Agency for Science, Technology and Research (A*STAR) and JTC, and supported by Science Centre Singapore, this exciting event welcomes the public to immerse themselves in science and technology through fun interactive displays, inspiring talks, workshops and tours for the whole family.
I'm very happy to announce that Natalie Angus will be defending her PhD thesis on Real-Time Binaural Auralization on August 1st. Natalie is co-supervised by myself and Dr. Simon Lui (Director of Tencent Music). Download and read her thesis here. Some information about the seminar:
Title: Real-Time Binaural Auralization
When: 15:00, August 1st, 2018.
Where: Think Tank 20 (Building 2, Level 3), SUTD
I am excited to announce the following job openings for a new SUTD-MIT project on emotion recognition in video and audio; the start date is as soon as possible.
We are seeking a postdoctoral fellow and a research engineer to work on an affective computing research project on emotion recognition for videos and music. The PIs of the project are Prof. Gemma Roig, Prof. Dorien Herremans, and Dr. Kat Agres. Each position is for one year, extendable to two years.
I am excited to give a talk at the upcoming Women in Data Science (WiDS) 2018 event taking place on 11 April 2018 at the Singapore University of Technology and Design (SUTD). This is part of the growing global WiDS community and conference series, taking place in 150+ locations around the world and coinciding with WiDS Stanford on 5 March 2018. #sheinnovates
Today, Prof. Wang Ye, general chair of this year's ISMIR, kindly invited me to give a guest lecture in his Sound and Music Computing class at NUS. The class opened with a lecture by Prof. Lonce Wyse, who talked about sound modelling with machine learning. My talk focused on music modelling from a generative perspective. For those interested, you can download my slides here.
I'm happy to announce that I just got certified as an official NVIDIA Deep Learning Institute instructor and ambassador.
"The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve the world’s most challenging problems with deep learning.
I'm pleased to announce the latest article I wrote together with Prof. Elaine Chew on MorpheuS, published in IEEE Transactions on Affective Computing. The paper explains the inner workings of MorpheuS, a music generation system that can generate pieces with a fixed pattern structure and a given tension profile.
Herremans D., Chew E. 2017. MorpheuS: generating structured music with constrained patterns and tension. IEEE Transactions on Affective Computing (in press).
One of the co-founders of my favourite music notation software, MuseScore, visited Singapore last week to give a COIL seminar entitled 'MuseScore: Inside A Successful Open Source Project For Musicians'. For those of you who missed it, you can see a recorded version of the talk here:
A new journal article was published that I wrote together with Prof. Ching-Hua Chuan and Prof. Elaine Chew. The article surveys current music generation systems from a functional point of view, creating a nice overview of current challenges and opportunities in the field. It covers systems ranging from game music to real-time improvisation systems and emotional movie music generation systems.
Last week the National University of Singapore hosted the International Society for Music Information Retrieval (ISMIR) conference in lovely Suzhou, China. It featured a ton of interesting presentations by established academics in the field, including Prof. Elaine Chew (who also talked about MorpheuS), Roger Dannenberg and others, as well as industry leaders such as Jeffrey C. Smith (Smule) and E. Humphrey (Spotify).
Update: while this particular grant has expired, there are some other opportunities available in my lab.
I am looking for a strong PhD candidate in music and machine learning at Singapore University of Technology and Design (SUTD). SUTD is a relatively new university, founded in collaboration with MIT, that has a strong interdisciplinary focus on design. The available PhD position is at the department of Information Systems Technology and Design (ISTD).
Last week, I started as an Assistant Professor in the Information Systems Technology and Design (ISTD) pillar at the Singapore University of Technology and Design (SUTD), co-founded by MIT.
"The Singapore University of Technology and Design (SUTD) is the fourth autonomous university to be established in Singapore. SUTD's mission is to advance knowledge and nurture technically grounded leaders and innovators to serve societal needs...
I am pleased to announce that I have been elevated to the rank of Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). Only 8 percent of IEEE members attain the level of Senior Member, an honor bestowed only upon those who have made significant contributions to the profession. Senior Members are recognised for their technical and professional excellence, achievements, publications, and course development or technical direction in IEEE-designated fields. I am particularly active in the IEEE Computational Intelligence Society.