Highlights/Upcoming events

New paper on Underwater Acoustic Communication Receiver Using Deep Belief Network

Recent PhD graduate Dr. Abigail Lee-Leon, Prof. Chau Yuen, and I have just published a paper on 'Underwater Acoustic Communication Receiver Using Deep Belief Network' in IEEE Transactions on Communications. Preprint link. Underwater communication is a challenging field due to the many sources of interference in the channel (e.g. the Doppler effect, boats, fish). This paper models the receiver with a novel deep learning approach.

CM-RNN: Hierarchical RNNs for structured music generation

Nicolas Guo, Dr. Dimos Makris, and I have just published a new paper on Hierarchical Recurrent Neural Networks for Conditional Melody Generation with Long-term Structure. Inspired by methods from the audio domain, such as SampleRNN, we explore how to generate melodies conditioned on chords by feeding the network training data at multiple levels of granularity.
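To make the multi-granularity idea concrete, here is a minimal, hypothetical PyTorch sketch (not the paper's actual CM-RNN architecture; all module names, sizes, and the 16-notes-per-bar grid are my own assumptions): a slow bar-level GRU conditions a fast note-level GRU, with chord vectors injected at the input.

    import torch
    import torch.nn as nn

    class HierarchicalMelodyRNN(nn.Module):
        def __init__(self, vocab_size=128, chord_dim=24, hidden=256):
            super().__init__()
            self.note_emb = nn.Embedding(vocab_size, hidden)
            self.chord_proj = nn.Linear(chord_dim, hidden)
            self.bar_rnn = nn.GRU(hidden, hidden, batch_first=True)       # coarse granularity
            self.note_rnn = nn.GRU(2 * hidden, hidden, batch_first=True)  # fine granularity
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, notes, chords):
            # notes: (batch, T) note tokens; chords: (batch, T, chord_dim); T divisible by 16
            x = self.note_emb(notes) + self.chord_proj(chords)
            bar_ctx, _ = self.bar_rnn(x[:, ::16])            # one step per bar
            bar_ctx = bar_ctx.repeat_interleave(16, dim=1)   # broadcast back to note rate
            h, _ = self.note_rnn(torch.cat([x, bar_ctx], dim=-1))
            return self.out(h)                               # next-note logits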

The Effect of Spectrogram Reconstructions on Automatic Music Transcription

Congrats, Kin Wai (Raven), on this interesting paper on leveraging spectrogram reconstruction for music transcription, which was accepted at the International Conference on Pattern Recognition (ICPR2020). Read the preprint here.

Cheuk K.W., Luo Y.J., Benetos E., Herremans D., 2021. The Effect of Spectrogram Reconstructions on Automatic Music Transcription: An Alternative Approach to Improve Transcription Accuracy. Proceedings of the International Conference on Pattern Recognition (ICPR2020).
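The core idea can be sketched as follows; this is an assumed simplification of mine, not the paper's actual model: a shared encoder feeds both a frame-wise transcription head and a spectrogram reconstruction head, and the two losses are summed so the encoder retains information useful to both tasks.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TranscriberWithReconstruction(nn.Module):
        def __init__(self, n_bins=229, n_pitches=88, hidden=256):
            super().__init__()
            self.encoder = nn.GRU(n_bins, hidden, batch_first=True, bidirectional=True)
            self.transcribe = nn.Linear(2 * hidden, n_pitches)  # frame-wise piano roll
            self.reconstruct = nn.Linear(2 * hidden, n_bins)    # spectrogram decoder

        def forward(self, spec):                  # spec: (batch, frames, n_bins)
            h, _ = self.encoder(spec)
            return self.transcribe(h), self.reconstruct(h)

    model = TranscriberWithReconstruction()
    spec = torch.randn(4, 100, 229)                       # dummy batch
    roll_logits, spec_hat = model(spec)
    roll_target = torch.randint(0, 2, (4, 100, 88)).float()
    loss = (F.binary_cross_entropy_with_logits(roll_logits, roll_target)
            + F.mse_loss(spec_hat, spec))                 # transcription + reconstruction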

PhD scholarships for audio/music and AI

SINGA (the Singapore International Graduate Award) is offering fellowships for international PhD students in Singapore. If you are interested in working in the AMAAI lab, send me a message. I am looking to supervise PhD students in the domain of Music Information Retrieval, or AI for multimedia or finance.

More details on the application: https://www.a-star.edu.sg/Scholarships/for-graduate-studies/singapore-in...

Deadline for applications: January 1st!


Audio engineer - job opening (nnAudio)

Our team at the Singapore University of Technology and Design (SUTD) is looking for a research assistant for 6 months to help develop nnAudio. You will join our music/audio/vision AI team, supervised by Prof. Dorien Herremans; more information on the team is at dorienherremans.com/team. You will be working on nnAudio, the PyTorch audio processing tool developed by Cheuk Kin Wai in our lab.

AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies

PhD student Thao Phuong's paper on multimodal emotion prediction from movies/music is now available on arXiv, together with the code. AttendAffectNet uses transformers with feature-based attention to attend to the most useful features at any given time when predicting valence and arousal.

Ha Thi Phuong Thao, Balamurali B.T., Dorien Herremans, Gemma Roig, 2020. AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies. arXiv:2010.11188

Preprint paper.
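As a rough illustration of feature-based self-attention (a hypothetical simplification of mine, not the released AttendAffectNet code), each extracted feature vector, e.g. one per modality or descriptor, can be treated as a token that a transformer encoder attends over before regressing valence and arousal:

    import torch
    import torch.nn as nn

    class FeatureAttentionRegressor(nn.Module):
        def __init__(self, feat_dim=128, n_heads=4):
            super().__init__()
            self.attn = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                                   batch_first=True)
            self.head = nn.Linear(feat_dim, 2)   # valence, arousal

        def forward(self, feats):                # feats: (batch, n_features, feat_dim)
            h = self.attn(feats)                 # attention across features, not time
            return self.head(h.mean(dim=1))      # pool and regress

    scores = FeatureAttentionRegressor()(torch.randn(4, 8, 128))  # -> (4, 2)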

Postdoctoral Researcher -- Neural Reasoning [job opening]

You will spend 50% of your time developing novel approaches to neural reasoning, starting from textually defined tasks, and the other 50% working on multimodal sarcasm detection. You will be working with Prof. Alex Binder, Prof. Soujanya Poria, and Prof. Dorien Herremans.

The first task, on neural reasoning, will be organized into several work packages.

nnAudio, our on-the-fly GPU spectrogram extraction toolbox published in IEEE Access

Congratulations to Raven for publishing 'nnAudio: An on-the-fly GPU Audio to Spectrogram Conversion Toolbox Using 1D Convolutional Neural Networks' in IEEE Access. nnAudio lets you calculate spectrograms (linear, log, Mel, CQT) on the fly as a layer in PyTorch, which makes the spectrograms fine-tunable for your task! nnAudio is easy to install with pip; see the instructions at https://github.com/KinWaiCheuk/nnAudio
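A minimal usage sketch (treat the argument names as assumptions and check the nnAudio documentation for your installed version):

    import torch
    from nnAudio import Spectrogram  # pip install nnaudio

    # trainable_mel=True makes the Mel filter kernels learnable, so the
    # spectrogram layer can be fine-tuned end-to-end with the rest of
    # the network.
    spec_layer = Spectrogram.MelSpectrogram(sr=22050, n_mels=128, trainable_mel=True)

    waveform = torch.randn(1, 22050)  # one second of dummy audio
    spec = spec_layer(waveform)       # (batch, n_mels, time_frames); runs on GPU if moved there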

Congratulations, Thao, on passing your preliminary exam on multimodal emotion prediction models

Thao Phuong, a PhD student supervised by Prof. Gemma Roig and myself, has just passed her preliminary exam! Thao's work is on predicting valence and arousal from both video and audio. Her multimodal models have been published (with more under review). You can read about them here.

Congrats to Abigail on finishing her PhD on deep learning for underwater communication

Abigail Lee-Leon has successfully defended her PhD today, with Prof. Chau Yuen as main supervisor and myself as co-supervisor. Abigail explored how deep learning techniques can be used to denoise and demodulate complex underwater acoustic communication signals, and performed sea trials to gather data for this. Since the PhD thesis is under an NDA with Thales, we cannot post it; however, check out some of Abigail's papers here (more to come after the review process ends).

Exclusive PhD studentships in AI and audio/music/finance/emotion

AI.SG is offering scholarships to top students who are interested in AI research. My lab at Singapore University of Technology and Design has positions for those interested in research on AI for audio/music/finance and NLP/emotion. More information about the lab and our projects: dorienherremans.com

Since this is a highly competitive scholarship, you can only apply if you have:
- a high GPA
- a Master's degree, or existing publications in CORE-ranked venues

PyTorch GPU based audio processing toolkit: nnAudio

Looking for a tool to extract spectrograms on the fly, integrated as a layer in PyTorch? Look no further than nnAudio, a toolbox developed by PhD student Raven (Cheuk Kin Wai): https://github.com/KinWaiCheuk/nnAudio

nnAudio can be installed via pip (pip install nnaudio), and full documentation is available on the GitHub page. Also check out our dedicated paper in IEEE Access.

New paper on perceptionGAN - real-world image construction through perceptual understanding

In a collaboration between IIT (India) and SUTD, we have published a paper on our new perceptionGAN system in the Proceedings of the 4th Int. Conf. on Imaging, Vision and Pattern Recognition (IVPR) and the 9th Int. Conf. on Informatics, Electronics & Vision (ICIEV). Read the preprint.

AMAAI MIR Webinars

SUTD's AMAAI lab is organizing online seminars by graduate students (worldwide) who are active in Music Information Retrieval (MIR) or, more generally, music/audio and AI. The aim is to connect labs working on similar topics and to enable international collaboration. Participating universities include SUTD, QMUL,...

The webinars will be organized on Wednesdays at 4pm Singapore time (9am UK time, 10am Central European time).
