- Keynote at DMRN on controllable music generation
- New paper on Underwater Acoustic Communication Receiver Using Deep Belief Network
- CM-RNN: Hierarchical RNNs for structured music generation
- The Effect of Spectrogram Reconstructions on Automatic Music Transcription
- PhD scholarships for audio/music and AI
- Audio engineer - job opening (nnAudio)
- AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies
- VAE for music generation with tension control
- AMAAI lab presentations at ISMIR2020
- Postdoctoral Researcher -- Neural Reasoning [job opening]
Recent PhD graduate Dr. Abigail Lee-Leon, Prof. Chau Yuen, and I just published a paper on 'Underwater Acoustic Communication Receiver Using Deep Belief Network' in IEEE Transactions on Communications. Preprint link. Underwater communication is a challenging field due to the many sources of interference in the channel (e.g. the Doppler effect, boats, fish). This paper models the receiver with a novel deep learning approach.
Nicolas Guo, Dr. Dimos Makris, and I just published a new paper, 'Hierarchical Recurrent Neural Networks for Conditional Melody Generation with Long-term Structure'. Inspired by methods from the audio domain, such as SampleRNN, we explore how to generate melodies conditioned on chords by feeding the model training data at multiple granularities.
Congrats to Kin Wai (Raven) on this interesting paper on leveraging spectrogram reconstruction for music transcription, which was accepted at the International Conference on Pattern Recognition (ICPR2020). Read the preprint here.
Cheuk K.W., Luo Y.J., Benetos E., Herremans D. 2021. The Effect of Spectrogram Reconstructions on Automatic Music Transcription: An Alternative Approach to Improve Transcription Accuracy. Proceedings of the International Conference on Pattern Recognition (ICPR2020).
SINGA is offering fellowships for international PhD students in Singapore. If you are interested in working in the AMAAI lab, send me a message. I am looking to supervise PhD students in Music Information Retrieval or AI for multimedia or finance.
More details on the application: https://www.a-star.edu.sg/Scholarships/for-graduate-studies/singapore-in...
Deadline for applications: January 1st!
Our team at Singapore University of Technology and Design (SUTD) is looking for an RA for 6 months to help develop nnAudio. You will be joining our team in music/audio/vision AI supervised by Prof. Dorien Herremans. More information on the music/audio team at dorienherremans.com/team. You will be working on the PyTorch audio processing tool nnAudio, developed by Cheuk Kin Wai at our lab.
PhD student Thao Phuong's paper on multimodal emotion prediction from movies/music is now available on arXiv, together with the code. AttendAffectNet uses transformers with feature-based attention to attend to the most useful features at any given time when predicting valence/arousal.
Ha Thi Phuong Thao, Balamurali B.T., Dorien Herremans, Gemma Roig, 2020. AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies. arXiv:2010.11188
Congrats to Rui Guo, a former intern at the AMAAI Lab, SUTD, who published a paper on 'A variational autoencoder for music generation controlled by tonal tension', which will be presented next week at The 2020 Joint Conference on AI Music Creativity.
Hao Hao Tan and Jyun Luo will be presenting their work at ISMIR 2020 this week!
Tan H.H., Herremans D. 2020. Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature Modelling. Proceedings of the International Society for Music Information Retrieval Conference (ISMIR). Preprint link.
Your goal is to develop novel approaches to neural reasoning, starting from textually defined tasks, for 50% of your time, and to work on multimodal sarcasm detection for the other 50%. You will be working with Prof. Alex Binder, Prof. Soujanya Poria, and Prof. Dorien Herremans.
The first task on neural reasoning will include work packages such as:
Our team at Singapore University of Technology and Design (SUTD) is looking for an RA in music and AI. You will be joining our team in music/audio/vision AI supervised by Prof. Dorien Herremans. More information on the music/audio team here. You will be working on a one-year project called 'aiMuVi: AI Music generated from Videos', which focuses on automatically generating music by specifying the emotion/tension throughout a musical piece so that it matches the emotional content extracted from videos. Related MIR topics may be explored.
Congratulations to Raven for publishing 'nnAudio: An on-the-fly GPU Audio to Spectrogram Conversion Toolbox Using 1D Convolutional Neural Networks' in IEEE Access. nnAudio lets you compute spectrograms (linear, log, Mel, CQT) on the fly as a layer in PyTorch, which makes the spectrograms finetunable to your task! nnAudio is easy to install with pip; see instructions at https://github.com/KinWaiCheuk/nnAudio
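The core trick behind nnAudio, expressing the STFT as a convolution of the signal with fixed Fourier-basis kernels (which, once placed in a 1D conv layer, become finetunable parameters), can be sketched in plain NumPy. This is a conceptual illustration only, not nnAudio's actual API; the function name and parameters are hypothetical:

```python
import numpy as np

def stft_via_conv(signal, n_fft=256, hop=128):
    """Magnitude spectrogram computed by sliding windowed Fourier-basis
    kernels over the signal -- the same operation a 1D conv layer performs,
    which is how nnAudio makes spectrograms differentiable in PyTorch."""
    n = np.arange(n_fft)
    k = np.arange(n_fft // 2 + 1)[:, None]            # frequency bins 0..n_fft/2
    window = np.hanning(n_fft)
    cos_kernels = np.cos(2 * np.pi * k * n / n_fft) * window   # real part
    sin_kernels = -np.sin(2 * np.pi * k * n / n_fft) * window  # imaginary part

    # Frame the signal (this is what the conv's stride does)
    frames = np.stack([signal[i:i + n_fft]
                       for i in range(0, len(signal) - n_fft + 1, hop)])
    real = frames @ cos_kernels.T                     # (frames, bins)
    imag = frames @ sin_kernels.T
    return np.sqrt(real**2 + imag**2).T               # (bins, frames)

# A 440 Hz sine sampled at 8 kHz should peak near bin 440/8000*256 = 14
sr = 8000
t = np.arange(sr) / sr
spec = stft_via_conv(np.sin(2 * np.pi * 440.0 * t))
print(spec.shape, spec[:, 0].argmax())
```

In nnAudio itself these kernels live inside PyTorch conv layers on the GPU, so gradients can flow back into the spectrogram computation during training.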
Thao Phuong, a PhD student supervised by Prof. Gemma Roig and myself, just passed her preliminary exam! Thao's work is on predicting valence and arousal from both video and audio. Her multimodal models have been published (with more under review). You can read about them here.
Abigail Lee-Leon successfully defended her PhD today, with Prof. Chau Yuen as main supervisor and myself as co-supervisor. Abigail explored how deep learning techniques can be used to denoise and demodulate complex underwater acoustic communication signals, and performed sea trials to gather data for this. Since the PhD is under an NDA with Thales, we cannot post it; however, check out some of Abigail's papers here (more to come after the review process ends).
AI.SG is offering scholarships to top students who are interested in AI research. My lab at Singapore University of Technology and Design has positions for those interested in research on AI for audio/music/finance and NLP/emotion. More information about the lab and our projects: dorienherremans.com
Since this is an elite scholarship, you can only apply if you have:
- a high GPA
- a Master's degree, or existing CORE-ranked publications
Looking for a tool to extract spectrograms on the fly, integrated as a layer in PyTorch? Look no further than nnAudio, a toolbox developed by PhD student Raven (Cheuk Kin Wai): https://github.com/KinWaiCheuk/nnAudio
nnAudio is available on pip (pip install nnAudio); full documentation is available on the GitHub page. Also check out our dedicated paper:
In a collaboration between IIT, India and SUTD, we've published a paper on our new perceptionGAN system in the Proceedings of the 4th Int. Conf. on Imaging, Vision and Pattern Recognition (IVPR) and 9th Int. Conf. on Informatics, Electronics & Vision (ICIEV). Read the preprint.
SUTD's AMAAI lab is organizing online seminars by (worldwide) graduate students active in Music Information Retrieval (MIR) or, more generally, music/audio and AI. The aim is to connect labs working on similar topics and enable international collaboration. Participating universities include SUTD, QMUL,...
The webinars will be organized on Wednesdays at 4pm Singapore time (9am UK time, 10am EU time).