Highlights/Upcoming events

Postdoctoral Researcher -- Neural Reasoning [job opening]

You will spend 50% of your time developing novel approaches to neural reasoning starting from textually defined tasks, and the other 50% working on multimodal sarcasm detection. You will be working with Prof. Alex Binder, Prof. Soujanya Poria, and Prof. Dorien Herremans.

The first task on neural reasoning will include work packages such as:

Research assistant in Music and AI

Our team at Singapore University of Technology and Design (SUTD) is looking for an RA in music and AI. You will be joining our music/audio/vision AI team, supervised by Prof. Dorien Herremans. More information on the music/audio team here. You will be working on a one-year project called "aiMuVi: AI Music generated from Videos", which focuses on automatically generating music by specifying the emotion/tension throughout the musical piece so that it matches the emotional content extracted from videos. Related MIR topics may be explored.

nnAudio, our on-the-fly GPU spectrogram extraction toolbox published in IEEE Access

Congratulations to Raven for publishing 'nnAudio: An on-the-fly GPU Audio to Spectrogram Conversion Toolbox Using 1D Convolutional Neural Networks' in IEEE Access. nnAudio lets you compute spectrograms (linear, log, Mel, CQT) on the fly as a layer in PyTorch, which makes the spectrograms finetunable for your task! nnAudio is easy to install with pip; see the instructions at https://github.com/KinWaiCheuk/nnAudio

Congratulations to Thao on passing her preliminary exam on multimodal emotion prediction models

Thao Phuong, a PhD student supervised by Prof. Gemma Roig and myself, just passed her preliminary exam! Thao's work is on predicting valence and arousal from both video and audio. Her multimodal models have been published (with more under review). You can read about them here.

Congrats to Abigail on finishing her PhD on deep learning for underwater communication

Abigail Leon successfully defended her PhD today, with Prof. Yuen Chau as main supervisor and myself as co-supervisor. Abigail explored how deep learning techniques can be used to denoise and demodulate complex underwater acoustic communication signals, and performed sea trials to gather data for this. Since the PhD is under an NDA with Thales, we cannot post the thesis itself; however, check out some of Abigail's papers here (more to come after the review process ends).

Exclusive PhD studentships in AI and audio/music/finance/emotion

AI.SG is offering scholarships to top students who are interested in AI research. My lab at Singapore University of Technology and Design has positions for those interested in research on AI for audio/music/finance and NLP/emotion. More information about the lab and our projects: dorienherremans.com

Since this is an elite scholarship, you can only apply if you have:
- a high GPA
- a Master's degree, or existing CORE-ranked publications

PyTorch GPU based audio processing toolkit: nnAudio

Looking for a tool to extract spectrograms on the fly, integrated as a layer in PyTorch? Look no further than nnAudio, a toolbox developed by PhD student Raven (Cheuk Kin Wai): https://github.com/KinWaiCheuk/nnAudio

nnAudio is available on pip (pip install nnaudio), and full documentation is available on the GitHub page. Also check out our dedicated paper.

New paper on perceptionGAN - real-world image construction through perceptual understanding

In a collaboration between IIT (India) and SUTD, we've published a paper on our new perceptionGAN system in the Proceedings of the 4th Int. Conf. on Imaging, Vision and Pattern Recognition (IVPR) and 9th Int. Conf. on Informatics, Electronics & Vision (ICIEV). Read the preprint.

AMAAI MIR Webinars

SUTD's AMAAI lab is organizing online seminars by graduate students worldwide who are active in Music Information Retrieval (MIR), or more generally in music/audio and AI. The aim is to connect labs working on similar topics and enable international collaboration. Participating universities include SUTD, QMUL,...

The webinars will be held on Wednesdays at 4pm Singapore time (9am UK time, 10am EU time).

New jobs, new directions: The impact of leveraging AI in the music business

Over the last few years, there’s been a steady growth in revenue from digital music. In just six years, revenue from music streaming moved from zero to 40 percent of the overall global recorded music industry revenues, according to a report by IFPI. With revenues to the tune of 11.2 billion dollars a year, the digital model is only set to grow. So “is there still room for a traditional record company?”

Read more in the interview I gave to the SUTD Aspire newsletter about my recent keynote at TechHR.

Congrats to PhD student Jyun on publishing a paper at ICASSP

PhD student Yin-Jyun Luo got his paper 'Singing voice conversion with disentangled representations of singer and vocal technique using variational autoencoders' accepted for the upcoming ICASSP conference in Barcelona, Spain. You can read the preprint on arXiv.

Job opening for PhD students and Game Developers at SUTD Game Lab

Our team at SUTD Game Lab, directed by Prof. Dorien Herremans, is looking for:

PhD students in Game Research with focus on AI or AR

Do you love gaming and want to make it your specialisation? We are a vibrant team at Singapore University of Technology and Design that creates serious games for industry and academia. Our team consists of artists, game designers, and game developers. As a PhD student, you will work on your own research ideas, possibly combined with one of the team's projects, on topics such as:

Best student paper for multimodal emotion prediction paper

PhD student Thao Phuong's paper "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" was awarded best student paper at the 2nd International Workshop on Computer Vision for Physiological Measurement, part of ICCV in Seoul, South Korea. The paper explores how models based on video and audio can predict the emotion evoked by movies.

New paper on Multimodal Deep Models for Predicting Affective Responses Evoked by Movies

Together with my PhD student Thao and Prof. Gemma Roig (MIT/Frankfurt University), I published a new paper, "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies", in the Proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, part of ICCV, Seoul, South Korea, 2019. A preprint is available here.
