
Best student paper award for multimodal emotion prediction

PhD student Thao Phuang's paper on "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" was awarded best student paper at the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV in Seoul, South Korea. The paper explores how models based on video and audio can predict the emotions evoked by movies.

New paper on Multimodal Deep Models for Predicting Affective Responses Evoked by Movies

Together with my PhD student Thao and Prof. Gemma Roig (MIT/Frankfurt University), I published a new paper on "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" in the Proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV 2019 in Seoul, South Korea. A preprint is available here.

Grant from MIT-SUTD IDC on "An intelligent system for understanding and matching perceived emotion from video with music"

A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Gozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (the joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and marks the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.