
AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies

PhD student Thao Phuong's paper on multimodal emotion prediction from movies and music is now available on arXiv, together with the code. AttendAffectNet uses transformers with feature-based self-attention to attend to the most informative features at any given time when predicting valence and arousal.

Ha Thi Phuong Thao, Balamurali B.T., Dorien Herremans, Gemma Roig, 2020. AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies. arXiv:2010.11188

Preprint paper.
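For readers curious what feature-based self-attention over modalities can look like in practice, here is a minimal PyTorch sketch. It is not the authors' implementation: all class names, layer sizes, and feature dimensions below are illustrative assumptions. Each modality's feature vector is treated as one token, a Transformer encoder attends across the modality tokens, and a small linear head regresses valence and arousal.

import torch
import torch.nn as nn

class FeatureSelfAttentionSketch(nn.Module):
    """Toy sketch (hypothetical, not the paper's code): one token per modality,
    self-attention across modalities, then a two-output regression head."""

    def __init__(self, feature_dims, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Project each modality (e.g. audio, visual, motion features) to a common size.
        self.projections = nn.ModuleList(
            [nn.Linear(dim, d_model) for dim in feature_dims]
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Two regression outputs: valence and arousal.
        self.head = nn.Linear(d_model, 2)

    def forward(self, features):
        # features: list of tensors, one per modality, each of shape (batch, feature_dim)
        tokens = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=1
        )  # (batch, n_modalities, d_model)
        attended = self.encoder(tokens)  # self-attention across modality tokens
        pooled = attended.mean(dim=1)    # average over modalities
        return self.head(pooled)         # (batch, 2) -> valence, arousal

# Example with made-up feature sizes for three modalities.
model = FeatureSelfAttentionSketch(feature_dims=[512, 1024, 128])
batch = [torch.randn(4, 512), torch.randn(4, 1024), torch.randn(4, 128)]
print(model(batch).shape)  # torch.Size([4, 2])

The averaging over modality tokens is just one simple pooling choice for this sketch; the point is only to show how attention lets the model weight the contribution of each modality's features before the affect regression.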

New paper on Multimodal Deep Models for Predicting Affective Responses Evoked by Movies

Together with my PhD student Thao and Prof. Gemma Roig (MIT/Frankfurt University), I published a new paper, "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies", in the Proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement at ICCV 2019 in Seoul, South Korea. A preprint is available here.