Our paper on PreBit, 'A multimodal model with Twitter FinBERT embeddings for extreme price movement prediction of Bitcoin', just got published in Expert Systems with Applications.
- Twitter-based Bitcoin extreme movement predictions with PreBit
- Time-series momentum portfolios with deep multi-task learning
- DiffRoll - Music Transcription with Diffusion
- New paper on the EmoMV datasets published in Information Fusion
- Keynote at AIMC
- New paper in Sensors on Single Image Video Prediction with Auto-Regressive GANs
- Bitcoin extreme price prediction with FinBERT & Twitter
- Job opening: game developer (Unity)
- Our cough models featured in NRF magazine
- Seminar on music and AI at KTH
Congratulations to Joel Ong on publishing our paper on using multi-task deep learning for portfolio construction in Expert Systems with Applications. The paper presents a new way to leverage time-series momentum in a deep learning setting. Read a Twitter thread explaining the basics here.
Great work by Cheuk Kin Wai on his latest paper, 'DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability'.
Cheuk, K. W., Sawata, R., Uesaka, T., Murata, N., Takahashi, N., Takahashi, S., ... & Mitsufuji, Y. (2022). DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability. arXiv preprint arXiv:2210.05148.
Congratulations to Thao on leading the publication of the EmoMV datasets for music-video matching based on emotion!
Pham, Q.-H., Herremans, D., & Roig, G. (2022). EmoMV: Affective Music-Video Correspondence Learning Datasets for Classification and Retrieval. Information Fusion. DOI: 10.1016/j.inffus.2022.10.002
Congrats to my former research assistant Jiahui Huang on his latest paper in Sensors on 'Single Image Video Prediction with Auto-Regressive GANs'. Now we can generate videos of faces with desired emotions!
Huang, Jiahui, Yew Ken Chia, Samson Yu, Kevin Yee, Dennis Küster, Eva G. Krumhuber, Dorien Herremans, and Gemma Roig. "Single Image Video Prediction with Auto-Regressive GANs." Sensors 22, no. 9 (2022): 3533.
It was an honour today to be part of the seminar at the KTH Royal Institute of Technology in Stockholm as part of the dialogues series.
dialogues1: probing the future of creative technology
Subject: “Interaction with generative music frameworks”
Guests: Dorien Herremans and Kıvanç Tatar (Video link to be posted)
Dorien Herremans: Controllable deep music generation with emotion
Excited to be featured on the latest 'AI and You - What is AI? How will it affect your life, your work, and your world?' podcast by Peter Scott from Human Cusp.
We're focusing on AI in music: What's the state of the art in AI music composition, how can human composers use it to their advantage, and what is the AI Song Contest? How do musical AIs surprise their creators and how are they like your grandmother trying to explain death metal?
Last year I was honoured to be part of the panel discussion on 'Challenging the limits of AI for the next generation of co-creative tools - Frontiers of Music and Artificial Intelligence' at Ars Electronica, IRCAM (FR). Watch the video below.
Congrats to Kin Wa Cheuk for his published paper in the ACM Multimedia conference (A*) on 'ReconVAT: A Semi-Supervised Automatic Music Transcription Framework for Low-Resource Real-World Data'. If you are interested in training low-data music transcription models with semi-supervised learning, check out the full paper here, or access the preprint.
Watch Raven's talk here:
Sounders Music in the Netherlands (https://soundersmusic.com/) has an internship opportunity for an MSc or PhD student in data analytics for music. The internship will be (remotely) co-supervised by myself (Prof. Dorien Herremans, SUTD) and the founder of Sounders Music (Willem Bloem).
Over the last few years, we developed Project PEAR at SUTD Game Lab. Project PEAR is a geolocation-based augmented reality game aimed at educating players about climate change as well as influencing their behaviour. We just published a study in Sustainability on the effectiveness of this game.
Leading countless AI projects has left me very aware of the challenges we may encounter during the development process. I therefore created a roadmap for AI managers and consultants to follow when creating an AI strategy, so they can better navigate the road to a successful AI strategy. The aiSTROM roadmap was just published in IEEE Access. Read the full article here.
Our team at the Singapore University of Technology and Design (SUTD) is looking for an RA or postdoc in music and AI. You will be joining our AMAAI Lab in music/audio/vision AI, supervised by Prof. Dorien Herremans. At our lab, we aim to advance the state of the art in AI for music and audio. More information on the music/audio team here. We have multiple research lines that need your expertise, in both symbolic music (MIDI) and audio.
I'm quite excited to announce the book chapter I wrote with Emeritus Prof. P. Kroonenberg from the University of Leiden. Prof. Kroonenberg just published an amazingly meticulous and interesting book on Multivariate Humanities, and I was happy to collaborate with him on the chapter "Musical stylometry: Characterisation of music" (pp. 347-370).
I'm excited to announce this internship opportunity with Sounders Music in the Netherlands. This internship in data analytics for music will be (remotely) co-supervised by myself (Prof. Dorien Herremans) and the founder of Sounders Music (Willem Bloem). If you are interested, send Willem a message with the subject [Sounders internship] to willem [period] bloem [aat] noticesound.com. Ideally we can come to a research publication at the end of the project.