Congrats to Rui Guo, an intern at AMAAI Lab, SUTD, who published a paper on 'A variational autoencoder for music generation controlled by tonal tension', which will be presented next week at 'The 2020 Joint Conference on AI Music Creativity'.
Our team at the Singapore University of Technology and Design (SUTD) is looking for a postdoctoral fellow in automatic music generation. You will be joining our music/audio/vision AI team, supervised by Prof. Dorien Herremans, Prof. Gemma Roig and Prof. Kat Agres. More information on the music/audio team here.
Having done my postdoc and PhD on music generation (see MorpheuS project), I am happy to announce that I am the PI of a new MOE Tier 2 grant on 'aiMuVi: AI Music generated from Videos' with SGD 648,216 in funding. My co-PIs on this project are Prof. Gemma Roig from SUTD, Prof. Eran Gozy from MIT (creator of Guitar Hero), and collaborator Prof. Kat Agres from A*STAR/NUS.
Abstract of the proposal:
My research was featured in a documentary called 'Algorithms' by Channel News Asia. MorpheuS is a music generation algorithm developed by Prof. Elaine Chew and myself at Queen Mary University of London. In the documentary, a string quartet of musicians from the Singapore Symphony Orchestra performs a piece composed by MorpheuS (at around 17min in), followed by an interview about my research (21min in).
I'll be giving an invited talk on Music and AI: Generating Music and More! this coming Friday at the one-north Festival. The one-north Festival is an annual celebration of research, innovation, creativity and enterprise. Jointly organised by the Agency for Science, Technology and Research (A*STAR) and JTC, and supported by Science Centre Singapore, this exciting event welcomes the public to immerse themselves in science and technology through fun interactive displays, inspiring talks, workshops and tours for the whole family.
I'm pleased to announce the latest article I wrote together with Prof. Elaine Chew on MorpheuS, published in IEEE Transactions on Affective Computing. The paper explains the inner workings of MorpheuS, a music generation system that is able to generate pieces with a fixed pattern structure and given tension.
Herremans D., Chew E. 2017. MorpheuS: generating structured music with constrained patterns and tension. IEEE Transactions on Affective Computing. PP(99) (In Press).
A new journal article was published that I wrote together with Prof. Ching-Hua Chuan and Prof. Elaine Chew. The article is a survey of current music generation systems from a functional point of view, creating a nice overview of current challenges and opportunities in the field. The article covers systems ranging from game music to real-time improvisation systems and emotional movie music generation systems.
Last week I was in Singapore to present the MorpheuS research I have performed on my Marie Curie grant at IEEE TENCON. The title of my talk was "MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles". I was also invited to give a seminar on "Machine learning and optimization applied to digital music" at the Institute of High Performance Computing at A*STAR in Singapore. A short abstract of the talk at A*STAR:
On my way through Southern California, I was invited by Prof. Robert Keller, who works on Impro-Visor, to give a talk at Harvey Mudd College in Claremont, CA on July 23rd. The talk focused on my Marie Curie project (MorpheuS): "An automatic composition system for structured music based on optimisation and machine learning".