Generative Modelling for Controllable Audio Synthesis of Expressive Piano Performance
|Title|Generative Modelling for Controllable Audio Synthesis of Expressive Piano Performance|
|Publication Type|Conference Proceedings|
|Year of Conference|2020|
|Authors|Tan H.H., Luo Y.J., Herremans D.|
|Conference Name|Workshop on Machine Learning for Music Discovery (ML4MD) as part of ICML|
We present a controllable neural audio synthesizer based on Gaussian Mixture Variational Autoencoders (GM-VAE), which can generate realistic piano performances in the audio domain that closely follow temporal conditions on two essential style features of piano performance: articulation and dynamics. We demonstrate how the model applies fine-grained style morphing over the course of synthesizing the audio, driven by conditions in the form of latent variables that can be sampled from the prior or inferred from other pieces. One envisioned use case is to inspire creative, brand new interpretations of existing pieces of piano music.
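To illustrate the conditioning mechanism the abstract describes, here is a minimal, hypothetical sketch: a Gaussian-mixture prior over style latents (standing in for articulation/dynamics conditions) from which a condition can either be sampled or supplied from elsewhere (as if inferred from another piece), plus a simple linear interpolation standing in for style morphing. All class names, dimensions, and mixture parameters are illustrative assumptions, not the paper's actual model.

```python
import random

class GaussianMixturePrior:
    """Toy Gaussian-mixture prior over a low-dimensional style latent.

    Illustrative stand-in for the GM-VAE prior; parameters are made up.
    """

    def __init__(self, means, stds, weights):
        assert len(means) == len(stds) == len(weights)
        self.means, self.stds, self.weights = means, stds, weights

    def sample(self, rng):
        # Pick a mixture component, then sample the latent from it.
        k = rng.choices(range(len(self.weights)), weights=self.weights)[0]
        return [rng.gauss(m, s) for m, s in zip(self.means[k], self.stds[k])]

def morph(z_start, z_end, alpha):
    # Linear interpolation between two style latents: sweeping alpha
    # from 0 to 1 over time gives a crude analogue of fine-grained
    # style morphing over the course of a piece.
    return [(1 - alpha) * a + alpha * b for a, b in zip(z_start, z_end)]

rng = random.Random(0)
prior = GaussianMixturePrior(
    means=[[0.0, 0.0], [2.0, 2.0]],
    stds=[[1.0, 1.0], [0.5, 0.5]],
    weights=[0.5, 0.5],
)
z_a = prior.sample(rng)   # condition sampled from the prior
z_b = [1.0, -1.0]         # condition taken from another piece (stub values)
z_mid = morph(z_a, z_b, 0.5)
```

In the actual system a decoder network would map such a latent condition, frame by frame, to synthesized audio; this sketch only shows where the conditions come from and how interpolating between them yields a morphing trajectory.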