Upcoming talks
We are happy to announce two talks on Tuesday 17 October at 2pm at SUTD I3 Lab 1.605.
Title: Exploring NLP Methods in Symbolic MIR: Representations and Models
Abstract:
A current trend in MIR studies is to adapt Natural Language Processing (NLP) methods to music data: a striking observation is the growing number of MIR articles involving Transformers published over the past few years.
Indeed, approaches employing Transformers have established themselves as the current state of the art in numerous NLP tasks, and have also surpassed previous state-of-the-art models in symbolic MIR. However, it is essential to consider the extent to which this approach is justified for music. How can we gain a deeper understanding of the factors contributing to its improved performance?
In this talk, I will present ongoing work on two facets of applying NLP to symbolic music. The first part will be dedicated to the representation of symbolic music, with particular emphasis on how the expressiveness of tokenization can be improved. The second part will focus on models, specifically Transformer-based models, and their explainability on a particular task: harmonic analysis.
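As background for the tokenization discussion above: a common family of symbolic-music tokenizations turns each note into a short sequence of event tokens. The sketch below is a minimal, hypothetical simplification of a REMI-style encoding (Bar / Position / Pitch / Duration tokens on a fixed time grid), not the speaker's actual method.

```python
# Minimal, hypothetical sketch of a REMI-style tokenization of symbolic music.
# Each note becomes event tokens: Bar, Position, Pitch, Duration.
# Time is quantized to a fixed grid (here 16 positions per 4/4 bar).

POSITIONS_PER_BAR = 16

def tokenize(notes):
    """notes: list of (start, pitch, duration) in grid units."""
    tokens = []
    current_bar = -1
    for start, pitch, duration in sorted(notes):
        bar = start // POSITIONS_PER_BAR
        while current_bar < bar:  # emit a Bar token for each new bar reached
            tokens.append("Bar")
            current_bar += 1
        tokens.append(f"Position_{start % POSITIONS_PER_BAR}")
        tokens.append(f"Pitch_{pitch}")
        tokens.append(f"Duration_{duration}")
    return tokens

# A C-major arpeggio over one bar (MIDI pitches 60, 64, 67):
print(tokenize([(0, 60, 4), (4, 64, 4), (8, 67, 8)]))
```

The design choice of interest for the talk is exactly this vocabulary: which musical attributes get their own tokens, and at what granularity, determines how expressive the resulting sequences are for a downstream Transformer.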
Bio:
After graduating from an aeronautical school (ISAE-SUPAERO, France), I shifted towards computer science applied to music with a master's degree at IRCAM (France), followed by an internship at Université de Lille on modelling orchestral texture (publication at DLfM 2022). I am now starting my second year as a PhD student at Université de Lille, working on Natural Language Processing methods applied to symbolic music.
Talk by Jingwei Zhao
Title: Symbolic Music Accompaniment Arrangement via Sequential Style Transfer with Prior
Abstract: Composition style transfer is a popular technique for conditional generation problems in music automation. Via disentanglement and manipulation of content and style, it provides an interpretable and controllable pathway to creating theme variations, re-harmonizations, and rearrangements of a piece of music. However, existing content-style disentanglement models only deal with short clips a few bars long, resulting in a patchwork-like arrangement when concatenated. In this talk, we present a novel idea, sequential style transfer with prior, to bridge this research gap. Our focus centres on the specific task of accompaniment arrangement (conditional on an input melody with chords), beginning with AccoMontage, a piano arranger that leverages chord-texture disentanglement and a primitive, rule-based style prior to maintain long-term texture structure. Subsequently, we introduce Q&A-XL, a multi-track orchestrator with a comprehensive latent style prior that characterizes the global structure of orchestration style. The complete end-to-end system, named AccoMontage-3, is capable of generating full-band accompaniment for whole pieces of music, with cohesive multi-track arrangement and coherent long-term structure.
Bio: Jingwei Zhao is a 3rd-year PhD student at the NUS Sound and Music Computing Lab, supervised by Prof Ye Wang. Before joining NUS, he worked as a research intern at NYU Shanghai, supervised by Prof Gus Xia. His research focuses on music representation learning, music generation, and co-creation, with an emphasis on controllability. He is also an accordion player.