Accent Conversion in Text-To-Speech Using Multi-Level VAE and Adversarial Training

Title: Accent Conversion in Text-To-Speech Using Multi-Level VAE and Adversarial Training
Publication Type: Conference Paper
Year of Publication: 2024
Authors: Melechovsky J., Mehrish A., Sisman B., Herremans D.
Conference Name: arXiv:2406.01018
Abstract

With rapid globalization, the need to build inclusive and representative speech technology cannot be overstated. Accent is an important aspect of speech that needs to be taken into consideration when building inclusive speech synthesizers. Inclusive speech technology aims to erase any biases towards specific groups, such as people with certain accents. We note that state-of-the-art Text-to-Speech (TTS) systems may not currently be suitable for all people, regardless of their background, as they are designed to generate high-quality voices without focusing on accent. In this paper, we propose a TTS model that utilizes a Multi-Level Variational Autoencoder with adversarial learning to address accented speech synthesis and conversion in TTS, with a vision for more inclusive systems in the future. We evaluate performance through both objective metrics and subjective listening tests. The results show an improvement in accent conversion ability compared to the baseline.
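To make the described combination concrete, the following is a minimal, entirely hypothetical PyTorch sketch (not the authors' code) of the general idea: a multi-level VAE encoder producing separate accent-level and speaker-level latents, with a gradient-reversal adversary that discourages the speaker latent from carrying accent information. All module and variable names here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch, not the paper's implementation: disentangling an
# accent latent from a speaker latent via a gradient-reversal adversary,
# in the spirit of a Multi-Level VAE with adversarial training.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) the gradient
    on the backward pass, turning a classifier into an adversary."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None


class MLVAEHead(nn.Module):
    """Toy two-level encoder head: one Gaussian latent per level
    (accent, speaker), plus an adversarial accent classifier applied to
    the speaker latent so the encoder is pushed to strip accent cues
    out of the speaker representation."""

    def __init__(self, d_in=80, d_z=16, n_accents=4):
        super().__init__()
        self.accent_mu = nn.Linear(d_in, d_z)
        self.accent_logvar = nn.Linear(d_in, d_z)
        self.speaker_mu = nn.Linear(d_in, d_z)
        self.speaker_logvar = nn.Linear(d_in, d_z)
        self.adv_clf = nn.Linear(d_z, n_accents)  # adversary head

    @staticmethod
    def reparam(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, feats, lam=1.0):
        z_acc = self.reparam(self.accent_mu(feats), self.accent_logvar(feats))
        z_spk = self.reparam(self.speaker_mu(feats), self.speaker_logvar(feats))
        # Gradient reversal: the classifier learns to predict accent from
        # z_spk, while the reversed gradient trains the encoder to remove
        # that information from z_spk.
        adv_logits = self.adv_clf(GradReverse.apply(z_spk, lam))
        return z_acc, z_spk, adv_logits
```

At inference time, accent conversion under this setup would amount to swapping `z_acc` for the latent of a target accent while keeping `z_spk` fixed; the adversarial term is what makes that swap clean, since a speaker latent entangled with accent would leak the source accent back into the output.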

URL: https://arxiv.org/abs/2406.01018