SNIPER Training: Variable Sparsity Rate Training For Text-To-Speech

Title: SNIPER Training: Variable Sparsity Rate Training For Text-To-Speech
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Lam P., Zhang H., Chen N.F., Sisman B., Herremans D.
Conference Name: arXiv 2211.07283
Abstract

Text-to-speech (TTS) models have achieved remarkable naturalness in recent years, yet like most deep neural models, they have more parameters than necessary. Sparse TTS models can improve on dense models via pruning and extra retraining, or converge faster than dense models at some cost in performance. Inspired by these results, we propose training TTS models using a decaying sparsity rate, i.e., a high initial sparsity to accelerate training first, followed by a progressive reduction of the rate to obtain better eventual performance. This decremental approach differs from current methods of incrementing sparsity to a desired target, which cost significantly more time than dense training. We call our method SNIPER training: Single-shot Initialization Pruning Evolving-Rate training. Our experiments on FastSpeech2 show that although we were only able to obtain better losses in the first few epochs before being overtaken by the baseline, the final SNIPER-trained models beat constant-sparsity models and narrowly outperform dense models.
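The core idea in the abstract, a sparsity rate that starts high and decays over training, can be sketched as a simple schedule plus a pruning mask. The names, the linear decay shape, and the use of a magnitude criterion below are illustrative assumptions; the paper itself derives its masks from SNIP-style single-shot saliency scores at initialization, not from weight magnitudes.

```python
import numpy as np

def sparsity_at_epoch(epoch, s_init=0.9, s_final=0.0, decay_epochs=10):
    """Linearly decay the sparsity rate from s_init down to s_final.

    Illustrative schedule only; the actual decay shape and
    hyperparameters are not taken from the paper.
    """
    if epoch >= decay_epochs:
        return s_final
    frac = epoch / decay_epochs
    return s_init + frac * (s_final - s_init)

def prune_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction.

    Magnitude pruning stands in here for SNIP's saliency criterion.
    """
    if sparsity <= 0.0:
        return np.ones_like(weights, dtype=bool)
    k = int(round(sparsity * weights.size))  # number of weights to drop
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold
```

In a training loop, the mask would be recomputed (or relaxed) as `sparsity_at_epoch` decreases, so early epochs train a small subnetwork and later epochs recover the dense model.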

URL: https://arxiv.org/abs/2211.07283