transcription

DiffRoll - Music Transcription with Diffusion

Great work by Cheuk Kin Wai on his latest paper, DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability.

Cheuk, K. W., Sawata, R., Uesaka, T., Murata, N., Takahashi, N., Takahashi, S., ... & Mitsufuji, Y. (2022). DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability. arXiv preprint arXiv:2210.05148.

Demo and source code are available here.
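
For intuition, here is a minimal, hypothetical sketch of the core idea behind diffusion-based transcription: treat the piano roll as the data to be denoised and condition the denoiser on the spectrogram. The ToyDenoiser module, the tensor shapes, and the noise schedule are illustrative stand-ins, not the architecture or hyperparameters used in DiffRoll.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    """Illustrative denoiser: predicts the noise added to a piano roll,
    conditioned on the input spectrogram and the diffusion timestep."""
    def __init__(self, n_bins=229, n_pitches=88):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pitches + n_bins + 1, 256),
            nn.ReLU(),
            nn.Linear(256, n_pitches),
        )

    def forward(self, noisy_roll, spec, t):
        # noisy_roll: (batch, frames, 88), spec: (batch, frames, 229), t: (batch,)
        t_feat = t.float().view(-1, 1, 1).expand(-1, noisy_roll.shape[1], 1)
        return self.net(torch.cat([noisy_roll, spec, t_feat], dim=-1))

def diffusion_training_step(model, roll, spec, n_steps=1000):
    """One DDPM-style step: noise the roll at a random timestep and train the
    model to predict that noise, conditioned on the spectrogram."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, n_steps, (roll.shape[0],))
    a_bar = alphas_cumprod[t].view(-1, 1, 1)
    noise = torch.randn_like(roll)
    noisy_roll = a_bar.sqrt() * roll + (1.0 - a_bar).sqrt() * noise
    return F.mse_loss(model(noisy_roll, spec, t), noise)

model = ToyDenoiser()
roll = torch.randint(0, 2, (4, 100, 88)).float() * 2 - 1  # piano roll scaled to [-1, 1]
spec = torch.randn(4, 100, 229)                           # stand-in for a mel spectrogram
diffusion_training_step(model, roll, spec).backward()
```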

ReconVAT presented at ACM Multimedia

Congrats to Kin Wai Cheuk on his paper 'ReconVAT: A Semi-Supervised Automatic Music Transcription Framework for Low-Resource Real-World Data', published at the ACM Multimedia conference (A*). If you are interested in training low-data music transcription models with semi-supervised learning, check out the full paper here or access the preprint.

Watch Raven's talk here:
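
As a rough illustration of the semi-supervised setting, the sketch below combines a supervised transcription loss on labelled spectrogram/roll pairs with a consistency term on unlabelled audio. The random perturbation is a simple stand-in for the virtual adversarial perturbation and reconstruction objective used in ReconVAT, and the model and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, spec_l, roll_l, spec_u, noise_scale=0.1, weight=1.0):
    """Supervised binary cross-entropy on labelled spectrogram/roll pairs plus a
    consistency term that asks the model to keep its predictions stable under a
    small perturbation of unlabelled spectrograms."""
    sup = F.binary_cross_entropy_with_logits(model(spec_l), roll_l)

    with torch.no_grad():                        # predictions on clean unlabelled input
        target = torch.sigmoid(model(spec_u))
    perturbed = spec_u + noise_scale * torch.randn_like(spec_u)
    cons = F.binary_cross_entropy_with_logits(model(perturbed), target)

    return sup + weight * cons

model = torch.nn.Linear(229, 88)                 # stand-in for a transcription network
spec_l = torch.randn(4, 100, 229)                # labelled spectrograms
roll_l = torch.randint(0, 2, (4, 100, 88)).float()
spec_u = torch.randn(4, 100, 229)                # unlabelled spectrograms
semi_supervised_loss(model, spec_l, roll_l, spec_u).backward()
```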

The Effect of Spectrogram Reconstructions on Automatic Music Transcription

Congrats to Kin Wai (Raven) on this interesting paper on leveraging spectrogram reconstruction for music transcription, which was accepted at the International Conference on Pattern Recognition (ICPR2020). Read the preprint here.

Cheuk, K. W., Luo, Y.-J., Benetos, E., & Herremans, D. (2021). The Effect of Spectrogram Reconstructions on Automatic Music Transcription: An Alternative Approach to Improve Transcription Accuracy. Proceedings of the International Conference on Pattern Recognition (ICPR2020).
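
To make the idea concrete, here is a toy two-headed model in which a reconstruction head maps the predicted piano roll back to a spectrogram, so the transcriber is also trained through a reconstruction loss. The linear layers and tensor shapes are placeholders rather than the networks studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TranscriberWithReconstruction(nn.Module):
    """Toy two-headed model: one head predicts the piano roll from the
    spectrogram, a second head reconstructs the spectrogram from that
    prediction, so transcription is also shaped by a reconstruction loss."""
    def __init__(self, n_bins=229, n_pitches=88):
        super().__init__()
        self.transcriber = nn.Linear(n_bins, n_pitches)
        self.reconstructor = nn.Linear(n_pitches, n_bins)

    def forward(self, spec):
        roll_logits = self.transcriber(spec)
        spec_hat = self.reconstructor(torch.sigmoid(roll_logits))
        return roll_logits, spec_hat

model = TranscriberWithReconstruction()
spec = torch.randn(4, 100, 229)
roll = torch.randint(0, 2, (4, 100, 88)).float()

roll_logits, spec_hat = model(spec)
loss = (F.binary_cross_entropy_with_logits(roll_logits, roll)  # transcription term
        + F.mse_loss(spec_hat, spec))                          # reconstruction term
loss.backward()
```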