journal

PyTorch GPU based audio processing toolkit: nnAudio

Looking for a tool to extract spectrograms on the fly, integrated as a layer in PyTorch? Look no further than nnAudio, a toolbox developed by PhD student Raven (Cheuk Kin Wai): https://github.com/KinWaiCheuk/nnAudio

nnAudio is available on pip (pip install nnaudio); full documentation is available on the GitHub page. Also check out our dedicated paper:
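The core idea — computing spectrograms on the GPU as a layer inside a PyTorch model — can be sketched in plain PyTorch. Note this is an illustrative sketch built on torch.stft, not nnAudio's actual API (nnAudio provides richer layers such as Mel spectrograms and CQT, with optional trainable kernels); the class name STFTLayer is made up here.

```python
import torch
import torch.nn as nn

class STFTLayer(nn.Module):
    """Illustrative sketch: a magnitude spectrogram as an nn.Module,
    so it runs on the GPU and slots into a model like any other layer."""

    def __init__(self, n_fft=2048, hop_length=512):
        super().__init__()
        self.n_fft = n_fft
        self.hop_length = hop_length
        # Register the window as a buffer so it follows .to(device) / .cuda()
        self.register_buffer("window", torch.hann_window(n_fft))

    def forward(self, waveform):
        # waveform: (batch, samples)
        spec = torch.stft(
            waveform,
            self.n_fft,
            hop_length=self.hop_length,
            window=self.window,
            return_complex=True,
        )
        # Magnitude spectrogram: (batch, n_fft // 2 + 1, n_frames)
        return spec.abs()
```

Because the transform is just a module, it composes with the rest of a network (e.g. `nn.Sequential(STFTLayer(), SomeCNN())`), which is the convenience nnAudio offers out of the box.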

New paper on Singing Voice Estimation in Neural Computing and Applications (Springer)

Together with Edward Lin, Enyan Koh, and Dr. Balamurali BT from SUTD, and Dr. Simon Lui from Tencent Music (formerly SUTD), we published a paper on using an ideal binary mask with a CNN to separate the singing voice from its musical accompaniment:

Lin K.W.E., Balamurali B.T., Koh E., Lui S., Herremans D. In Press. Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy. Neural Computing and Applications. DOI: 10.1007/s00521-018-3933-z.
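For readers unfamiliar with the ideal binary mask (IBM), here is a toy NumPy illustration of the standard textbook definition — a 1 in every time-frequency bin where the target (vocal) magnitude exceeds the interference (accompaniment) magnitude, 0 elsewhere. This is only the definition of the training target, not the paper's CNN implementation, and the array values are made up.

```python
import numpy as np

def ideal_binary_mask(vocal_mag, accomp_mag):
    """IBM: 1 where the vocal dominates a time-frequency bin, else 0.
    In the paper, a CNN is trained to predict this mask from the mixture."""
    return (vocal_mag > accomp_mag).astype(np.float32)

# Toy magnitude spectrograms (frequency bins x time frames)
vocal = np.array([[3.0, 0.5],
                  [1.0, 2.0]])
accomp = np.array([[1.0, 1.0],
                   [2.0, 0.5]])

mask = ideal_binary_mask(vocal, accomp)

# Applying the mask to the mixture keeps only vocal-dominated bins
mixture = vocal + accomp
vocal_estimate = mask * mixture
```

Multiplying the predicted mask with the mixture spectrogram (and inverting with the mixture phase) yields the separated vocal estimate.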

New Frontiers in Psychology paper on A Novel Graphical Interface for the Analysis of Music Practice Behaviors

The paper I wrote together with Janis Sokolovskis and Elaine Chew from QMUL, called A Novel Interface for the Graphical Analysis of Music Practice Behaviours, was just published in Frontiers in Psychology - Human-Media Interaction. Read the full article here or download the pdf.

New article on the MorpheuS music generation system in IEEE Transactions on Affective Computing

I'm pleased to announce the latest article I wrote together with Prof. Elaine Chew on MorpheuS, published in IEEE Transactions on Affective Computing. The paper explains the inner workings of MorpheuS, a music generation system that can generate pieces with a fixed pattern structure and a given tension profile.

Herremans D., Chew E. 2017. MorpheuS: generating structured music with constrained patterns and tension. IEEE Transactions on Affective Computing. PP (In Press)(99)

New Survey article on music generation in ACM Computing Surveys

A new journal article was published that I wrote together with Prof. Ching-Hua Chuan and Prof. Elaine Chew. The article surveys current music generation systems from a functional point of view, giving an overview of current challenges and opportunities in the field. It covers systems ranging from game music, to real-time improvisation systems, to emotional movie music generation.

Call for Papers: Special Issue on Deep Learning for Music and Audio in Springer’s Neural Computing and Applications

Special Issue on Deep Learning for Music and Audio
in Springer's Neural Computing and Applications (Impact factor: 2.50)

Submission deadline: December 17th

Description and covered topics