Held in conjunction with the 2017 International Joint Conference on Neural Networks (IJCNN 2017)
There has been tremendous interest in deep learning across many fields of study. Recently, these techniques have also gained popularity in the field of music. Projects such as Magenta (Google Brain's music generation project), Jukedeck, and others testify to their potential.
While humans can rely on their intuitive understanding of musical patterns and the relationships between them, it remains a challenging task for computers to capture and quantify musical structures. Recently, researchers have attempted to use deep learning models to learn features and relationships that allow us to accomplish tasks in music transcription, audio feature extraction, emotion recognition, music recommendation, and automated music generation.
With this workshop we aim to advance the state-of-the-art in machine intelligence for music by bringing together researchers in the field of music and deep learning. This will enable us to critically review and discuss cutting-edge research so as to identify grand challenges, effective methodologies, and potential new applications.
Papers and abstracts on the application of deep learning techniques to music are welcome, including but not limited to:
Bio Sageev Oore completed an undergraduate degree in Mathematics (Dalhousie), and MSc and PhD degrees in Computer Science (University of Toronto), working with Geoffrey Hinton. He studied piano with both classical and jazz teachers from schools including Dalhousie, Juilliard, UBC and York University (Toronto), and has performed as a soloist with orchestras both as a classical pianist and as a jazz improviser. His academic research has spanned from minimally-supervised learning for robot localization to adaptive real-time control of 3D graphical models. Together with his brother Dani, he co-created a duo instrumental CD combining classical art songs with improvisation. Recently, Sageev's long-standing interest in combining machine learning and music surpassed his long-standing resistance to that same topic. Sageev is a professor of computer science at Saint Mary's University (Canada), and is currently a visiting research scientist on the Magenta team (led by Douglas Eck) at Google Brain, working on the application of deep learning approaches to music-related data.
Bio Oriol Nieto, born in Barcelona in 1983, is a data scientist at Pandora. He obtained his Ph.D. in Music Data Science from the Music and Audio Research Lab at NYU (New York, NY, USA) in 2015. He holds an M.A. in Music, Science and Technology from Stanford University (Stanford, CA, USA), an M.Sc. in Information Technologies from Pompeu Fabra University (Barcelona, Spain), and a B.Sc. in Computer Science from Polytechnic University of Catalonia (Barcelona, Spain). His research focuses on topics such as music information retrieval, large-scale recommendation systems, and machine learning, with special emphasis on deep architectures. He plays guitar, violin, and sings (and screams) in his spare time.
Bio Kat Agres received her PhD in Experimental Psychology from Cornell University in 2012. She also holds a bachelor's degree in Cognitive Psychology and Cello Performance from Carnegie Mellon University, and has received numerous grants to support her research, including a Fellowship from the National Institutes of Health. She recently finished a postdoctoral research position at Queen Mary University of London, where she was supported by a European Union Seventh Framework Programme grant investigating Computational Creativity. Her research explores a wide range of topics, including music cognition, computational models of music perception, auditory learning and memory, and computational creativity. She has presented her work at international workshops and conferences in over a dozen countries. In January 2017, Kat joined the A*STAR Institute of High Performance Computing (Social & Cognitive Computing Department) in Singapore to start a program of research focused on music cognition.
Workshop program
Download the technical program
Proceedings
Download the Proceedings
Registration
Please register for the workshop at the main conference website.
Submissions of Papers
Papers of up to 5 pages using the following template are welcome for a talk. Submissions will be evaluated according to their originality, technical soundness, and relevance to the workshop. The guidelines in the workshop's LaTeX template should be followed. Contributions should be in PDF format and submitted to d.herremans@qmul.ac.uk with the subject line: [DLM17 paper submission]. Submissions do not need to be anonymized. Papers will be peer-reviewed and published in the proceedings of the workshop.
Submissions of Abstracts
Structured abstracts of up to 2 pages can be submitted for a shorter talk. Abstracts should follow the same template as the papers, should be in PDF format, and should be submitted to d.herremans@qmul.ac.uk with the subject line: [DLM17 abstract submission]. Abstracts will be peer-reviewed and included in the proceedings of the workshop.
Special Issue in Journal
Authors will be invited to submit a full-paper version of their extended abstract for a special issue on deep learning for music and audio in Springer's Neural Computing and Applications.
Programme Committee
Dorien Herremans (Queen Mary University of London, UK)
Ching-Hua Chuan (University of North Florida, US)
Louis Bigo (Université Lille 3, France)
Maarten Grachten (Austrian Research Institute for Artificial Intelligence, Austria)
Sebastian Stober (University of Potsdam, Germany)
Important Dates
Paper Submission Deadline: March 12, 2017
Acceptance Notification: April 1, 2017 (due to extended deadline)
Final versions due: April 23, 2017
Workshop Date: May 18-19, 2017
Registration
Workshop registration will be handled by the main conference; please check the IJCNN website for more details.
Dorien Herremans is currently a Marie Skłodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London, where she is working on the project "MorpheuS: Hybrid Machine Learning – Optimization Techniques To Generate Structured Music Through Morphing And Fusion". She received her Ph.D. on the topic of Computer Generation and Classification of Music through Operations Research Methods. She graduated as a commercial engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. She also worked as a teaching and research assistant at the University of Antwerp, in the domains of operations management, supply chain management, and operations research. Dr. Herremans' research interests include machine learning for automatic music generation, data mining for music classification (hit prediction), and novel applications at the intersection of machine learning/optimization and music. Dr. Herremans is a member of the IEEE and is on the organizing committee of ORBEL26 (Conference of the Belgian Operations Research Society). She also serves as a program committee member of the International Society for Music Information Retrieval, the International Conference on Principles and Practice of Constraint Programming, the International Conference on Mathematics and Music, and the International Workshop on Music and Machine Learning (part of ECML/PKDD).
Ching-Hua Chuan is an associate professor of computing at the University of North Florida. She received her Ph.D. in computer science from the University of Southern California's Viterbi School of Engineering (Los Angeles, CA, USA), and her B.S. and M.S. degrees in electrical engineering from National Taiwan University. Dr. Chuan's research interests include audio signal processing, music information retrieval, artificial intelligence, and machine learning. She has published refereed articles in journals and conferences on audio content analysis, style-specific music generation, machine learning applications, and music and multimedia information retrieval. She was the recipient of the best new investigator paper award at the Grace Hopper Celebration of Women in Computing in 2010. Dr. Chuan has served on the program committees of the International Society for Music Information Retrieval, the International Conference on Mathematics and Computation in Music, the International Conference on New Interfaces for Musical Expression, the IEEE International Workshop on Multimedia Information Processing and Retrieval, and the International ACM Workshop on Music Information Retrieval with User-Centred and Multimodal Strategies, as well as on the scientific advisory committee for the Music Similarity workshop at the Lorentz Center. She is also the founder of Women in Music Information Retrieval (WiMIR) and the co-director of the Florida regional Botball educational robotics program.
You can contact the organizers through d [dot] herremans [a] qmul.ac.uk.