First Call for Papers: 4th Workshop on NLP for Music and Audio (NLP4MusA 2026)

Co-located with EACL 2026, Rabat, Morocco & Online | March 24–29, 2026

Website:
https://sites.google.com/view/nlp4musa-2026/home
Submission Page:
https://openreview.net/group?id=eacl.org/EACL/2026/Workshops/NLP4MusA

Shared Task: Conversational Music Recommendation Challenge (Music-CRS)
- Challenge information:
https://sites.google.com/view/nlp4musa-2026/shared-task
- Baselines:
https://github.com/nlp4musa/music-crs-baselines
- Evaluations:
https://github.com/nlp4musa/music-crs-evaluator

Contact: nlp4musa2026@gmail.com


== About the Workshop ==

Building on a tradition of cross-disciplinary impact, the intersection of NLP with music and audio-based creative media presents a frontier full of unique challenges and exciting opportunities. The Fourth Workshop on Natural Language Processing for Music and Audio (NLP4MusA) aims to explore the multimodal synergies between language, music, and sound. As NLP increasingly enables domains where language and interaction converge, the entertainment industry offers a particularly compelling case: most audio content - such as songs or podcasts - contains an inherent linguistic dimension, while user engagement often occurs through language, from search queries to social media conversations.

We welcome submissions on topics such as:

NLP for Music and Audio Understanding
- Music Tagging and Auto-tagging, Knowledge Graph Construction, Semantic Ontologies
- Information Extraction, Named Entity Recognition, and Entity Linking
- Multimodal Representation Learning, Lyrics and Symbolic Representation Analysis
- Emotion and Sentiment Analysis, Culture-specific Music Understanding, Corpora Bias
- Music Captioning and Description Generation


NLP for Music Retrieval or Recommendation
- Conversational Interfaces, Query Understanding and Intent Prediction
- Multimodal, Cross-modal Music Information Retrieval and Recommender Systems
- Natural Language User Modeling
- Music Question Answering
- Fairness and Transparency

NLP for Music and Audio Generation
- Lyrics Generation, Audio/Symbolic Query-driven Music Generation
- Synthetic Music Content Detection


== Submission Instructions ==

We invite short papers of up to 4 pages (excluding references and appendices). Final versions will be given one additional page of content so that reviewers' comments can be taken into account. Accepted papers will be published in the workshop proceedings (ACL Anthology) and presented orally or as posters.

The review process will be double-blind. Submissions should adhere to the ACL Anthology formatting guidelines. A LaTeX template is available here (no Word template is provided):
https://github.com/acl-org/acl-style-files

Shared task papers should be submitted as a 2-page report describing the solution, using the same LaTeX template as above (see specific instructions on the website). The best works will be selected for oral or poster presentations.


== Key Dates (tentative, AoE) ==

Direct Submission deadline: December 19, 2025
Notification of acceptance: January 23, 2026
Camera-ready paper due: February 3, 2026
Workshop dates: March 24–29, 2026

Shared Task: Important Dates

Shared task release: October 15, 2025
Submission site opens: December 1, 2025
Blind evaluation dataset release: December 1, 2025
Final submission deadline: December 19, 2025
Results notification: January 23, 2026


== Organizers ==

Elena V. Epure, Deezer
Sergio Oramas, SiriusXM
SeungHeon Doh, KAIST
Anna Kruspe, Munich University of Applied Sciences
Mohamed Sordo, SiriusXM