We invite the broader NLP community to participate in our Shared Task at The 13th Argument Mining and Reasoning Workshop, co-hosted with ACL 2026 in San Diego 🏖️, United States!
https://argmining-org.github.io/2026/index.html#shared_task
The shared task focuses on understanding argumentative structure in highly formal, legal-political United Nations resolutions. Participants are expected to build LLM-based systems to: 1) identify and classify argumentative paragraphs in preambles and operative sections; 2) predict argumentative relations between paragraphs.
📅 Important Dates
1 Feb: Train and test data release
18 March: Evaluation and submission start
1 April: Submission ends
15 April: Evaluation ends; results notification
24 April: Paper submission due
1 May: Reviews to authors
12 May: Camera-ready version due
July: ArgMining 2026 Workshop
🔗 Further details, data access, and submission instructions on the shared task page: https://shared-task-argmining.linguistik.uzh.ch/
Organizers: Yingqiang Gao, Anastassia Shaitarova, Reto Gubelmann, Patrick Montjouridès, Department of Computational Linguistics, University of Zurich (UZH)
We welcome participation from researchers and practitioners across related areas such as argument mining, LLM reasoning, and information retrieval. We look forward to receiving your submission!
Dear colleagues,
We are excited to extend our invitation for submission of *abstracts* for
the Industry Day at LREC 2026.
As many of you know, the Industry Day is designed to highlight real-world
NLP applications, practices and lessons learned, offering a space for
organisations to share practical insights, case studies, and perspectives
that can help to bridge the gaps between academic research and industry. In
particular, this year's Industry Day is also intended to act as a
networking platform for conference participants, experts, and professionals
to foster meaningful collaborations.
We particularly welcome abstracts addressing topics such as:
- Applied NLP systems and large-scale deployments
- Enterprise use cases and industry challenges
- Evaluation, scalability, and reliability in production environments
- Responsible AI, governance, and compliance in NLP
- Emerging trends and future directions from an industry perspective
- Speech Resources and Processing
- Less-Resourced/Endangered/Less-studied Languages
Submitted abstracts should emphasise practical experience, impact, and
insights rather than purely theoretical/empirical contributions.
Submission details
- Abstract length: 150-200 words
- Submission deadline: February 16th 2026
- Notification of acceptance: March 13th 2026
- Presentation format: oral
Accepted contributions will be featured as part of the Industry Day program
and presented to a diverse audience of students, industry leaders and
experts. Only an abstract is required for a presentation proposal (not a full paper). Unlike LREC main submissions, these contributions will *not be included* in the conference proceedings or the ACL Anthology.
We would be delighted to include your perspective and encourage you to
share this invitation with relevant colleagues.
For submission guidelines and a broader list of LREC 2026 topics, please
visit: https://lrec2026.info/call-for-industry-day-talks/
Looking forward to seeing you in Mallorca in May!
Teresa Lynn & Natalie Schluter
Co-Chairs, LREC 2026 Industry Day
Dear all,
We are delighted to offer a fully funded PhD studentship starting in October 2026, open to UK and international applicants.
We welcome applications from outstanding candidates with experience in corpus linguistics, language testing, or quantitative applied linguistics, including skills in e.g. corpus design, statistical analysis, and tools such as R, Python or #LancsBox X.
Delivered in collaboration with Trinity College London, the PhD will explore communicative competence in English language tests, with implications for fairness, accessibility and social mobility. The successful candidate will be based in Lancaster and work regularly on campus as part of the CASS team.
Further details and application information are available here:
https://cass.lancs.ac.uk/fully-funded-phd-opportunity-at-the-cass-research-…
Please feel free to share this opportunity.
Best,
Vaclav
Professor Vaclav Brezina
Professor in Corpus Linguistics
Co-Director of the ESRC Centre for Corpus Approaches to Social Science
Faculty of Humanities, Arts and Social Sciences, Lancaster University
Lancaster, LA1 4YD
Office: County South, room B46
T: +44 (0)1524 510828
@vaclavbrezina
================================================
Transactions on Graph Data & Knowledge (TGDK)
https://www.dagstuhl.de/tgdk
Special Issue: Neuro-Symbolic Modeling for Human-Centric AI
https://www.dagstuhl.de/en/institute/news/2026/tgdk-cfp-special-issue-neuro…
Submissions due: June 30th, 2026
================================================
In recent years, the alignment of Artificial Intelligence technologies
with people’s behaviors and worldviews has become a central topic across
several sectors of Computer Science. The pervasive diffusion of Large
Language Models (LLMs) inside and outside the academic sector requires
significant efforts to ensure fairness and representativeness towards all
social and cultural groups, taking into account the different identities
that characterize the potential end-users of these technologies.
This special issue welcomes contributions on the development of
graph-based abstractions and implementations of graph-based approaches
for human-centered AI. It welcomes hybrid neuro-symbolic and graph-based
approaches focused on knowledge reasoning for learning, and learning
approaches for reasoning, as well as the design and curation of
graph-based data and semantic models to explore the inclusion and
representation of human identities in AI systems.
== Scope ==
This special issue solicits submissions of research, resource and survey
articles that conform to the scope of TGDK on the following specific topics:
Ontology modeling and knowledge representation for Human-Centric AI
* Knowledge representation for reducing bias in AI
* Ontologies of identity dimensions and psychology for AI
* Ontologies of sociological and communication theories for AI
* Linked Data approaches for Human-Centric AI
Data quality, integration and provenance for Human-Centric AI
* FAIR and CARE principles for AI models
* Graph-based provenance approaches for AI models
* Incorporating cultural metadata into AI workflows
* KG-driven approaches for bias detection and mitigation in archives
LLM integration with graph-structured knowledge for the design of fair
AI technologies
* Question answering with LLMs and graph-structured knowledge
* Reducing LLM hallucinations with graph-structured knowledge
* Injecting graph-structured knowledge into LLMs
* Retrieval-Augmented Generation using graph-structured knowledge
* Enhancing graph-structured knowledge using LLMs
Logic and reasoning for Explainable AI
* Logic-based methods for governance of AI
* Logic-based methods for ethical AI frameworks
* Logic-based methods for legal compliance of AI
* Extraction of logic-based representations for explainable AI
* Graph-based constraint languages for explainable AI
== Guest Editors ==
* Stefano De Giorgis, Vrije Universiteit Amsterdam, Netherlands
* Marco Antonio Stranisci, University of Turin, Italy
* Luana Bulla, University of Bologna, Italy
* Lia Draetta, University of Turin, Italy
* Rossana Damiano, University of Turin, Italy
* Filip Ilievski, Vrije Universiteit Amsterdam, Netherlands
== Timeline ==
* Submissions: June 30, 2026
* Author Notifications: September 30, 2026
* Revisions: October 31, 2026
* Notifications after revision: November 30, 2026
* Publication: Q4 2026 / Q1 2027
== Submission ==
Please follow the submission instructions for TGDK and select the
corresponding Special Issue:
https://drops.dagstuhl.de/entities/journal/TGDK#author
As a Diamond Open Access journal, official versions of accepted papers
(as accessible via DOI) are published and made available for free online
*without fees for authors or readers*.
Marco,
UNITO <https://www.unito.it/persone/mstranis> and aequa-tech
<https://aequa-tech.com/>
CFP: LT4HALA 2026 - The Fourth Workshop on Language Technologies for Historical and Ancient Languages
* Website: https://circse.github.io/LT4HALA/2026/
* Date: 11 May 2026
* Place: co-located with LREC 2026, 11-16 May 2026, Palma, Mallorca (Spain)
* Submission page: https://softconf.com/lrec2026/LT4HALA2026/
* Submission deadline: 17 February 2026
DESCRIPTION
LT4HALA 2026 is a one-day workshop that seeks to bring together scholars who are developing and/or using Language Technologies (LTs) for historically attested languages, so as to foster cross-fertilization between the Computational Linguistics community and the areas of the Humanities dealing with historical linguistic data, e.g. historians, philologists, linguists, archaeologists and literary scholars. LT4HALA 2026 follows LT4HALA 2020, 2022 and 2024, which were organized in the context of LREC 2020, LREC 2022 and LREC-COLING 2024, respectively. Despite the current availability of large collections of digitized texts written in historical languages, such interdisciplinary collaboration is still hampered by the limited availability of annotated linguistic resources for most historical languages. Creating such resources is a challenge and an obligation for LTs, both to support historical linguistic research with the most up-to-date technologies and to preserve the precious linguistic data that survived from past times.
Relevant topics for the workshop include, but are not limited to:
* creation and annotation of linguistic resources (both lexical and textual);
* the role of digital infrastructures, such as CLARIN<https://www.clarin.eu/>, in supporting research based on language resources for historical and ancient languages;
* handling spelling variation;
* detection and correction of OCR errors;
* deciphering;
* morphological/syntactic/semantic analysis of textual data;
* adaptation of tools to address diachronic/diatopic/diastratic variation in texts;
* teaching ancient languages with LTs;
* NLP-driven theoretical studies in historical linguistics;
* NLP-driven analysis of literary ancient texts;
* evaluation of LTs designed for historical and ancient languages;
* LLMs for the automatic analysis of ancient texts.
SHARED TASKS
LT4HALA 2026 will host:
* the 4th edition of EvaLatin<https://circse.github.io/LT4HALA/2026/EvaLatin>, a campaign entirely devoted to the evaluation of NLP tools for Latin. This new edition will focus on two tasks: dependency parsing and Named Entity Recognition. Dependency parsing will be based on the Universal Dependencies framework.
* the 5th edition of EvaHan<https://circse.github.io/LT4HALA/2026/EvaHan>, the campaign for the evaluation of NLP tools for Ancient Chinese. EvaHan 2026 will focus on Ancient Chinese OCR (Optical Character Recognition) Evaluation.
* the 2nd edition of EvaCun<https://circse.github.io/LT4HALA/2026/EvaCun>, the campaign for the evaluation of Ancient Cuneiform Languages, with shared tasks on transliteration normalization, morphological analysis and lemmatization, Named Entity Recognition of Akkadian and/or Sumerian.
SUBMISSIONS
Submissions should be 4 to 8 pages in length and follow the LREC stylesheet (see below). The maximum number of pages excludes potential Ethics Statements and discussion on Limitations, acknowledgements and references, as well as data and code availability statements. Appendices or supplementary material are not permitted during the initial submission phase, as papers should be self-contained and reviewable on their own.
Papers must present original, previously unpublished work. Papers must be anonymized to support double-blind reviewing; submissions thus must not include authors’ names and affiliations. Submissions should also avoid links to non-anonymized repositories: code should be submitted either as supplementary material in the final version of the paper, or as a link to an anonymized repository (e.g., Anonymous GitHub or Anonym Share). Papers that do not conform to these requirements will be rejected without review.
Submissions should follow the LREC stylesheet, which is available on the LREC 2026 website on the Author’s kit page<https://lrec2026.info/authors-kit/>.
Each paper will be reviewed by three independent reviewers.
Accepted papers will appear in the workshop proceedings, which include both oral and poster papers in the same format. Determination of the presentation format (oral vs. poster) is based solely on an assessment of the optimal method of communication (more or less interactive), given the paper content.
As for the shared tasks, participants will be required to submit a technical report for each task (with all the related sub-tasks) they took part in. Technical reports will be included in the proceedings as short papers: the maximum length is 4 pages (excluding references) and they should follow the LREC 2026 official format. Reports will receive a light review (we will check for the correctness of the format, the exactness of results and ranking, and overall exposition). All participants will have the possibility to present their results at the workshop. Reports of the shared tasks are not anonymous.
WORKSHOP IMPORTANT DATES
* 17 February 2026: submissions due
* 13 March 2026: reviews due
* 16 March 2026: notifications to authors
* 27 March 2026: camera-ready due
Shared tasks deadlines are available in the specific web pages: EvaLatin, EvaHan, EvaCun.
Identify, Describe and Share your LRs!
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of your research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable their reuse and replicability of experiments (including evaluation ones).
*Homophobia and Transphobia Meme Classification | LT-EDI @ ACL 2026*
We are pleased to invite the research community to participate in the
LT-EDI @ ACL 2026 shared task on Homophobia and Transphobia Meme
Classification, which addresses harmful multimodal content targeting LGBTQ+
individuals and communities.
Memes function as compact multimodal communication units that combine
visual and textual cues. They spread rapidly across cultures and languages.
This combination enables both subtle and explicit forms of discrimination.
The shared task focuses on the automatic identification of homophobic and
transphobic content in memes.
*📝 Task Description*
The shared task focuses on multiclass meme classification for detecting anti-LGBT content. Participants are provided with multimodal memes and are required to classify each meme into one of the predefined categories based on the presence of discriminatory content.
*Labels:* Homophobia, Transphobia, Non-anti-LGBT
*Languages:* English, Hindi, and Chinese
*Description:* Separate datasets are released for each language, enabling
analysis across culturally distinct meme collections. The task requires
participants to identify discriminatory stereotypes, harmful visual
elements, and derogatory textual cues embedded in memes. All training and
test datasets are developed following culturally sensitive and ethical
annotation practices. The task emphasizes robust multimodal understanding
across diverse cultural contexts.
*📚 Resources*
🔗 Competition link (Codabench):
https://www.codabench.org/competitions/11335/
🔗 Task website: https://sites.google.com/view/lt-edi-2026/shared-tasks
*🗓️ Important Dates*
Task announcement: November 16, 2025
Training data release: November 25, 2025
Test data release: January 20, 2026
Run submission deadline: February 10, 2026
Results announcement: February 16, 2026
Paper submission deadline: March 5, 2026
Peer review notification: April 28, 2026
Camera-ready submission: May 12, 2026
Workshop dates: July 2–3, 2026
With regards,
Dr. Bharathi Raja Chakravarthi,
Assistant Professor / Lecturer-above-the-bar
Programme Director (MSc Computer Science - Artificial Intelligence)
<https://www.universityofgalway.ie/courses/taught-postgraduate-courses/compu…>
School of Computer Science, University of Galway, Ireland
Insight SFI Research Centre for Data Analytics, Data Science Institute,
University of Galway, Ireland
E-mail: bharathiraja.akr(a)gmail.com , bharathi.raja(a)universityofgalway.ie
<bharathiraja.asokachakravarthi(a)universityofgalway.ie>
Google Scholar: https://scholar.google.com/citations?user=irCl028AAAAJ&hl=en
Website:
https://research.universityofgalway.ie/en/persons/bharathi-raja-asoka-chakr…
The next meeting of the Edge Hill Corpus Research Group will take place online (via MS Teams) on Friday 6 February 2026, 2:00-3:30 pm (GMT<https://time.is/United_Kingdom>).
Topic: Discourse Oriented Corpus Studies
Speaker: Dan Malone<https://www.researchgate.net/profile/Daniel-Malone> (Edge Hill University, UK)
Title: From Global Uncertainty to Domestic Danger: The lone wolf terrorist as a topos of threat in (poly)crisis discourses
The abstract and registration link are here: https://sites.edgehill.ac.uk/crg/next
Attendance is free. Registration closes on Wednesday 4 February.
If you have problems registering, or have any questions, please email the organiser, Costas Gabrielatos (gabrielc(a)edgehill.ac.uk).
*** First Call for Replication and Negative Results ***
37th IEEE International Symposium on Software Reliability Engineering
(ISSRE 2026)
October 20-23, 2026, 5* St. Raphael Resort and Marina
Limassol, Cyprus
https://cyprusconferences.org/issre2026/
The Replications and Negative Results (RENE) Track has been established in the software
engineering community for some time and has received overwhelmingly positive feedback. This
year, we introduce this track at ISSRE and invite researchers to (1) replicate results from
previous papers and (2) publish studies with important and relevant negative or null
results (results that fail to show an effect, yet demonstrate the research paths that did not
pay off).
We also encourage the publication of the negative results or replicable aspects of
previously published work. For example, authors of a published paper reporting a working
solution for a given problem can document in a “negative results paper” other (failed)
attempts they made before defining the working solution they published.
• Replication studies. The papers in this category must go beyond simply re-
implementing an algorithm and/or re-running the artifacts provided by the original paper.
Such submissions should at least apply the approach to new data sets (open-source or
proprietary). A replication study should clearly report on results that the authors were
able to replicate, as well as on the aspects of the work that were not replicable.
• Negative results papers. We seek papers that report on negative results, across all
types of empirical research (qualitative, quantitative, case study, experiment, etc.).
For example, did your controlled experiment not show an improvement over the baseline?
Even if negative, the results obtained are still valuable when they are either not
obvious or disprove widely accepted wisdom.
Evaluation Criteria
Both Replication Studies and Negative Results submissions will be evaluated according to
the following standards:
• Depth and breadth of the empirical studies
• Clarity of writing
• Appropriateness of conclusions
• Amount of useful, actionable insights
• Availability of artifacts
• Underlying methodological rigor. A negative result due primarily to misaligned
expectations or due to lack of statistical power (small samples) is not a good submission.
The negative result should be a result of a lack of effect, not a lack of methodological
rigor.
Most importantly, we expect replication studies to clearly point out the artifacts upon
which the study is built, and to provide links to all the artifacts in the submission (the
only exception will be given to papers that replicate results on proprietary
datasets that cannot be publicly released).
Submission Instructions
Submissions must be original, in the sense that the findings and writing have not been
previously published and are not under consideration elsewhere. However, as either
replication studies or negative results, some overlap with previous work is expected.
Please make clear in the paper the overlap with, and differences from, previous work.
All submissions must be in PDF format and conform, at time of submission, to the IEEE
Computer Society Format Guidelines:
(https://www.ieee.org/conferences/publishing/templates).
Authors are strongly encouraged to print the PDF and review it for integrity (fonts,
symbols, equations, etc.) before submission, as defective printing can undermine a
paper’s chance of success. By submitting to the ISSRE RENE Track, authors acknowledge
that they are aware of and agree to be bound by the IEEE Plagiarism FAQ. In particular,
papers submitted to the RENE track must not have been published elsewhere and must not
be under review or submitted for review elsewhere whilst under consideration for ISSRE
2026. Contravention of this concurrent submission policy will be deemed a serious breach
of scientific ethics, and appropriate action will be taken in all such cases. To check for
double submission and plagiarism issues, the chairs reserve the right to (1) share the list
of submissions with the PC Chairs of other conferences with overlapping review periods
and (2) use external plagiarism detection software, under contract to the IEEE, to detect
violations of these policies.
Submissions to the RENE Track can be made via the ISSRE RENE track submission site:
https://easychair.org/conferences?conf=issre2026 .
Submission Length: The ISSRE RENE Track accepts submissions of two lengths:
(1) New replication studies and new descriptions of negative results should have a length
of up to 10 pages, plus 2 pages which may only contain references.
(2) Negative results documented during the preparation of previously published work by
the authors should be described in up to 5 pages, plus 1 page, which may only contain
references (e.g., as previously mentioned, authors of a published paper can document
negative results they obtained while working on it, such as methodologically sound
solutions that did not work).
Important note 1: Both types of papers (replication and negative results) will be included
as part of the main conference proceedings.
Important note 2: The RENE track does not follow a double-anonymous review process.
Publication and Presentation
Upon notification of acceptance, all authors of accepted papers will receive further
instructions for preparing the camera-ready versions of their submissions. If a submission
is accepted, at least one author of the paper is required to have a full registration for ISSRE
2026, attend the conference, and present the paper in person. All accepted papers will be
published in the conference electronic proceedings. The presentation is expected to be
delivered in person, unless this is impossible due to travel limitations (e.g., related to
health or visa). Details about the presentations will follow the notifications.
The official publication date is the date the proceedings are made available in the IEEE
Digital Libraries. The official publication date affects the deadline for any patent filings
related to published work.
Purchases of additional pages in the proceedings are not allowed.
Important Dates (AoE)
• Submission deadline: July 5, 2026
• Notification of acceptance: August 12, 2026
• Camera-ready copy submission: August 19, 2026
• Author registration deadline: August 19, 2026
Organisation
General Chairs
• Leonardo Mariani, University of Milano - Bicocca, Italy
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Coordinator
• Roberto Natella, GSSI, Italy
Research Program Committee Chairs
• Domenico Cotroneo, UNC Charlotte, USA
• Jie M. Zhang, King's College London, UK
Industry Program Chairs
• Jinyang Liu, Bytedance, USA
• Sigrid Eldh, Ericsson AB, Sweden
Workshop Chairs
• Georgia Kapitsaki, University of Cyprus, Cyprus
• August Shi, The University of Texas at Austin, USA
Doctoral Symposium Chairs
• Stefan Winter, LMU Munich, Germany
• Lili Wei, McGill University, Canada
Fast Abstract Chairs
• Luigi Lavazza, University of Insubria, Italy
• Yintong Huo, SMU, Singapore
JIC2 Chair
• Helene Waeselynck, LAAS-CNRS, France
Publicity Chairs
• Allison K. Sullivan, The University of Texas at Arlington, USA
• Jose D'Abruzzo Pereira, University of Coimbra, Portugal
Publication Chairs
• Sherlock Licorish, Otago Business School, New Zealand
• Maria Teresa Rossi, GSSI, Italy
Artifact Evaluation Chairs
• Naghmeh Ivaki, University of Coimbra, Portugal
• Fumio Machida, University of Tsukuba, Japan
Diversity and Inclusion Chair
• Eleni Constantinou, University of Cyprus, Cyprus
Financial Chair
• Costas Pattichis, University of Cyprus, Cyprus
Web Chairs
• Michalis Ioannides, Easy Conferences LTD
• Elena Masserini, University of Milano - Bicocca, Italy
Registration Chair
• Easy Conferences LTD
We invite submissions to PoliticalNLP 2026, the 3rd Workshop on Natural Language Processing for Political Sciences, co-located with LREC 2026. The workshop will take place in Palma de Mallorca, Spain, at the Palau de Congressos de Palma.
Theme for 2026
Trust, Transparency and Generative AI in Political Discourse Analysis
Large language models and generative AI are increasingly shaping political communication, public opinion, and democratic processes. PoliticalNLP 2026 provides an interdisciplinary forum to examine these developments critically and responsibly, at the intersection of NLP, political science, law, and the social sciences.
Topics of interest include, but are not limited to
• Trustworthy, explainable, and fair NLP for political data
• Bias, misinformation, and ethical risks of LLMs
• Multilingual and cross-cultural political NLP
• Generative AI for policy analysis and deliberative democracy
• Reproducibility, transparency, and responsible AI practices
• Datasets, tools, and resources for political and civic technologies
Important dates
• Paper submission (long and short): 16 February 2026
• Notification: 11 March 2026
• Camera-ready: 30 March 2026
• Workshop: 11 to 12 May 2026, or 16 May 2026 (final date to be confirmed by LREC)
Proceedings
Accepted papers will appear in the LREC 2026 Workshop Proceedings.
Submission and CFP
Full Call for Papers and details: https://sites.google.com/view/politicalnlp2026
Submission is electronic via the Softconf START system: https://softconf.com/lrec2026/PoliticalNLP2026/
Best regards,
PoliticalNLP 2026 Organizer
--
Wajdi Zaghouani, Ph.D.
Associate Professor in Residence,
Communication Program
Northwestern Qatar | Education City
T +974 4454 5232 | M +974 3345 4992
Second International Conference on Natural Language Processing
and Artificial Intelligence for Cyber Security
(NLPAICS'2026)
University of Alicante, Alicante, Spain
11 and 12 June 2026
https://nlpaics2026.gplsi.es/
Third Call for Papers
Recent advances in Natural Language Processing (NLP), Deep Learning and
Large Language Models (LLMs) have substantially improved the performance
of language applications. In particular, there has been growing interest
in employing AI methods in a range of Cyber Security applications.
In today's digital world, Cyber Security has emerged as a heightened
priority for both individual users and organisations. As the volume of
online information grows exponentially, traditional security approaches
often struggle to identify and prevent evolving security threats. The
inadequacy of conventional security frameworks highlights the need for
innovative solutions that can effectively navigate the complex digital
landscape to ensure robust security. NLP and AI in Cyber Security have
vast potential to significantly enhance threat detection and mitigation
by fostering the development of advanced security systems for autonomous
identification, assessment, and response to security threats in real
time. Recognising this challenge and the capabilities of NLP and AI
approaches to fortify Cyber Security systems, the Second International
Conference on Natural Language Processing (NLP) and Artificial
Intelligence (AI) for Cyber Security (NLPAICS'2026) continues the
tradition of NLPAICS'2024 as a gathering place for researchers in
NLP and AI methods for Cyber Security. We invite contributions
presenting the latest NLP and AI solutions for mitigating risks in
processing digital information.
Conference topics
The conference invites submissions on a broad range of topics related to
the employment of NLP and AI (and in general, language studies and
models) for Cyber Security, including but not limited to:
_Societal and Human Security and Safety_
* Content Legitimacy and Quality
* Detection and mitigation of hate speech and offensive language
* Fake news, deepfakes, misinformation and disinformation
* Detection of machine-generated language in multimodal context (text,
speech and gesture)
* Trust and credibility of online information
* User Security and Safety
* Cyberbullying and identification of internet offenders
* Monitoring extremist fora
* Suicide prevention
* Clickbait and scam detection
* Fake profile detection in online social networks
* Technical Measures and Solutions
* Social engineering identification, phishing detection
* NLP for risk assessment
* Controlled languages for safe messages
* Prevention of malicious use of AI models
* Forensic linguistics
* Human Factors in Cyber Security
_Speech Technology and Multimodal Investigations for Cyber Security_
* Voice-based security: Analysis of voice recordings or transcripts
for security threats
* NLP and biometrics in multimodal context
_Data and Software Security_
* Cryptography
* Digital forensics
* Malware detection, obfuscation
* Models for documentation
* NLP for data privacy and leakage prevention (DLP)
* Addressing dataset "poisoning" attacks
_Human-Centric Security and Support_
* Natural language understanding for chatbots: NLP-powered chatbots
for user support and security incident reporting
* User behaviour analysis: analysing user-generated text data (e.g.,
chat logs and emails) to detect insider threats or unusual behaviour
* Human supervision of technology for Cyber Security
_Anomaly Detection and Threat Intelligence_
* Text-Based Anomaly Detection
    * Identification of unusual or suspicious patterns in logs,
      incident reports or other textual data
    * Detecting deviations from normal behaviour in system logs or
      network traffic
* Threat Intelligence Analysis
    * Processing and analysing threat intelligence reports, news,
      articles and blogs on the latest Cyber Security threats
    * Extracting key information and indicators of compromise (IoCs)
      from unstructured text
_Systems and Infrastructure Security_
* Systems Security
    * Anti-reverse engineering for protecting privacy and anonymity
    * Identification and mitigation of side-channel attacks
    * Authentication and access control
    * Enterprise-level mitigation
    * NLP for software vulnerability detection
* Malware Detection through Code Analysis
    * Analysing code and scripts for malware
    * Using NLP to identify patterns indicative of malicious code
_Financial Cyber Security_
* Financial fraud detection
* Financial risk detection
* Algorithmic trading security
* Secure online banking
* Risk management in finance
* Financial text analytics
_Ethics, Bias, and Legislation in Cyber Security_
* Ethical and Legal Issues
    * Digital privacy and identity management
    * The ethics of NLP and speech technology
    * Explainability of NLP and speech technology tools
    * Legislation against malicious use of AI
    * Regulatory issues
* Bias and Security
    * Bias in Large Language Models (LLMs)
    * Bias in security-related datasets and annotations
_Datasets and Resources for Cyber Security Applications_
_Specialised Security Applications and Open Topics_
* Intelligence applications
* Emerging and innovative applications in Cyber Security
_Special Theme Track - Future of Cyber Security in the Era of LLMs and
Generative AI_
NLPAICS 2026 will feature a special theme track with the goal of
stimulating discussion around Large Language Models (LLMs), Generative
AI and ensuring their safety. The latest generation of LLMs, such as
ChatGPT, Gemini, DeepSeek, Llama and open-source alternatives, has
showcased remarkable advancements in text and image understanding and
generation. However, as we navigate through uncharted territory, it
becomes imperative to address the challenges associated with employing
these models in everyday tasks, focusing on aspects such as fairness,
ethics, and responsibility. The theme track invites studies on how to
ensure the safety of LLMs in various tasks and applications and what
this means for the future of the field. The possible topics of
discussion include (but are not limited to) the following:
* Detection of LLM-generated language in multimodal context (text,
speech and gesture)
* LLMs for forensic linguistics
* Bias in LLMs
* Safety benchmarks for LLMs
* Legislation against malicious use of LLMs
* Tools to evaluate safety in LLMs
* Methods to enhance the robustness of language models
Submissions and Publication
NLPAICS welcomes high-quality submissions in English, which can take two
forms:
* Regular long papers: These can be up to eight (8) pages long,
presenting substantial, original, completed, and unpublished work.
* Short (poster) papers: These can be up to four (4) pages long and
are suitable for describing small, focused contributions, ongoing
research, negative results, system demonstrations, etc. Short papers
will be presented as part of a poster session.
Abstract-only submissions will not be considered or evaluated.
Accepted papers, both long and short, will be published as
e-proceedings with an ISBN, available online on the conference website
at the time of the conference, and are expected to be uploaded to the
ACL Anthology.
To prepare your submission, please make sure to use the NLPAICS 2026
style files available here:
LaTeX: NLPAICS_2026_LaTeX.zip [1]
Overleaf: https://www.overleaf.com/read/sgwmrzbmjfhc#aeea77
Word:
https://nlpaics2026.gplsi.es/wp-content/uploads/2025/11/NLPAICS2026_Proceed…
Papers should be submitted through Softconf/START using the following
link: https://softconf.com/p/nlpaics2026/user/
The conference will feature a student workshop, and awards will be
offered to the authors of the best papers.
Important dates
* Submissions due: 16 March 2026
* Reviewing process: 1 April - 30 April 2026
* Notification of acceptance: 5 May 2026
* Camera-ready due: 19 May 2026
* Conference proceedings ready: 1 June 2026
* Conference: 11-12 June 2026
Organisation
Conference Chairs
Ruslan Mitkov (University of Alicante)
Rafael Muñoz (University of Alicante)
Programme Committee Chairs
Elena Lloret (University of Alicante)
Tharindu Ranasinghe (Lancaster University)
Publication Chair
Ernesto Estevanell (University of Alicante)
Sponsorship Chair
Andres Montoyo (University of Alicante)
Student Workshop Chair
Salima Lamsiyah (University of Luxembourg)
Best Paper Award Chair
Saad Ezzini (King Fahd University of Petroleum & Minerals)
Publicity Chair
Beatriz Botella (University of Alicante)
Social Programme Chair
Alba Bonet (University of Alicante)
Venue
The Second International Conference on Natural Language Processing and
Artificial Intelligence for Cyber Security (NLPAICS'2026) will take
place at the University of Alicante and is organised by the University
of Alicante GPLSI research group.
Further information and contact details
The follow-up calls will list keynote speakers and members of the
programme committee once confirmed. The conference website is
https://nlpaics2026.gplsi.es/ and will be updated on a regular basis.
For further information, please email nlpaics2026@dlsi.ua.es
Registration will open in March 2026.
Links:
------
[1] http://summer-school.gplsi.es/NLPAICS_2026_LaTeX.zip