OMMM 2025 : Second CFP - Interdisciplinary Workshop on Observations of Misunderstood, Misguided and Malicious Use of Language Models

Link: https://ommm-workshop.github.io/2025/
 
When Sep 11, 2025 - Sep 13, 2025
Where Varna, Bulgaria
Submission Deadline Jul 15, 2025
Notification Due Aug 1, 2025
Final Version Due Aug 30, 2025
Categories    NLP   LLMs   security   NLP applications
 

Call For Papers

We are pleased to invite submissions for the first Interdisciplinary Workshop on Observations of Misunderstood, Misguided and Malicious Use of Language Models (OMMM 2025). The workshop will be held in conjunction with the RANLP 2025 conference in Varna, Bulgaria, on 11-13 September 2025.

Overview
The use of Large Language Models (LLMs) pervades scientific practices in multiple disciplines beyond the NLP/AI communities. Alongside benefits for productivity and discovery, widespread use often entails misuse due to misalignment of values, lack of knowledge, or, more rarely, malice. LLM misuse has the potential to cause real harm in a variety of settings.

Through this workshop, we aim to gather researchers interested in identifying and mitigating inappropriate and harmful uses of LLMs. These include misunderstood usages (e.g., misrepresentation of LLMs in the scientific literature); misguided usages (e.g., deployment of LLMs without adequate training or privacy safeguards); and malicious usages (e.g., generation of misinformation and plagiarism). Sample topics are listed below, but we welcome submissions on any topic within the scope of the workshop.

Important Dates
Submission deadline [NEW]: 15 July 2025, at 23:59 Anywhere on Earth
Notification of acceptance: 01 August 2025
Camera-ready papers due: 30 August 2025
Workshop dates: 11, 12, or 13 September 2025

Submission Guidelines
Submissions will be accepted as short papers (4 pages) or long papers (8 pages), plus additional pages for references. All submissions undergo double-blind review, so they should not include any identifying information. Submissions should conform to the RANLP formatting guidelines; further information and templates are available in the RANLP submission guidelines.

We welcome submissions from diverse disciplines, including NLP and AI, psychology, HCI, and philosophy. We particularly encourage reports on negative results that provide interesting perspectives on relevant topics.

The workshop will take place in a hybrid format, although in-person presenters will be prioritised when selecting submissions for presentation. Accepted papers will be included in the workshop proceedings in the ACL Anthology.

Papers should be submitted via the RANLP conference submission system.

For any questions, please contact the organisers at ommm-workshop@googlegroups.com.

Workshop Topic and Content

As outlined above, we aim to gather researchers interested in identifying and mitigating inappropriate and harmful uses of LLMs. We categorise the misuses of LLMs into three domains:

Misunderstood usages: Misrepresentation, improper explanation, or opaqueness of LLMs.
Misguided usages: Misapplication of LLMs where their utility is questionable or inappropriate.
Malicious usages: Use for misinformation, plagiarism, and adversarial attacks.

Topics include:

Misunderstood use (and how to improve understanding):
    Misrepresentation of LLMs (e.g., anthropomorphic language)
    Attribution of consciousness
    Interpretability
    Overreliance on LLMs

Misguided use (and how to find alternatives):
    Underperformance and inappropriate applications
    Structural limitations and ethical considerations
    Deployment without proper training or safeguards

Malicious use (and how to mitigate it):
    Adversarial attacks, jailbreaking
    Detection and watermarking of machine-generated content
    Generation of misinformation or plagiarism
    Bias mitigation and trust design

Keynote Speaker

We are excited to have Dr. Stefania Druga as the keynote speaker for the inaugural OMMM workshop. Dr. Druga is a Research Scientist at Google DeepMind, where she designs novel multimodal AI applications. For more information, see her website.

Target Audience

We expect our workshop to appeal to an interdisciplinary group, including:

NLP and AI researchers focused on responsible LLM use
Psychologists exploring consciousness and perception in LLMs
HCI researchers studying interaction and trust with LLMs
Philosophers considering ethical questions

Organizers

Piotr Przybyła, Universitat Pompeu Fabra
Matthew Shardlow, Manchester Metropolitan University
Clara Colombatto, University of Waterloo
Nanna Inie, IT University of Copenhagen

Programme Committee

Alina Wróblewska (Polish Academy of Sciences)
Ashley Williams (Manchester Metropolitan University)
Azadeh Mohammadi (University of Salford)
Clara Colombatto (University of Waterloo)
Dariusz Kalociński (Polish Academy of Sciences)
Julia Struß (Fachhochschule Potsdam)
Lev Tankelevitch (Microsoft Research)
Leon Derczynski (NVIDIA)
Marcos Zampieri (George Mason University)
Matthew Shardlow (Manchester Metropolitan University)
Nael B. Abu-Ghazaleh (University of California, Riverside)
Nanna Inie (IT University of Copenhagen)
Nhung T. H. Nguyen (Johnson & Johnson)
Nishat Raihan (George Mason University)
Oluwaseun Ajao (Manchester Metropolitan University)
Peter Zukerman (University of Washington)
Piotr Przybyła (Universitat Pompeu Fabra)
Samuel Attwood (Manchester Metropolitan University)
Sergiu Nisioi (University of Bucharest)
Xia Cui (Manchester Metropolitan University)
