GEM 2023 : Generation, Evaluation & Metrics
Call For Papers
The third edition of the Generation, Evaluation & Metrics (GEM) Workshop will be held as part of EMNLP 2023, December 6-10, 2023, in Singapore.
The GEM workshop aims to encourage the development of model auditing & human evaluation strategies, and to popularize model evaluations in languages beyond English. We welcome submissions related, but not limited to, the following topics:
💎 Automatic evaluation of generation systems
💎 Creating NLG corpora and challenge sets
💎 Critiques of benchmarking efforts and responsibly measuring progress in NLG
💎 Effective and/or efficient NLG methods that can be applied to a wide range of languages and/or scenarios
💎 Application and evaluation of generation models interacting with external data and tools
💎 Sociotechnical perspectives of employing large language models
💎 Standardizing human evaluation and making it more robust
If you are interested, you can check out the previous workshops' websites from ACL 2021 and EMNLP 2022.
Industrial Track - Unleashing the Power of NLP: Bridging the Gap between Academia and Industry
GEM 2023 is proud to announce the launch of its Industrial Track, which aims to provide actionable insights to industry professionals and to foster collaborations between academia and industry.
We are organizing a shared task focused on multilingual summarization, including both human and automatic evaluation. The shared task will be run "backwards": the workshop will serve as a platform for participants to pre-register their hypotheses. More information on how to participate is coming soon!
Note: For any questions, please email email@example.com
Paper Submission Dates
📅 8 September 2023: Workshop paper submission deadline
📅 6 October 2023: Workshop paper acceptance notification
📅 18 October 2023: Workshop paper camera ready deadline
📅 6-10 December 2023: EMNLP 2023