Insights 2022: Workshop on Insights from Negative Results in NLP
Link: https://insights-workshop.github.io/index

Call For Papers | |||||||||||
Dear colleagues,

New deadline for submissions: March 5th. Co-located with ACL, May 26.
https://insights-workshop.github.io/index
Contact email: insights-workshop-organizers@googlegroups.com

Overview

Publication of negative results is difficult in most fields, but in NLP the problem is exacerbated by the near-universal focus on benchmark improvements. This situation implicitly discourages hypothesis-driven research, and it turns the creation and fine-tuning of NLP models into an art rather than a science. Furthermore, it increases the time, effort, and carbon emissions spent on developing and tuning models, as researchers have no opportunity to learn what has already been tried and failed.

This workshop invites both practical and theoretical unexpected or negative results that have important implications for future research, highlight methodological issues with existing approaches, and/or point out pervasive misunderstandings or bad practices. In particular, the most successful NLP models currently rely on various kinds of pretrained meaning representations (from word embeddings to Transformer-based models like BERT). To complement all the success stories, it would be insightful to see where, and possibly why, they fail. Any NLP tasks are welcome: sequence labeling, question answering, inference, dialogue, machine translation - you name it.

Please continue reading on the call-for-papers page of our website: https://insights-workshop.github.io/2022/cfp/

Regards,
Shabnam

--
Shabnam Tafreshi, PhD
Assistant Research Scientist
Computational Linguistics, NLP
UMD: ARLIS @ College Park

"All the problems of the world could be settled easily, if people were only willing to think." -Thomas J. Watson