
NEATCLasS 2022 : 1st Workshop on Novel Evaluation Approaches for Text Classification Systems on Social Media


When Jun 6, 2022 - Jun 6, 2022
Where Atlanta, Georgia, US
Submission Deadline Apr 10, 2022
Notification Due Apr 29, 2022
Final Version Due May 6, 2022
Categories    computational social science   natural language processing   social media   computer science

Call For Papers

Co-located with ICWSM 2022, 6 June 2022, Hybrid format - Atlanta, Georgia (US) and online

The automatic or semi-automatic analysis of textual data is a key approach to analysing the massive amounts of user-generated content online, from the identification of sentiment in text and topic classification to the detection of abusive language, misinformation or propaganda. However, the development of such systems faces a crucial challenge. Static benchmarking datasets and performance metrics are the primary method for measuring progress in the field, and the publication of research on new systems typically requires demonstrating an improvement over state-of-the-art approaches in this way. Yet these performance metrics can obscure critical failings in current models: improvements in metrics often do not reflect improvements in models' real-world performance. There is a clear need to rethink performance evaluation if text classification and analysis systems are to be usable and trustworthy.

If unreliable systems achieve astonishing scores with traditional metrics, how do we recognise progress when we see it? The goal of the Workshop on Novel Evaluation Approaches for Text Classification Systems on Social Media (NEATCLasS) is to promote the development and use of novel metrics for abuse detection, hate speech recognition, sentiment analysis and similar tasks within the community, to better measure whether models really improve upon the state of the art, and to encourage a wide range of models to be tested against these new metrics.

Recently there have been attempts to address the problem of benchmarks and metrics that do not represent performance well. For example, in abusive language detection, there are both static datasets of hard-to-detect examples (Röttger et al. 2021) and dynamic approaches for generating such examples (Calabrese et al. 2021). On the platform DynaBench (Kiela et al. 2021), benchmarks are dynamic and constantly updated with hard-to-classify examples, which avoids overfitting to a predetermined dataset. However, these approaches capture only a tiny fraction of the issues with benchmarking. There is still much work to do.

For the first edition of the Workshop on Novel Evaluation Approaches for Text Classification Systems (NEATCLasS), we welcome submissions that discuss such new evaluation approaches, introduce new approaches or refine existing ones, and promote the use of novel metrics for abuse detection, sentiment analysis and similar tasks within the community. Furthermore, the workshop will promote discussion on the importance, potential and dangers of disagreement in tasks that require subjective judgements. This discussion will also cover how to evaluate human annotations, and how to find the most suitable set of annotators (if any) for a given instance and task. The workshop solicits, among others, research papers about:
* Issues with current evaluation metrics and benchmarking datasets
* New evaluation metrics
* User-centred (qualitative or quantitative) evaluation of social media text analysis tools
* Adaptations and translations of novel evaluation metrics for other languages
* New datasets for benchmarking
* Increasing data quality in benchmarking datasets, e.g., avoidance of selection bias, identification of suitable expert human annotators for tasks involving subjective judgements
* Systems that facilitate dynamic evaluation and benchmarking
* Models that perform better on hard-to-classify instances and under novel evaluation approaches such as AAA, DynaBench and HateCheck
* Bias, error analysis and model diagnostics
* Phenomena not captured by existing evaluation metrics (such as models making the right predictions for the wrong reason)
* Approaches to mitigating bias and common errors
* Alternative designs for NLP competitions that evaluate a wide range of model characteristics (such as bias, error analysis, cross-domain performance)
* Challenges of downstream applications (in industry, computational social science, computational communication science, and others) and reflections on how these challenges can be captured in evaluation metrics

Format and Submissions

The workshop will take place as a full-day meeting on 6 June. Participants will be invited to trial an innovative format for paper presentations: presenters will be given 5 minutes to describe their research questions and hypotheses, and a group discussion will start after that. Then, presenters will be given 5 more minutes to describe their method and results, followed by a new group discussion about the interpretation and implications of such results. In the afternoon there will be collaborative group activities to bring researchers together and collect ideas for new evaluation approaches and future work in the field. We will discuss how we can organise competitions when there are multiple evaluation metrics and benchmarking datasets are dynamic.

We invite research papers (8 pages), position and short papers (4 pages), and demo papers (2 pages). Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Submissions will be evaluated by the program committee based on the quality of the work and its fit to the workshop themes. All submissions should be anonymised for double-blind review, and a high-resolution PDF of the paper should be uploaded to the EasyChair submission site (link TBD) before the paper submission deadline. All papers must be formatted in the AAAI two-column, camera-ready style. Authors of accepted papers will have the opportunity to publish their papers in workshop proceedings through AAAI Press. Submission instructions will be uploaded to the workshop web page in due course:

* Submission link:
* UPDATED Paper submission deadline: April 10, 2022
* UPDATED Paper acceptance notification: April 29, 2022
* UPDATED Final camera-ready paper due: May 6, 2022
* Workshop Day: June 6, 2022


Organizers

Björn Ross, University of Edinburgh
Roberto Navigli, Sapienza University of Rome
Agostina Calabrese, University of Edinburgh
