TALA 2026: Workshop on New Trends in Automatized Language Assessment

Call for Papers
The Workshop on New Trends in Automatized Language Assessment (TALA) will take place on 7 April 2026 in Louvain-la-Neuve, Belgium, and online (hybrid event). The meeting aims to provide an overview of recent approaches to automatized language assessment and to offer researchers, academics, and (PhD) students an excellent opportunity to share and discuss recent trends and cutting-edge methods in language assessment-related research. In particular, the workshop will focus on proficiency assessment, mainly targeting automatic readability assessment (ARA) and automated essay scoring (AES).

Automatic readability assessment is an interdisciplinary field of research concerned with the linguistic, cognitive, and typographic factors that influence the ease with which a text can be read and understood by different audiences. It is gaining importance across a wide range of domains, including education, institutional communication, digital accessibility, and automated assessment of language proficiency, and it has been an active field within natural language processing since the beginning of the 21st century.

Automated essay scoring aims to analyze written productions in order to evaluate writers' competence in a specific field. Language-oriented AES targets written linguistic skills. This task is particularly critical in language assessment contexts, but it can also support learning processes and the generation of formative feedback.

The workshop will include an invited talk and presentations selected on the basis of submitted abstracts.

Invited speaker: Rodrigo Wilkens (University of Exeter) is a specialist in computational readability modeling and automated essay scoring. His research focuses on multilingual proficiency assessment, linguistic feature modeling, and the use of large language models for educational applications.
He has contributed to the development and evaluation of ARA and AES systems, with particular emphasis on non-English languages. His recent work explores the representational capacity of transformer models for proficiency prediction, interpretability in automated assessment, and readability-guided text generation.

Topics: We welcome abstracts presenting literature reviews, research results, ongoing research, or negative results on topics related to the main theme, with particular interest in the following subfields:
- AI and LLM-based approaches to automated language assessment, especially AES and ARA
- Computational and linguistic modeling of readability and writing proficiency
- Evaluation methodologies, validation frameworks, and interpretability in automated assessment
- Multilingual and non-English language assessment
- Corpus creation, annotation schemes, and new benchmark tasks
- Fairness, bias, and ethical considerations in automated assessment
- Theoretical perspectives linking linguistic features and proficiency modeling
- Critical reviews and meta-analyses in ARA/AES research

Submission format: Abstracts may be submitted in French or English and should be between 300 and 500 words. Authors are encouraged to include a short list of relevant references, which will not count toward the word limit. Abstracts should be anonymized for review; author names and affiliations must be provided separately in the submission form. Submissions must be made via the online form available at: https://forms.gle/w1L6JNx8YAEgtswB7. Please note that no proceedings will be published, as this workshop aims above all to foster scientific exchange.

Important dates:
- Abstract deadline: 20 March 2026
- Acceptance notification: 1 April 2026
- Workshop: 7 April 2026

Organizing Committee: Prof. Thomas François, Prof. Rodrigo Wilkens, Dr. Eleonora Guzzi, Lingyun Gao, Amandine Pay, Elodie Vanzeveren, Romane Werner.