
ST-VMWE 2018 : Shared task on automatic identification of verbal multiword expressions – edition 1.1


When Apr 4, 2018 - Aug 26, 2018
Where N/A
Submission Deadline May 4, 2018

Call For Papers


Shared task on automatic identification of verbal multiword expressions – edition 1.1

*Apologies for cross-posting*

The second edition of the PARSEME shared task on automatic identification of verbal multiword expressions (VMWEs) aims at identifying verbal MWEs in running text. Verbal MWEs include, among others, idioms (*to let the cat out of the bag*), light verb constructions (*to make a decision*), verb-particle constructions (*to give up*), multi-verb constructions (*to make do*) and inherently reflexive verbs (*se suicider* 'to commit suicide' in French). Their identification is a well-known challenge for NLP applications, due to their complex characteristics, including discontinuity, non-compositionality, heterogeneity and syntactic variability.

The shared task is highly multilingual: PARSEME members have elaborated annotation guidelines based on annotation experiments in about 20 languages from several language families. These guidelines take both universal and language-specific phenomena into account. We hope that this will boost the development of language-independent and cross-lingual VMWE identification systems.


Participation is open and free worldwide.

We ask potential participant teams to register using the expression of interest form:

Task updates and questions will be posted to our public mailing list:

More details on the annotated corpora can be found here:

The annotation guidelines used in manual annotation of the training and test sets are available here:

Publication and workshop

Shared task participants will be invited to submit a system description paper to a special track of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018) at COLING 2018, to be held on August 25-26, 2018, in Santa Fe, New Mexico, USA:
Submitted system description papers must follow the workshop submission instructions and will go through double-blind peer review by other participants and selected LAW-MWE-CxG-2018 program committee members. Acceptance depends on the quality of the paper rather than on the results obtained in the shared task. Authors of accepted papers will present their work as posters/demos in a dedicated session of the workshop, co-located with COLING 2018. Submitting a system description paper is not mandatory.
Due to the double-blind reviewing process, participants are asked to provide a nickname (i.e., a name that does not identify authors, universities, research groups, etc.) for their systems when submitting results and in the submitted papers. Real system names may be used only in the camera-ready version.

Provided data

For each language, we provide participants with corpora in which VMWEs are annotated according to universal guidelines:

* Manually annotated **training corpora** made available to the participants in advance, in order to allow them to train their systems.
* Manually annotated **development corpora** also made available in advance so as to tune/optimize the systems' parameters.
* Raw (unannotated) **test corpora** to be used as input to the systems during the evaluation phase. The VMWE annotations in these corpora will be kept secret.

The training and development sets are available at:

When available, morphosyntactic data (parts of speech, lemmas, morphological features and/or syntactic dependencies) are also provided. Depending on the language, the information comes from treebanks (e.g., Universal Dependencies) or from automatic parsers trained on treebanks (e.g., UDPipe).
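As an illustration of how such annotated corpora can be consumed, the sketch below assumes a CoNLL-U-style tab-separated layout with a final VMWE annotation column (as in the cupt format used in this edition), where `*` marks a token outside any VMWE, `1:VID` opens VMWE number 1 with category VID, and a bare `1` continues it. The exact column layout and sample sentence here are assumptions for illustration, not an official parser.

```python
# Hypothetical sketch: extracting VMWEs from a cupt-style annotated file.
# Assumed layout: 10 CoNLL-U columns plus a final PARSEME:MWE column.

SAMPLE = """\
1\tHe\the\tPRON\t_\t_\t2\tnsubj\t_\t_\t*
2\tlet\tlet\tVERB\t_\t_\t0\troot\t_\t_\t1:VID
3\tthe\tthe\tDET\t_\t_\t4\tdet\t_\t_\t*
4\tcat\tcat\tNOUN\t_\t_\t2\tobj\t_\t_\t1
5\tout\tout\tADP\t_\t_\t2\tcompound:prt\t_\t_\t1
"""

def extract_vmwes(lines):
    """Group token forms by VMWE id; returns {id: (category, [forms])}."""
    vmwes = {}
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and sentence-level comments
        cols = line.rstrip("\n").split("\t")
        form, mwe = cols[1], cols[-1]
        if mwe == "*":
            continue  # token belongs to no VMWE
        for part in mwe.split(";"):  # a token may belong to several VMWEs
            if ":" in part:  # "1:VID" opens VMWE 1 with its category
                mwe_id, cat = part.split(":")
                vmwes[mwe_id] = (cat, [form])
            else:  # bare "1" continues an already-opened VMWE
                vmwes[part][1].append(form)
    return vmwes

print(extract_vmwes(SAMPLE.splitlines()))
# → {'1': ('VID', ['let', 'cat', 'out'])}
```

The grouping recovers the discontinuous idiom *let the cat out* as a single unit even though *the* intervenes, which is exactly the discontinuity challenge mentioned above.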

We have prepared corpora for the following languages: Bulgarian (BG), German (DE), Greek (EL), English (EN), Spanish (ES), Farsi (FA), French (FR), Hindi (HI), Croatian (HR), Hungarian (HU), Italian (IT), Lithuanian (LT), Polish (PL), Brazilian Portuguese (PT), Romanian (RO), Slovene (SL), Turkish (TR).
The amount of annotated data depends on the language.
The release of the Basque and Hebrew data has been postponed until approximately April 11, 2018.

System results can be submitted in two tracks:
* **Closed track**: systems use only the provided training data (VMWE annotations plus morphosyntactic data, if any) to learn VMWE identification models and/or rules.
* **Open track**: systems may use any additional resources deemed useful (MWE lexicons, symbolic grammars, wordnets, raw corpora, word embeddings, language models trained on external data, etc.), with or without the provided training data. Notably, this track also includes purely symbolic and rule-based systems.

Teams submitting systems in the open track will be requested to describe and provide references to all resources used at submission time. Teams are encouraged to favor freely available resources for better reproducibility of their results.
Further details on system submissions are to be announced in due time.

Evaluation metrics
Participants will provide the output produced by their systems on the test corpus. This output will be compared with the gold standard (ground truth).
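As a rough illustration of how such a comparison can work (this is a simplified sketch, not the official scorer), MWE-based precision, recall and F1 can be computed by treating each VMWE as the set of its token indices and counting exact matches between system output and gold standard:

```python
# Illustrative sketch (not the official evaluation script):
# MWE-based precision/recall/F1 over exact-match VMWEs, where each
# VMWE is represented as a frozenset of its token indices.

def prf(gold, pred):
    """gold, pred: collections of frozensets of token indices."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)  # VMWEs predicted with exactly the right tokens
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [frozenset({2, 4, 5}), frozenset({7, 8})]   # two gold VMWEs
pred = [frozenset({2, 4, 5}), frozenset({9, 10})]  # one correct, one spurious
print(prf(gold, pred))
# → (0.5, 0.5, 0.5)
```

Actual shared-task metrics may also credit partial overlaps (token-based scoring) and break results down per category; see the link above for the official definitions.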
Further details on the evaluation metrics can be found here:

Important dates
April 4, 2018: shared task training data released
April 30, 2018: shared task blind test data released
May 4, 2018: submission of system results
May 11, 2018: announcement of results
May 25, 2018: submission of system description papers
June 20, 2018: notification
June 30, 2018: camera-ready papers
August 25-26, 2018: shared task workshop co-located with LAW-MWE-CxG-2018

Organizing team
Silvio Ricardo Cordeiro, Carlos Ramisch, Agata Savary, Veronika Vincze

