Posted by organizer: vinczev

ST-VMWE 2018 : Shared task on automatic identification of verbal multiword expressions – edition 1.1


Link: http://multiword.sourceforge.net/sharedtask2018
 
When Apr 4, 2018 - Aug 26, 2018
Where N/A
Submission Deadline May 4, 2018
 

Call For Papers

** CALL FOR PARTICIPATION**

Shared task on automatic identification of verbal multiword expressions – edition 1.1
http://multiword.sourceforge.net/sharedtask2018
=======================================================================

*Apologies for cross-posting*

The second edition of the PARSEME shared task on automatic identification of verbal multiword expressions (VMWEs) aims at identifying verbal MWEs in running texts. Verbal MWEs include, among others, idioms (*to let the cat out of the bag*), light verb constructions (*to make a decision*), verb-particle constructions (*to give up*), multi-verb constructions (*to make do*) and inherently reflexive verbs (*se suicider* 'to commit suicide' in French). Their identification is a well-known challenge for NLP applications due to their complex characteristics, including discontinuity, non-compositionality, heterogeneity and syntactic variability.

The shared task is highly multilingual: PARSEME members have elaborated annotation guidelines based on annotation experiments in about 20 languages from several language families. These guidelines take both universal and language-specific phenomena into account. We hope that this will boost the development of language-independent and cross-lingual VMWE identification systems.


Participation
-------------

Participation is open and free worldwide.

We ask potential participant teams to register using the expression of interest form:
https://docs.google.com/forms/d/e/1FAIpQLSd6L8IntkNKXbMp8QVLLvCYzzhoH-_8ovSW0DL3BtYGNnsFhA/viewform?c=0&w=1

Task updates and questions will be posted to our public mailing list:
http://groups.google.com/group/verbalmwe

More details on the annotated corpora can be found here:
https://typo.uni-konstanz.de/parseme/index.php/2-general/202-parseme-shared-task-on-automatic-identification-of-verbal-mwes-edition-1-1

The annotation guidelines used in manual annotation of the training and test sets are available here:
http://parsemefr.lif.univ-mrs.fr/parseme-st-guidelines/1.1

Publication and workshop
------------------------

Shared task participants will be invited to submit a system description paper to a special track of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018) at COLING 2018, to be held on August 25-26, 2018, in Santa Fe, New Mexico, USA: http://multiword.sourceforge.net/lawmwecxg2018
Submitted system description papers must follow the workshop submission instructions and will go through double-blind peer review by other participants and selected LAW-MWE-CxG-2018 program committee members. Acceptance depends on the quality of the paper rather than on the results obtained in the shared task. Authors of accepted papers will present their work as posters/demos in a dedicated session of the workshop, co-located with COLING 2018. Submitting a system description paper is not mandatory.
Due to the double-blind review, participants are asked to provide a nickname (i.e., a name that does not identify authors, universities, research groups, etc.) for their systems when submitting results and in the submitted papers. True system names will be used only in the camera-ready papers.

Provided data
-------------

For each language, we provide participants with corpora in which VMWEs are annotated according to universal guidelines:

* Manually annotated **training corpora**, made available in advance so that participants can train their systems.
* Manually annotated **development corpora**, also made available in advance, for tuning/optimizing the systems' parameters.
* Raw (unannotated) **test corpora**, used as input to the systems during the evaluation phase. The VMWE annotations in these corpora will be kept secret.

The training and development sets are available at: https://gitlab.com/parseme/sharedtask-data/tree/master/1.1

When available, morphosyntactic data (parts of speech, lemmas, morphological features and/or syntactic dependencies) are also provided. Depending on the language, the information comes from treebanks (e.g., Universal Dependencies) or from automatic parsers trained on treebanks (e.g., UDPipe).
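As an illustration, the annotated corpora use a tab-separated, CoNLL-U-style layout in which each token row carries a final VMWE column (with values such as `1:LVC.full` on the first token of a VMWE, a bare `1` on its further tokens, and `*` elsewhere); the exact specification is given on the shared task pages linked above. The following sketch, which assumes this simplified format, collects the VMWEs of each sentence:

```python
def read_vmwes(lines):
    """Yield, per sentence, a dict mapping VMWE id -> [category, token ids].

    Assumes a simplified .cupt-style input: tab-separated token rows whose
    last column is the VMWE annotation, '#' comment lines, and blank lines
    separating sentences.
    """
    vmwes = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # blank line ends the current sentence
            if vmwes:
                yield vmwes
            vmwes = {}
            continue
        if line.startswith("#"):  # sentence metadata / comments
            continue
        cols = line.split("\t")
        tok_id, mwe_col = cols[0], cols[-1]
        if mwe_col in ("*", "_"):  # token belongs to no VMWE
            continue
        for part in mwe_col.split(";"):  # a token may belong to several VMWEs
            if ":" in part:  # first token of a VMWE also carries its category
                mwe_id, cat = part.split(":", 1)
            else:
                mwe_id, cat = part, None
            entry = vmwes.setdefault(mwe_id, [None, []])
            if cat:
                entry[0] = cat
            entry[1].append(tok_id)
    if vmwes:  # flush the last sentence if the input has no trailing blank line
        yield vmwes
```

This is only a reading aid for inspecting the data; the released corpora and any companion tooling remain the authoritative reference for the format.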

We have prepared corpora for the following languages: Bulgarian (BG), German (DE), Greek (EL), English (EN), Spanish (ES), Farsi (FA), French (FR), Hindi (HI), Croatian (HR), Hungarian (HU), Italian (IT), Lithuanian (LT), Polish (PL), Brazilian Portuguese (PT), Romanian (RO), Slovene (SL), Turkish (TR).
The amount of annotated data depends on the language.
Release of the Basque and Hebrew data has been postponed until approximately April 11.


Tracks
------
System results can be submitted in two tracks:
* **Closed track**: Systems using only the provided training data (VMWE annotations plus morphosyntactic data, if any) to learn VMWE identification models and/or rules.
* **Open track**: Systems using any additional resources deemed useful (MWE lexicons, symbolic grammars, wordnets, raw corpora, word embeddings, language models trained on external data, etc.), whether or not they also use the provided training data. This track notably includes purely symbolic and rule-based systems.

Teams submitting systems in the open track will be requested to describe and provide references to all resources used at submission time. Teams are encouraged to favor freely available resources for better reproducibility of their results.
Further details on system submissions are to be announced in due time.

Evaluation metrics
------------------
Participants will provide the output produced by their systems on the test corpus. This output will be compared with the gold standard (ground truth).
Further details on the evaluation metrics can be found here: http://multiword.sourceforge.net/PHITE.php?sitesig=CONF&page=CONF_04_LAW-MWE-CxG_2018&subpage=CONF_50_Evaluation_metrics
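The official metrics and evaluation script are defined at the link above; purely as an illustration (not the official implementation), the core idea of a strict, per-MWE measure can be sketched as follows, counting a predicted VMWE as correct only if its token set exactly matches a gold VMWE:

```python
def mwe_fscore(gold, pred):
    """Strict MWE-based precision/recall/F1 (illustrative sketch).

    gold, pred: parallel lists of sentences, each sentence a list of
    VMWEs, each VMWE a set of token ids within the sentence.
    """
    # A predicted VMWE is a true positive iff the same token set is in gold.
    tp = sum(sum(1 for m in p if m in g) for g, p in zip(gold, pred))
    n_gold = sum(len(g) for g in gold)
    n_pred = sum(len(p) for p in pred)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

The actual shared task evaluation may additionally report token-level and per-category scores; only the official script should be used to reproduce published results.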

Important dates
---------------
April 4, 2018: shared task training data released
April 30, 2018: shared task blind test data released
May 4, 2018: submission of system results
May 11, 2018: announcement of results
May 25, 2018: submission of system description papers
June 20, 2018: notification
June 30, 2018: camera-ready papers
August 25-26, 2018: shared task workshop co-located with LAW-MWE-CxG-2018


Organizing team
---------------
Silvio Ricardo Cordeiro, Carlos Ramisch, Agata Savary, Veronika Vincze

Contact: parseme-st-core@nlp.ipipan.waw.pl
