posted by user: anoopk

WMT-Trans 2014 : Translation Shared Task at WMT 2014


Link: http://www.statmt.org/wmt14/translation-task.html
 
When N/A
Where N/A
Submission Deadline Apr 1, 2014
Notification Due Apr 21, 2014
Final Version Due Apr 28, 2014
Categories    NLP   natural language processing
 

Call For Papers

Important Dates for Shared Task:

Release of training data for translation task Early December 2013
Release of training data for quality estimation task January 15, 2014
Test set distributed for translation task February 24, 2014
Submission deadline for translation task February 28, 2014

Paper submission deadline April 1, 2014
Notification of acceptance April 21, 2014
Camera-ready deadline April 28, 2014


The recurring translation task of the WMT workshops focuses mainly on European language pairs, but this year we have introduced English-Hindi as an experimental, low-resource language pair. Translation quality will be evaluated on a shared, unseen test set of news stories. We provide a parallel corpus as training data, a baseline system, and additional resources for download. Participants may augment the baseline system or use their own system.

GOALS

The goals of the shared translation task are:

To investigate the applicability of current MT techniques when translating into languages other than English
To examine special challenges in translating between European languages, including word order differences and morphology
To investigate the translation of low-resource, morphologically rich languages
To create publicly available corpora for machine translation and machine translation evaluation
To generate up-to-date performance numbers for European languages in order to provide a basis of comparison in future research
To offer newcomers a smooth start with hands-on experience in state-of-the-art statistical machine translation methods
We hope that both beginners and established research groups will participate in this task.
TASK DESCRIPTION

We provide training data for five language pairs, and a common framework (including a baseline system). The task is to improve on current methods. This can be done in many ways. For instance, participants could try to:

improve word alignment quality, phrase extraction, phrase scoring
add new components to the open source software of the baseline system
augment the system otherwise (e.g. by preprocessing, reranking, etc.)
build an entirely new translation system
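The first direction above, improving word alignment, is classically built on models such as IBM Model 1, which is trained with expectation-maximization over a parallel corpus. Below is a minimal, illustrative sketch of Model 1 EM training on a toy corpus; the toy sentence pairs and the simplifications (no NULL word, fixed iteration count) are our own assumptions, not part of the task's baseline system:

```python
from collections import defaultdict

# Toy German-English parallel corpus (illustrative only, not WMT data).
corpus = [
    ("das haus".split(), "the house".split()),
    ("das buch".split(), "the book".split()),
    ("ein buch".split(), "a book".split()),
]

# Uniform initialization of translation probabilities t(e|f).
f_vocab = {f for fs, _ in corpus for f in fs}
e_vocab = {e for _, es in corpus for e in es}
t = {(e, f): 1.0 / len(e_vocab) for e in e_vocab for f in f_vocab}

# EM iterations of IBM Model 1.
for _ in range(10):
    count = defaultdict(float)   # expected counts c(e, f)
    total = defaultdict(float)   # expected counts c(f)
    # E-step: collect expected alignment counts.
    for fs, es in corpus:
        for e in es:
            z = sum(t[(e, f)] for f in fs)  # normalize over source words
            for f in fs:
                delta = t[(e, f)] / z
                count[(e, f)] += delta
                total[f] += delta
    # M-step: re-estimate t(e|f) from expected counts.
    for (e, f), c in count.items():
        t[(e, f)] = c / total[f]

# After a few iterations, "haus" aligns most strongly to "house".
best = max(e_vocab, key=lambda e: t[(e, "haus")])
print(best)
```

In practice participants would use toolkits such as GIZA++ for alignment rather than hand-rolled EM, but the sketch shows the quantity (expected co-occurrence counts) that alignment improvements typically target.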
Participants will use their systems to translate a test set of unseen sentences in the source language. The translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.
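Among the automatic metrics mentioned above, BLEU is the most widely reported. The sketch below computes a simplified single-reference, sentence-level BLEU with uniform n-gram weights, add-one smoothing on the precisions, and the brevity penalty; it is an illustration only, since the official evaluation uses corpus-level, multi-reference scoring scripts:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU with add-one smoothed precisions."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped n-gram matches, smoothed to avoid log(0).
        matches = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        log_prec += math.log((matches + 1) / (total + 1)) / max_n
    # Brevity penalty for candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return bp * math.exp(log_prec)

perfect = bleu("the cat sat on the mat", "the cat sat on the mat")
partial = bleu("the cat on mat", "the cat sat on the mat")
```

A perfect match scores 1.0 and a partial match scores strictly between 0 and 1, which is the behavior the manual evaluation is meant to complement rather than replace.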
You may participate in any or all of the following language pairs:

French-English
Hindi-English
German-English
Czech-English
Russian-English
For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set and baseline system.
We also strongly encourage your participation if you use your own training corpus, sentence alignment, language model, or decoder.

If you use additional training data or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.

Your submission report should highlight the ways in which your methods and data differ from the standard task. We may break down submitted results into different tracks based on what resources were used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison focuses on the methods rather than on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.
