
MT-Eval 2016 : Translation evaluation: From fragmented tools and data sets to an integrated ecosystem


When: May 24, 2016
Where: Portorož, Slovenia
Submission Deadline: Feb 15, 2016
Notification Due: Mar 1, 2016
Final Version Due: Mar 31, 2016
Categories: NLP

Call For Papers

LREC 2016 Workshop

Translation evaluation:
From fragmented tools and data sets to an integrated ecosystem

24 May 2016, Portorož, Slovenia

Deadline for submissions: 15 February 2016

This workshop takes an in-depth look at an area of ever-increasing
importance: approaches, tools and data support for the evaluation of human
translation (HT) and machine translation (MT), with a focus on MT. Two clear
trends have emerged over the past several years. The first is the
standardisation of evaluation in research through large shared tasks, in
which actual translations are compared to reference translations using
automatic metrics and/or human ranking. The second focuses on achieving
high-quality translations with the help of increasingly complex data sets
that contain many levels of annotation based on sophisticated quality
metrics, often organised in the context of smaller shared tasks. In
industry, we also observe an increased interest in workflows for
high-quality outbound translation that combine Translation Memory (TM),
Machine Translation and post-editing. In stark contrast to this trend
towards quality translation (QT) and its inherent complexity, the data and
tooling landscapes remain rather heterogeneous, uncoordinated and not
interoperable.
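
As a minimal illustration of the reference-based automatic evaluation
mentioned above, the sketch below scores an MT hypothesis against a human
reference with BLEU. Python, the NLTK toolkit and the example sentences are
assumptions chosen purely for illustration; they are not part of this call.

    # Minimal sketch: reference-based automatic MT evaluation with BLEU.
    # Assumes NLTK is installed (pip install nltk); sentences are invented.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = "the cat sat on the mat".split()   # human reference translation
    hypothesis = "the cat is on the mat".split()   # MT system output

    # Smoothing avoids zero scores on short segments missing some n-grams.
    score = sentence_bleu([reference], hypothesis,
                          smoothing_function=SmoothingFunction().method1)
    print("BLEU: %.3f" % score)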

The event will bring together MT and HT researchers, users and providers of
tools, and users and providers of the manual and automatic evaluation
methodologies currently used to evaluate HT and MT systems. The key
objective of the workshop is to initiate a dialogue and to discuss whether
the current approach, involving a diverse and heterogeneous set of data,
tools and evaluation methodologies, is adequate, or whether the community
should instead collaborate towards building an integrated ecosystem that
provides better and more sustainable access to data sets, evaluation
workflows, approaches and metrics, and supporting processes such as
annotation and ranking.

The workshop is meant to stimulate a dialogue about the commonalities and
differences of the existing solutions in three areas: (1) tools,
(2) methodologies, (3) data sets. A key question concerns the trade-off
between heterogeneous approaches, which offer a high level of flexibility
but little interoperability, and homogeneous approaches, which offer less
flexibility but higher interoperability. How much flexibility and
interoperability does the MT/HT research community need? How much does it
want?

Topics of interest include:

- MT/HT evaluation methodologies (incl. scoring mechanisms, integrated
approaches)
- Benchmarks for MT evaluation
- Data and annotation formats for the evaluation of MT/HT
- Workbenches, tools, technologies for the evaluation of MT/HT
(incl. specialised workflows)
- Integration of MT/TM and terminology in industrial evaluation scenarios
- Evaluation ecosystems
- Annotation concepts such as MQM and DQF, and their implementation in MT
evaluation processes

We invite contributions on the topics mentioned above and any related topics
of interest. The workshop website provides some additional information.

Important dates
- Publication of the call for papers: 10 December 2015
- Submissions due: 15 February 2016
- Notification of acceptance: 1 March 2016
- Final version of accepted papers: 31 March 2016
- Final programme and online proceedings: 15 April 2016
- Workshop: 24 May 2016 (this event will be a full-day workshop)

Please submit your papers through the workshop's online submission system
before the deadline of 15 February 2016. Accepted papers will be presented
as oral presentations or as posters. All accepted papers will be published
in the workshop proceedings.

Papers should be formatted according to the stylesheet to be provided on the
LREC 2016 website and should not exceed 8 pages, including references and
appendices. Papers should be submitted in PDF format through the submission
system mentioned above.

When submitting a paper, authors will be asked to provide essential
information about resources (in a broad sense, i.e., also technologies,
standards, evaluation kits, etc.) that have been used for the work described
in the paper or are a new result of their research. Moreover, ELRA
encourages all LREC authors to share the described LRs (data, tools,
services, etc.) to enable their reuse and the replicability of experiments
(including evaluation ones).

Programme committee
Nora Aranberri, University of the Basque Country, Spain
Ondrej Bojar, Charles University in Prague, Czech Republic
Aljoscha Burchardt, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Christian Dugast, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Marcello Federico, Fondazione Bruno Kessler (FBK), Italy
Christian Federmann, Microsoft, USA
Rosa Gaudio, Higher Functions, Portugal
Josef van Genabith, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Barry Haddow, University of Edinburgh, UK
Jan Hajic, Charles University in Prague, Czech Republic
Kim Harris, text&form, Germany
Matthias Heyn, SDL, Belgium
Philipp Koehn, Johns Hopkins University, USA, and University of Edinburgh, UK
Christian Lieske, SAP, Germany
Lena Marg, Welocalize, UK
Katrin Marheinecke, text&form, Germany
Matteo Negri, Fondazione Bruno Kessler (FBK), Italy
Martin Popel, Charles University in Prague, Czech Republic
Jörg Porsiel, Volkswagen AG, Germany
Georg Rehm, Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI),
Germany
Rubén Rodriguez de la Fuente, PayPal, Spain
Lucia Specia, University of Sheffield, UK
Marco Turchi, Fondazione Bruno Kessler (FBK), Italy
Hans Uszkoreit, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany

This workshop is a joint activity of the EU projects QT21 and CRACKER.
