
LAW-X 2016: Call for Papers: The 10th Linguistic Annotation Workshop at ACL 2016 (The LAW X)


When Aug 11, 2016 - Aug 12, 2016
Where Berlin
Submission Deadline May 8, 2016
Notification Due Jun 5, 2016
Final Version Due Jun 22, 2016
Categories    linguistic annotation   natural language processing   computational linguistics

Call For Papers


The 10th Linguistic Annotation Workshop (The LAW X)

Held in conjunction with ACL/HLT 2016, Berlin/Germany, August 2016
Sponsored by the ACL Special Interest Group on Annotation (SIGANN)


Linguistic annotation of natural language corpora is the backbone of supervised methods for statistical natural language processing. It also provides valuable data for evaluation of both rule-based and supervised systems and can help formalize and study linguistic phenomena.

The LAW provides a forum for presentation and discussion of innovative research on all aspects of linguistic annotation, including creation/evaluation of annotation schemes, methods for automatic and manual annotation, use and evaluation of annotation software and frameworks, representation of linguistic data and annotations, evaluation of manual and automatically-produced annotations, etc.

We encourage submission of papers relating to this year's theme: Evaluation of Annotation Quality.

This special theme examines current practice in the evaluation of linguistic annotations, along with its successes and failures, by asking questions such as: How is the community currently measuring inter-annotator agreement, and are there sounder ways to measure it? How can we estimate the annotation quality of existing resources, and what can be done to document annotated data so that others can assess its reliability?
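To make the measurement question concrete, below is a minimal sketch of one widely used agreement statistic, Cohen's kappa, which corrects raw (observed) agreement between two annotators for the agreement expected by chance from their label distributions. The function name, the toy part-of-speech labels, and the two annotators are illustrative assumptions, not part of this call.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap given each annotator's label marginals.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical annotators tagging ten tokens as noun (N) or verb (V).
a = ["N", "N", "V", "V", "N", "V", "N", "N", "V", "N"]
b = ["N", "V", "V", "V", "N", "V", "N", "N", "N", "N"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Note how the chance correction matters: the two annotators agree on 8 of 10 items (0.8 raw agreement), yet kappa is only 0.583 because their shared preference for "N" makes much of that agreement expected by chance. Debates over which chance model is appropriate (kappa vs. pi vs. alpha) are exactly the kind of topic this theme invites.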

We encourage submissions addressing the following topics related to the special theme:
How agreement is measured in various (new or existing) annotation projects, and what the different scores tell us in each case.
Good acceptance thresholds for different annotation tasks and metrics, and/or how to determine them.
Previously proposed but not widely used measures for agreement or annotation quality.
Proposals for quantitative or qualitative methods to measure agreement or annotation quality.
Proposals for documentation of published resources to support their evaluation, means and methods to achieve community evaluation of linguistically-annotated resources, etc.

We welcome submissions of long and short papers, posters, and demonstrations, relating to the topics of the special theme and to any aspect of linguistic annotation, including:

== Annotation procedures ==
- Innovative automated and manual strategies for annotation.
- Machine learning and knowledge-based methods for automation of corpus annotation.
- Creation, maintenance, and interactive exploration of annotation structures and annotated data.

== Annotation evaluation ==
- Inter-annotator agreement and other evaluation metrics and strategies.
- Qualitative evaluation of linguistic representation.

== Annotation access and use ==
- Representation formats/structures for merged annotations of different phenomena, and means to explore/manipulate them.
- Linguistic considerations for merging annotations of distinct phenomena.

== Annotation schemes, guidelines and standards ==
- New and innovative annotation schemes, comparison of annotation schemes.
- Best practices for annotation procedures and/or development and documentation of annotation schemes.
- Interoperability of annotation formats and/or frameworks among different systems as well as different tasks, frameworks, modalities, and languages.
- Results on the application and evaluation of standards for (linguistic) annotation.

== Annotation software and frameworks ==
- Development, evaluation and/or innovative use of annotation software frameworks.


Submissions should report original and unpublished research on topics of interest for the workshop. Accepted papers are expected to be presented at the workshop, and will be published in the workshop proceedings. They should emphasize obtained results rather than intended work, and should indicate clearly the state of completion of the reported results.

A paper accepted for presentation at the workshop must not have been, and must not be, presented at any other meeting with publicly available proceedings.

The maximum length is eight (8) pages of content for long papers and four (4) pages of content for short papers, posters, and demonstrations, plus up to two (2) pages of references.
Reviewing of papers will be double-blind. Therefore, the paper must not include the authors' names and affiliations, or self-references that reveal the authors' identity; e.g., "We previously showed (Smith, 1991) ..." should be replaced with citations such as "Smith (1991) previously showed ...". Papers that do not conform to these requirements will be rejected without review.
Authors of papers that have been or will be submitted to other meetings or publications must provide this information to the workshop co-chairs. Authors of accepted papers must notify the program chairs within 10 days of acceptance if the paper is withdrawn for any reason.

If you have any questions, please feel free to contact the program co-chairs.

All submission deadlines are at 11:59 p.m. PST
8 May 2016: Workshop Paper Due Date
5 June 2016: Notification of Acceptance
22 June 2016: Camera-ready papers due
11-12 August 2016: Workshop Dates

Workshop Chairs
Katrin Tomanek (OpenTable)
Annemarie Friedrich (Saarland University)

Organizing Committee
Stefanie Dipper (Ruhr University Bochum)
Annemarie Friedrich (Saarland University)
Chu-Ren Huang (The Hong Kong Polytechnic University)
Nancy Ide (Vassar College)
Lori Levin (Carnegie Mellon University)
Adam Meyers (New York University)
Antonio Pareja-Lora (Universidad Complutense de Madrid / ATLAS, UNED)
Massimo Poesio (University of Trento)
Sameer Pradhan (Boulder Learning, Inc.)
Ines Rehbein (Saarland University)
Manfred Stede (University of Potsdam)
Katrin Tomanek (OpenTable)
Fei Xia (University of Washington)
Nianwen Xue (Brandeis University)
Heike Zinsmeister (University of Hamburg)

Program Committee
Adam Meyers
Alexis Palmer
Andreas Witt
Ani Nenkova
Ann Bies
Anna Nedoluzhko
Antonio Pareja-Lora
Aravind Joshi
Kim Gerdes
Archna Bhatia
Barbara Plank
Bonnie Webber
Caroline Sporleder
Christian Chiarcos
Christiane Fellbaum
Chu-Ren Huang
Collin Baker
Dirk Hovy
Djamé Seddah
Els Lefever
Fei Xia
Heike Zinsmeister
Ines Rehbein
Joel Tetreault
James Pustejovsky
Josef Ruppenhofer
Lori Levin
Manfred Pinkal
Manfred Stede
Markus Dickinson
Martha Palmer
Massimo Poesio
Nancy Ide
Nathan Schneider
Nianwen Xue
Nicoletta Calzolari
Omri Abend
Özlem Çetinoğlu
Rebecca Passonneau
Sameer Pradhan
Sandra Kübler
Stefanie Dipper
Tomaž Erjavec
Udo Hahn
Valia Kordoni
