
PE 2008 : Cross-Framework and Cross-Domain Parser Evaluation


Link: http://lingo.stanford.edu/events/08/pe/
 
When Aug 23, 2008 - Aug 23, 2008
Where Manchester, UK
Submission Deadline May 5, 2008
Notification Due Jun 6, 2008
Categories    NLP
 

Call For Papers

CALL FOR PAPERS

22nd International Conference on Computational Linguistics

Workshop on Cross-Framework and Cross-Domain Parser Evaluation
August 23, 2008 --- Manchester, UK

http://lingo.stanford.edu/events/08/pe/



Background and Motivation
-------------------------

Broad-coverage parsing has come to a point where distinct approaches
can offer (seemingly) comparable performance: statistical parsers
acquired from the PTB; data-driven dependency parsers; `deep' parsers
trained off enriched treebanks (in linguistic frameworks like CCG,
HPSG, or LFG); and hybrid `deep' parsers, employing hand-built grammars
in, for example, HPSG, LFG, or LTAG. Evaluation against trees in the
WSJ section of the Penn Treebank (PTB) has helped advance parsing
research over the course of the past decade. Despite some scepticism,
the crisp and, over time, stable task of maximizing ParsEval metrics
over PTB trees has served as the dominant benchmark. However, modern
treebank parsers still restrict themselves to only a subset of PTB
annotation; there is reason to worry about the idiosyncrasies of this
particular corpus; it remains unknown how much the ParsEval metric (or
any intrinsic evaluation) can inform NLP application developers; and
PTB-style analyses leave a lot to be desired in terms of linguistic
information.
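The ParsEval metric mentioned above scores a parse against a gold tree
by comparing labeled constituent spans. The following sketch is not
part of the call; it is a minimal illustration, with trees encoded as
sets of (label, start, end) spans and an invented three-word example:

```python
# Minimal illustrative sketch of ParsEval-style labeled bracketing.
# Trees are encoded as sets of (label, start, end) spans; the example
# sentence and spans are invented for illustration.

def parseval(gold_spans, test_spans):
    """Return (precision, recall, f1) over labeled constituent spans."""
    matched = len(gold_spans & test_spans)
    precision = matched / len(test_spans)
    recall = matched / len(gold_spans)
    f1 = (2 * precision * recall / (precision + recall)) if matched else 0.0
    return precision, recall, f1

# Gold analysis of "the dog barks" vs. parser output that mislabels
# the verbal constituent.
gold = {("S", 0, 3), ("NP", 0, 2), ("VP", 2, 3)}
test = {("S", 0, 3), ("NP", 0, 2), ("PP", 2, 3)}

p, r, f = parseval(gold, test)
print(p, r, f)  # all three equal 2/3 here, since 2 of 3 spans match
```

Note that such span matching is exactly what ties the metric to
PTB-style constituency: dependency or framework-specific analyses have
no brackets to count, which is one motivation for the workshop.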

This workshop aims to bring together developers of broad-coverage
parsers who are interested in questions of target representations and
cross-framework and cross-domain evaluation and benchmarking. From
informal discussions that the co-organizers had among themselves and
with colleagues, it seems evident that there is comparatively broad
awareness of current issues in parser evaluation, and a lively interest
in detailed exchange of experience (and beliefs). Specifically, the
organizers hope to attract representatives from diverse parsing
approaches and frameworks, ranging from `traditional' treebank parsing,
through data-driven dependency parsing, to parsing in specific
linguistic frameworks. Quite likely for the first time in the history of these
approaches, there now exist large, broad-coverage, parsing systems
representing diverse traditions that can be applied to running text,
often producing comparable representations. In our view, these recent
developments present a new opportunity for re-energizing parser
evaluation research.


Call for Papers
---------------

The workshop organizers invite papers on all aspects of parser
evaluation, qualitative and quantitative, including but not limited to:

+ in-depth or contrastive evaluation of parsing systems;
+ methodology, test data, and technology for parser evaluation;
+ reflections on existing standards and evaluation metrics;
+ correlations between intrinsic and extrinsic parser evaluation;
+ proposals for new target representations or success measures.

Given the general theme of this workshop, submissions that discuss
aspects of cross-framework, cross-domain, or cross-linguistic parser
evaluation are especially welcome.

One of the workshop goals is to establish an improved shared knowledge
among participants of the strengths and weaknesses of extant annotation
and evaluation schemes. In order to create a joint focus and in-depth
discussion, there will be a `lightweight' shared task. We will select
50 sentences (ten obligatory, the rest optional) for which PTB, GR,
and PropBank (and possibly other) annotations are available, and invite
contributors to scrutinize the existing gold-standard representations
contrastively, identify perceived deficiencies, and sketch how these
could be addressed. As an
optional component, participants in the shared task are welcome to
include `native', framework-specific output representations and actual
results for a parsing system of their choice (be it their own or not)
in the contrastive study. In either case, submissions to the shared
task should aim to reflect on the nature of different representations,
highlight which additional distinctions are made in either scheme, and
argue why these are useful (for some task) or unmotivated (in general).
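One simple way to surface the distinctions the shared task asks about
is to encode each scheme's analysis as a set of relation triples and
inspect the set differences. The sketch below is illustrative only;
the sentence, the relation labels, and both "gold standards" are
invented, not taken from the actual shared-task data:

```python
# Illustrative contrastive comparison of two hypothetical annotation
# schemes for the same sentence, each encoded as a set of
# (relation, head, dependent) triples. All labels are invented.

scheme_a = {("subj", "barks", "dog"), ("det", "dog", "the")}
scheme_b = {("ncsubj", "barks", "dog"), ("det", "dog", "the")}

# Agreements and scheme-specific distinctions fall out of set algebra.
shared = scheme_a & scheme_b
only_a = scheme_a - scheme_b
only_b = scheme_b - scheme_a

print(shared)  # distinctions both schemes make
print(only_a, only_b)  # distinctions made in only one scheme
```

In practice the interesting cases are exactly the asymmetries: a label
present in only one scheme either encodes a useful extra distinction
(here, say, a non-clausal subject) or an unmotivated one.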

Please see the workshop web pages for detailed submission information:

http://lingo.stanford.edu/events/08/pe/


Important Dates
---------------

+ Initial Call for Papers March 1
+ Shared Task Release March 22
+ Paper Submission Deadline May 5
+ Notification of Acceptance June 6
+ Camera-Ready Papers Deadline July 1
+ One-Day Workshop August 23


Workshop Organizers and Programme Committee
-------------------------------------------

The workshop aims to appeal to a wide range of researchers across
frameworks, hence it has a relatively large and diverse group of
organizers. The co-organizers will jointly make all decisions
regarding the workshop form and programme, and most of them are
expected to take part in the workshop itself.

+ Johan Bos, University of Rome `La Sapienza' (Italy)
+ Edward Briscoe, University of Cambridge (UK)
+ Aoife Cahill, University of Stuttgart (Germany)
+ John Carroll, University of Sussex (UK)
+ Stephen Clark, Oxford University (UK)
+ Ann Copestake, University of Cambridge (UK)
+ Dan Flickinger, Stanford University (USA)
+ Josef van Genabith, Dublin City University (Ireland)
+ Julia Hockenmaier, University of Illinois at Urbana-Champaign (USA)
+ Aravind Joshi, University of Pennsylvania (USA)
+ Ronald Kaplan, Powerset, Inc. (USA)
+ Tracy Holloway King, PARC (USA)
+ Sandra Kuebler, Indiana University (USA)
+ Dekang Lin, Google Inc. (USA)
+ Jan Tore Lønning, University of Oslo (Norway)
+ Christopher Manning, Stanford University (USA)
+ Yusuke Miyao, University of Tokyo (Japan)
+ Joakim Nivre, Växjö and Uppsala Universities (Sweden)
+ Stephan Oepen, University of Oslo (Norway) and CSLI Stanford (USA)
+ Kenji Sagae, Tokyo University (Japan)
+ Nianwen Xue, University of Colorado (USA)
+ Yi Zhang, DFKI GmbH and Saarland University (Germany)

Please see the workshop web pages for additional contact information.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++ Universitetet i Oslo (IFI); Boks 1080 Blindern; 0316 Oslo; (+47) 2284 0125
+++ CSLI Stanford; Ventura Hall; Stanford, CA 94305; (+1 650) 723 0515
+++ --- oe@ifi.uio.no; oe@csli.stanford.edu; stephan@oepen.net ---
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
