DiSpoL 2015 : Identification and Annotation of Discourse Relations in Spoken Language


When Oct 1, 2015 - Oct 2, 2015
Where Saarbrücken, Germany
Submission Deadline Jul 15, 2015
Notification Due Aug 1, 2015
Categories: discourse, spoken language, annotation, discourse relations

Call For Papers

Call for Abstracts

TextLink Workshop: "Identification and Annotation of Discourse Relations in Spoken Language" (DiSpoL 2015)

Saarbrücken, Germany, October 1-2, 2015

Invited speaker: Jacob Eisenstein (Georgia Institute of Technology)

Important dates

Submission of abstracts 15 July 2015
Notification of acceptance 01 August 2015
Registration 15 August 2015
DiSpoL workshop October 1st and 2nd in Saarbrücken/Germany

The DiSpoL workshop aims to bring together researchers from different areas working on theoretical and methodological issues in discourse and corpus annotation.

Effective discourse in any language is characterised by clear relations between sentences and a coherent structure. But languages vary in how relations and structure are signalled, and the same is true of different text types and modes. While most discourse-annotated resources are based on written text, less work has been done on annotating and investigating discourse relations in spoken language. Yet it is exactly this domain in which we expect crucial differences concerning the realisation of discourse relations and discourse structure.

The DiSpoL workshop will focus on the identification and annotation of discourse relations in spoken language and in spoken-like text types such as computer-mediated communication or user-generated content from social media. This will raise questions concerning additional properties (and classes) of discourse-relational devices (DRDs) present in spoken data that are not captured by frameworks developed for written texts. How can these be identified and annotated in terms of categories designed for written language?
The overall goal of the workshop is the development of a unified annotation framework for DRDs that will increase the interoperability of existing resources (such as the Penn Discourse Treebank, the RST Discourse Treebank, ...) and that is applicable to different languages and text types, both spoken and written.

We invite the submission of abstracts that address research on DRDs in spoken (or spoken-like) language in one of the following areas:

* functional or formal classifications
* features relevant to the description and classification of DRDs in spoken(-like) language
* multilingual analysis of DRDs in spoken(-like) language
* psycholinguistic/experimental approaches to the investigation of discourse relations in spoken(-like) language

Selected presentations will contribute to the discussion of these issues at the workshop.

Submission Information

We invite the submission of abstracts of up to one page of content (excluding references). All submissions must be in PDF format. We will be using the EasyChair conference system to manage submissions.
