
SLTAT 2011 : International Workshop on Sign Language Translation and Avatar Technology


Link: http://embots.dfki.de/SLTAT/index.html
 
When Jan 10, 2011 - Jan 11, 2011
Where Berlin
Submission Deadline TBD
Categories    HCI   sign language
 

Call For Papers

********************************************************************************
International Workshop on Sign Language Translation and Avatar Technology (SLTAT)
********************************************************************************

Time: 10-11 Jan 2011
Place: Federal Ministry of Labour and Social Affairs, Berlin

DESCRIPTION:
Recent advances in virtual character technology and a common trend toward the 3D internet have the potential to achieve full accessibility for the deaf on the internet and to provide key tools to facilitate participation in a hearing world. Prior work on the topic of sign language translation using avatars has explored an impressive range of methods in various languages and application domains.

Future projects will profit from a joint effort of international experts to look back on the work done and to identify and specify the remaining problems (or those only partially solved) in specific areas, such as symbolic translation and sign language animation. The motivation is threefold: first, discussing problems in detail will reveal potential solutions; second, it will lead to an exchange of ideas across sign language boundaries; third, a common agreement on the problems will hopefully lead to greater comparability of results and even objective benchmarks.

In this workshop, we want to focus on two core challenges: (1) symbolic translation and (2) sign language animation with avatars. We would like participants to address one of these challenges by answering the questions listed below and, for challenge 2, additionally submitting a video. The format of the workshop will be single-track, organized in three stages for each challenge: (A) presentation, (B) discussion and (C) synthesis of ideas. We hope to arrive at tangible results that can be published in a volume or journal after the workshop.

During the workshop, we would like participants to give a short presentation (20 mins, plus 5 mins if a video is shown), covering both their own work (50%) and, in particular, the challenges (50%). Please prepare your abstracts accordingly.

The questions we would like participants to address are:

CHALLENGE 1: SYMBOLIC TRANSLATION
In this area we want to discuss possible approaches for translating a spoken language sentence (in its written form) to a signed utterance representation, e.g. a sequence of glosses.

- How much linguistics and lexicography is required?
- How much world knowledge is required?
- Do you use a corpus to inform the process, and what does the corpus have to look like?
- What triggers the use of sign-specific means of expression, such as productive signs and classifier constructions, or constructed action?
- What are good evaluation measures?
- How much detail must the target signed utterance representation contain?

CHALLENGE 2: SIGN LANGUAGE ANIMATION WITH AVATARS
In this area we want to explore the capabilities of the avatars currently in use and arrive at a definitive list of requirements and future research efforts needed to produce comprehensible and natural-looking avatars.

- What are requirements for the animation representation language?
- Which degree of anthropomorphism/photorealism is useful for signing avatars?
- What sensory information from the user (location, gaze, ...) is needed for interactive setups?
- What are relevant evaluation measures for avatars and how are they operationalized?

For the following set of questions, we ask participants targeting Challenge 2 to submit a video that showcases these aspects using their current avatar. This will allow a comprehensive comparison of state-of-the-art technology:

- How does your avatar handle directed signs?
- How does your avatar realize constructed action and constructed dialogue?
- How does your avatar handle multi-party dialogue?
- How much emotional expression can your avatar handle and in which contexts?
- How flexible is the face/mouth and how much flexibility is required?
(e.g. mouthing, visibility of teeth and tongue, puffed-up cheeks, eyebrows, wrinkles)

The workshop is partially funded by the German Federal Ministry of Labour and Social Affairs; participation in the workshop is therefore free of charge. However, we will not be able to fund travel or accommodation. We ask invitees to notify us of their participation by Dec 08, 2010, specifying the number of people coming from their group/lab.

***** Important Dates *****

Reply to invitation Dec 08, 2010
Abstract & video submission Dec 22, 2010
Final programme Jan 05, 2011
Workshop Jan 10-11, 2011

Website: http://embots.dfki.de/SLTAT

***** Submission/Contact *****
Please send your reply/submissions to Alexis Heloir (DFKI) under alexis.heloir@dfki.de (for large videos, please get in touch with him to arrange the submission).

***** Organizers *****
Michael Kipp (DFKI)
Alexis Heloir (DFKI)
Thomas Hanke (University of Hamburg)
