
Special Issue - JMUI 2013: From Multimodal Analysis to Real-Time Interactions with Virtual Agents (Journal on Multimodal User Interfaces, JMUI)


Link: http://fastnet.netsoc.ie/ma3_2012/JMUI-SpecialIssue.html
 
Submission Deadline Apr 7, 2013
Notification Due Jun 7, 2013
Final Version Due Sep 7, 2013
 

Call For Papers

(Apologies for multiple reminders)

CALL FOR PAPERS - JOURNAL ON MULTIMODAL USER INTERFACES

*************** Extended deadline for paper submission: April 7, 2013 ***************

Special Issue
From Multimodal Analysis to Real-Time Interactions with Virtual Agents

Guest editors:
Ronald Böck, Otto von Guericke University Magdeburg, Germany (ronald.boeck@ovgu.de)
Francesca Bonin, Trinity College Dublin, Ireland (boninf@tcd.ie)
Nick Campbell, Trinity College Dublin, Ireland
Jens Edlund, KTH Stockholm, Sweden
Iwan de Kok, University of Twente, The Netherlands (i.a.dekok@utwente.nl)
Ronald Poppe, University of Twente, The Netherlands (r.w.poppe@utwente.nl)
David Traum, USC Institute for Creative Technologies, USA


One of the aims in building multimodal user interfaces is to make the interaction between user and system as natural as possible. Arguably the most natural form of interaction we know is the way we interact with other humans. By building virtual agents, we aim to recreate this natural form of interaction in human-machine interaction.

Developing virtual agent systems capable of recreating natural interactions with humans typically involves several steps: recording and analyzing natural interaction data, extracting and recognizing relevant multimodal features, crafting or learning models from these features, generating appropriate behavior in real time based on these models, and evaluating the system in a methodologically sound experiment. Each of these steps presents challenges.

The collection of multimodal natural corpora brings the challenges of eliciting naturalistic behavior, enriching the corpus with reliable ground-truth annotations, and dealing with inter-personal differences in behavior.

Through multimodal analysis of the corpus, the features that must be extracted in real time from the recorded signals need to be identified. Such features include, but are not limited to, speech and its content, eye gaze, prosodic and paralinguistic features, facial expressions, and body movements, as well as higher-level interpretations of these features such as the affective state, personality, mood, or intentions of the user.
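For concreteness, below is a minimal sketch, in Python and assuming the open-source librosa library, of extracting two such prosodic features (pitch and short-time energy) from a single recording; the file name is a hypothetical placeholder and no particular corpus is implied.

    # Sketch only: frame-level pitch (F0) and energy from one recording,
    # assuming librosa; "session01.wav" is a hypothetical placeholder.
    import numpy as np
    import librosa

    y, sr = librosa.load("session01.wav", sr=16000)

    # Fundamental frequency per frame via probabilistic YIN;
    # unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

    # Short-time energy per frame (root mean square).
    rms = librosa.feature.rms(y=y)[0]

    print("mean F0 over voiced frames (Hz):", np.nanmean(f0))
    print("mean RMS energy:", rms.mean())

A real-time system would compute such features incrementally over a live audio stream rather than from a finished file, but the feature definitions are the same.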

By combining these multimodal features and interpretations, the agent needs to reason about, plan, and realize the actions it should perform in response, with the correct timing. These responses include the content and paralinguistic features of its speech, along with appropriate accompanying gestures and facial expressions. Continuous perception, interpretation, reasoning, and generation are required to keep the interaction between the user and the virtual agent as natural and fluent as human-human interaction.
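To make the loop structure concrete, a minimal sketch of such a continuous perception-interpretation-generation cycle running on a fixed frame budget is given below; every component function is a hypothetical stub, not any particular system's architecture.

    # Sketch only: a continuous perceive-interpret-generate loop with a
    # fixed frame budget; all components below are hypothetical stubs.
    import time

    def perceive():
        # Would return the latest multimodal observations (speech, gaze, ...).
        return {"gaze": "at_agent"}

    def interpret(obs):
        # Would fuse features into an interpretation (affect, intent, ...).
        return {"user_is_attending": obs["gaze"] == "at_agent"}

    def plan_and_generate(state):
        # Would select and realize agent behavior with correct timing.
        return "nod" if state["user_is_attending"] else "wait"

    def run_agent(frame_rate_hz=25, max_frames=100):
        frame_period = 1.0 / frame_rate_hz
        for _ in range(max_frames):
            t0 = time.monotonic()
            action = plan_and_generate(interpret(perceive()))
            # Realize `action` here, then stay within the frame budget so
            # the interaction remains continuous and fluent.
            time.sleep(max(0.0, frame_period - (time.monotonic() - t0)))

    run_agent()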

The development and evaluation of such complex, real-time, continuous virtual agent systems also require new research methodologies in terms of experiment design. Many aspects can influence the evaluation of the system as a whole, and individual components of the system may require continuous evaluation as well.

For this special issue, we welcome technical, empirical and overview papers that contribute towards the development, evaluation and understanding of real-time interactions with virtual agents.
Topics include:
• Multimodal analysis and annotation of human-human corpora
• Real-time perception and understanding of speech, facial expression and gestures
• Estimation of user affect, mood and personality
• Models for multimodal generation of speech, facial expressions and body movement
• Real-time dialog management
• Real-time interactions with humanoid robots
• Applications of real-time interactions between humans and virtual agents
• Tools, systems and evaluation methods for the above
• Comparison of human-human and human-agent interactions


Schedule:
Submission deadline: April 7, 2013 (23.59 UTC-11)
Notification of acceptance: June 7, 2013
Final paper submission: September 7, 2013, (23.59 UTC-11)
Publication: November 2013

Author Instructions:
Submissions should be 8-12 pages and must not have been previously published, with the exception that substantial extensions of conference and workshop papers (at least 30% new content) can be considered. Authors are required to follow the Author's Guide for manuscript submission to the Journal on Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193), published by Springer.
During the submission process, please select “REAL-TIME special issue” as article type.
