posted by user: yangl

ICMI-MLMI 2009 : The Eleventh International Conference on Multimodal Interfaces and Workshop on Machine Learning for Multimodal Interaction


When Nov 2, 2009 - Nov 6, 2009
Where Cambridge, MA
Submission Deadline May 22, 2009
Notification Due Jul 20, 2009
Final Version Due Aug 20, 2009
Categories    HCI   AI   NLP   ML

Call For Papers

The Eleventh International Conference on Multimodal Interfaces and the Sixth Workshop on Machine Learning for Multimodal Interaction will jointly take place in the Boston area from November 2-6, 2009. The main aim of ICMI-MLMI 2009 is to advance scientific research in the broad field of multimodal interaction, methods, and systems. The joint conference will focus on major trends and challenges in this area, and will work to identify a roadmap for future research and commercial success. ICMI-MLMI 2009 will feature a single-track main conference with keynote speakers, panel discussions, technical paper presentations, poster sessions, and demonstrations of state-of-the-art multimodal systems and concepts, followed by workshops.


The conference will take place at the MIT Media Lab, widely known for its innovative spirit. Held in Cambridge, Massachusetts, USA, ICMI-MLMI 2009 offers an excellent setting for brainstorming and sharing the latest advances in multimodal interaction, systems, and methods, in a city known as one of the top historical, technological, and scientific centers of the US.

Important dates:

Workshop proposals March 1, 2009
Special Sessions proposals March 1, 2009
Paper submission May 22, 2009
Author notification July 20, 2009
Camera-ready due August 20, 2009
Conference Nov 2-4, 2009
Workshops Nov 5-6, 2009

Topics of interest:

Multimodal and multimedia processing:

Algorithms for multimodal fusion and multimedia fission
Multimodal output generation and presentation planning
Multimodal discourse and dialogue modeling
Generating non-verbal behaviors for embodied conversational agents
Machine learning methods for multimodal processing

Multimodal input and output interfaces:

Gaze and vision-based interfaces
Speech and conversational interfaces
Pen-based interfaces
Haptic interfaces
Interfaces to virtual environments or augmented reality
Biometric interfaces combining multiple modalities
Adaptive multimodal interfaces

Multimodal applications:

Mobile interfaces
Meeting analysis and intelligent meeting spaces
Interfaces to media content and entertainment
Human-robot interfaces and human-robot interaction
Vehicular applications and navigational aids
Computer-mediated human-to-human communication
Interfaces for intelligent environments and smart living spaces
Universal access and assistive computing
Multimodal indexing, structuring and summarization

Human interaction analysis and modeling:

Modeling and analysis of multimodal human-human communication
Audio-visual perception of human interaction
Analysis and modeling of verbal and non-verbal interaction
Cognitive modeling of users of interactive systems

Multimodal data, evaluation, and standards:

Evaluation techniques and methodologies for multimodal interfaces
Authoring techniques for multimodal interfaces
Annotation and browsing of multimodal data
Architectures and standards for multimodal interfaces

Paper Submission:

There are two submission categories: regular papers and short papers. The page limit is 8 pages for regular papers and 4 pages for short papers. The presentation style (oral or poster) will be decided by the committee based on suitability and schedule.

Demo Submission:

Proposals for demonstrations should be submitted electronically to the demo chairs. A two-page description with photographs of the demonstration is required.

Doctoral Spotlight:

Funds are expected from NSF to support the participation of doctoral candidates at ICMI-MLMI 2009, and a spotlight session is planned to showcase ongoing thesis work. Students interested in travel support can submit a short or regular paper as specified above.

Submission & review web site:

Organizing committee

General Co-Chairs:
James L. Crowley, INRIA, Grenoble, France
Yuri A. Ivanov, MERL, Cambridge, USA
Christopher R. Wren, Google, Cambridge, USA

Program Co-Chairs:
Daniel Gatica-Perez, Idiap Research Institute, Martigny, Switzerland
Michael Johnston, AT&T Labs Research, Florham Park, USA
Rainer Stiefelhagen, University of Karlsruhe, Germany

Janet McAndless, MERL, Cambridge, USA

Hervé Bourlard, Idiap Research Institute, Martigny, Switzerland

Student Chair
Rana el Kaliouby, MIT Media Lab, Cambridge, USA

Student Volunteer Chair
Matthew Berlin, MIT Media Lab, Cambridge, USA

Local Arrangements
Clifton Forlines, MERL, Cambridge, USA
Deb Roy, MIT Media Lab, Cambridge, USA
Thanks to Cole Krumbholz, MITRE, Bedford, USA

Sonya Allin , University of Toronto, Canada
Yang Liu, University of Texas at Dallas, USA

Louis-Philippe Morency, University of Southern California, USA

Xilin Chen, Chinese Academy of Sciences, China
Steve Renals, University of Edinburgh, Scotland

Denis Lalanne, University of Fribourg, Switzerland
Enrique Vidal, Polytechnic University of Valencia, Spain

Kenji Mase, Nagoya University, Japan
