
MMAutomotive 2018 : Multimodal Interaction in Automotive Applications


When N/A
Where N/A
Submission Deadline Feb 5, 2018
Notification Due Mar 15, 2018
Final Version Due Apr 28, 2018
Categories    automotive   multimodality   interaction

Call For Papers

Multimodal Interaction in Automotive Applications

With smartphones becoming ubiquitous, pervasive distributed computing is becoming a reality, and the Internet of Things is finding its way into many aspects of our daily lives. Users interact multimodally with their smartphones, and their expectations regarding natural interaction have risen dramatically in recent years. Moreover, users have started to project these expectations onto all kinds of interfaces they encounter in their daily lives. Car manufacturers do not yet fully meet these expectations, since automotive development cycles are still much longer than those of the software industry. The clear trend, however, is that manufacturers are adding technology to cars to deliver on their vision and promise of a safer drive. Multiple modalities are already available in today’s dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze.
In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet driving-related demands as well as non-driving-related needs. For instance, such an assistant can naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route or, even better, plan a stop and ensure that the driver still arrives in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it also considers unbiased reviews, availability, budget, and trouble-free parking, and it notifies all invitees of the meeting time and location. Moreover, multimodality can serve as a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that drivers can focus on their primary task: driving safely.

This is why the biggest innovations in today’s cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of the multimodal interaction in today’s cars, which is shifting away from the GUI as the sole means of interaction. This shift also demands additional effort to establish a mental model for the user: with a plethora of available modalities, each requiring its own mental map, learnability has decreased considerably. Multimodality may also help here to decrease distraction. In this special issue we will present the challenges and opportunities of multimodal interaction for reducing cognitive load and increasing learnability, as well as current research that has the potential to be employed in tomorrow’s cars.
For this special issue, we invite researchers, scientists, and developers to submit contributions that are original and unpublished and have not been submitted to any other journal, magazine, or conference. We expect at least 30% novel content. We are soliciting original research related to multimodal smart and interactive media technologies in areas including, but not limited to, the following:
* In-vehicle multimodal interaction concepts
* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts
* Reducing driver distraction and cognitive load and demand with multimodal interaction
* (Pro-active) in-car personal assistant systems
* Driver assistance systems
* Information access (search, browsing, etc.) in the car
* Interfaces for navigation
* Text input and output while driving
* Biometrics and physiological sensors as a user interface component
* Multimodal affective intelligent interfaces
* Multimodal automotive user-interface frameworks and toolkits
* Naturalistic/field studies of multimodal automotive user interfaces
* Multimodal automotive user-interface standards
* Detecting and estimating user intentions employing multiple modalities

Guest Editors
Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany
Phil Cohen, Voicebox, USA
Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany

Submission Instructions

1-page abstract submission: 05.02.2018
Invitation for full submission: 15.03.2018
Full Submission: 28.04.2018
Notification about acceptance: 15.06.2018
Final article submission: 15.07.2018
Tentative Publication: ~ 09/2018

Companion website:

Authors are requested to follow the instructions for manuscript submission to the Journal of Multimodal User Interfaces and to submit their manuscripts at the following link:
