MuSe 2021: The 2nd International Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop @ ACM Multimedia 2021, October 2021, Chengdu, China
Call For Papers
MuSe 2021 focuses on multimodal sentiment/emotion recognition of user-generated content and in stress-induced situations: https://www.muse-challenge.org/muse2021. This year, we focus on audio-visual and textual modalities, as well as biological signals (including heart rate and galvanic skin response).
== CHALLENGES ==
1. Multimodal Continuous Emotions in-the-Wild Sub-challenge (MuSe-Wilder)
Predicting the level of emotional dimensions (arousal, valence) in a time-continuous manner from audio-visual recordings.
2. Multimodal Sentiment Sub-challenge (MuSe-Sent)
Predicting discrete intensity classes of emotion, based on valence and arousal, for segments of audio-visual recordings.
3. Multimodal Emotional Stress Sub-challenge (MuSe-Stress)
Predicting the level of emotional arousal and valence in a time-continuous manner from audio-visual recordings of people in a stress-induced situation.
4. Multimodal Physiological-Arousal Sub-challenge (MuSe-Physio)
Predicting the level of psycho-physiological arousal, annotated as a fusion of a) human annotations and b) galvanic skin response (also known as Electrodermal Activity, EDA) signals from stressed participants, as a regression task. Audio-visual recordings, as well as further biological signals (heart rate and respiration), are provided for modelling.
== Datasets ==
- MuSe-CaR: The MuSe-Wilder and MuSe-Sent sub-challenges use the MuSe-CaR database, a large (~40 h) multimodal dataset gathered in-the-wild to further the understanding of multimodal sentiment analysis, e.g., the emotional engagement that takes place during product reviews (here, automobile reviews), where sentiment is linked to a topic or entity.
- Ulm-TSST: For MuSe-Physio and MuSe-Stress, the novel Ulm-TSST database provides a multimodal dataset of emotion and mental well-being. It includes biological signals, such as Electrocardiogram (ECG), Electrodermal Activity (EDA), respiration, and heart rate (BPM), as well as continuous arousal and valence annotations. With 105 participants aged between 18 and 39 years, a total of 10 hours of data was collected.
The test set labels are held back by the organizers for evaluation.
== Features ==
Besides the raw data, video (Facial Action Units, VGGFace, Xception), audio (VGGish, EGeMAPS, DeepSpectrum), and text (BERT) features are provided.
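As a loose illustration of how such pre-extracted multimodal features could be combined for a continuous regression target, the sketch below concatenates per-timestep feature vectors (early fusion) and fits a ridge regressor via the closed-form normal equations. All array shapes and the synthetic data are placeholders for illustration only, not the challenge's actual feature dimensions, file formats, or baseline method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-timestep features (dimensions are illustrative only):
# e.g., EGeMAPS-like audio, VGGFace-like video, BERT-like text embeddings.
audio = rng.normal(size=(500, 88))
video = rng.normal(size=(500, 512))
text = rng.normal(size=(500, 768))
arousal = rng.normal(size=500)  # synthetic continuous target, stand-in for gold annotations

# Early fusion: concatenate modalities along the feature axis.
X = np.concatenate([audio, video, text], axis=1)

# Ridge regression in closed form: w = (X^T X + lambda * I)^{-1} X^T y
lam = 1.0
d = X.shape[1]
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ arousal)
pred = X @ w  # one continuous prediction per timestep
```

Participants are of course free to use any modelling approach; this merely shows one common way of exploiting several provided feature sets jointly.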
== How to Participate ==
Instructions are available at https://www.muse-challenge.org/participate.
Data, features, and a baseline model (code and weights) are available and described in a preliminary paper.
== Organisers ==
Björn W. Schuller (Imperial College London / audEERING, UK)
Eva-Maria Meßner (University of Ulm, GER)
Erik Cambria (Nanyang Technological University / SenticNet, SG)
Guoying Zhao (University of Oulu, FIN)
Lukas Stappen (University of Augsburg / BMW Group, GER)