AV+EC 2015 - 5th International Audio/Visual+ Emotion Challenge and Workshop


When Oct 26, 2015 - Oct 30, 2015
Where Brisbane, Australia
Submission Deadline Jul 1, 2015
Notification Due Jul 16, 2015
Final Version Due Jul 31, 2015
Categories    affective computing   multimodality   feature extraction   machine learning

Call For Papers

Dear colleagues,

It is our great pleasure to announce the opening of the Call for Papers for the 5th International Audio/Visual+ Emotion Challenge and Workshop (AV+EC 2015), organised in conjunction with ACM Multimedia 2015. The CFP follows below; apologies for potential cross-posting.


Call for Participation / Papers

5th International Audio/Visual+ Emotion Challenge and Workshop (AV+EC 2015)

in conjunction with ACM Multimedia 2015, October 26-30, Brisbane, Australia

Register and download data and features:



The Audio/Visual Emotion Challenge and Workshop (AV+EC 2015) “Bridging Across Audio, Video and Physio” will be the fifth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and – for the first time – physiological emotion analysis, with all participants competing under strictly the same conditions.
The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video and physiological emotion recognition communities, in order to compare the relative merits of the three approaches under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can deal with fully naturalistic behaviour in large volumes of unsegmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces have to face in the real world.
We are calling for teams to participate in a Challenge of fully continuous emotion detection from audio, video or physiological data, or any combination of these three modalities. The RECOLA multimodal corpus of remote and collaborative affective interactions will be used as the benchmark database. Emotion will have to be recognised as time-continuous, continuous-valued dimensional affect in two dimensions: arousal and valence.
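
For orientation, the sketch below illustrates what time-continuous dimensional affect regression on pre-extracted frame-level features might look like. Everything in it is an assumption for illustration only: the synthetic data, the feature dimensionality, the ridge regressor and the concordance correlation coefficient used for scoring are not taken from the official challenge protocol or baseline.

# Minimal sketch of frame-level dimensional affect regression, assuming
# per-frame features and a gold-standard arousal trace are available.
# The random data, feature size and ridge regressor are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge


def concordance_cc(y_true, y_pred):
    """Concordance correlation coefficient, a metric commonly used to score
    agreement (in both correlation and scale) between continuous emotion traces."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)


# Hypothetical data: 1000 training frames of 50-dimensional features and a
# continuous arousal trace in [-1, 1]; replace with real challenge data.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 50)), rng.uniform(-1, 1, 1000)
X_test, y_test = rng.normal(size=(200, 50)), rng.uniform(-1, 1, 200)

# One regressor per affective dimension (arousal shown here; valence analogous).
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("CCC (arousal):", concordance_cc(y_test, model.predict(X_test)))
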
Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular work that addresses the differences between audio, video and physiological processing of emotional data, and the issues involved in combined audio-visual-physiological emotion recognition.
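
The fusion question raised above can likewise be sketched minimally. The decision-level (late) fusion below, including the weighted average and the example weights, is an illustrative assumption rather than a prescribed or official fusion scheme; it merely shows how per-frame predictions from the three modalities could be combined. Feature-level (early) fusion, i.e. concatenating audio, video and physiological features before regression, is the obvious alternative design choice.

# Minimal sketch of late (decision-level) fusion across the three modalities,
# assuming each unimodal system already produced a per-frame prediction of the
# same length; the weighted average and the weights are illustrative only.
import numpy as np


def late_fusion(pred_audio, pred_video, pred_physio, weights=(0.4, 0.4, 0.2)):
    """Fuse per-frame dimensional predictions by a weighted average."""
    preds = np.vstack([pred_audio, pred_video, pred_physio])
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * preds).sum(axis=0) / w.sum()


# Hypothetical per-frame arousal predictions from three unimodal systems.
rng = np.random.default_rng(1)
fused = late_fusion(rng.uniform(-1, 1, 200),
                    rng.uniform(-1, 1, 200),
                    rng.uniform(-1, 1, 200))
print(fused.shape)  # (200,)
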

Topics include, but are not limited to:
Participation in the Challenge
Audio/Visual/Physiological Emotion Recognition:
    Audio-based Emotion Recognition
    Video-based Emotion Recognition
    Physiology-based Emotion Recognition
Synchrony of Non-Stationary Time Series
Multi-task learning of Multiple Dimensions
Weakly Supervised Learning
Agglomeration of Learning Data
Context in Emotion Recognition
Multiple Rater Ambiguity and Asynchrony
Multimedia Coding and Retrieval


Important Dates

Paper submission: July 1, 2015
Notification of acceptance: July 16, 2015
Final challenge result submission: July 24, 2015
Camera-ready paper: July 31, 2015
Workshop: October 26 or 30, 2015



Organisers

Fabien Ringeval (Tech. Univ. Munich, Germany)

Björn Schuller (Imperial College London, UK / University of Passau, Germany)

Michel Valstar (University of Nottingham, UK)

Roddy Cowie (Queen's University Belfast, UK)

Maja Pantic (Imperial College London, UK)

Program Committee

Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Fang Chen, NICTA, Australia
Mohamed Chetouani, Université Pierre et Marie Curie, France
Jeffrey Cohn, University of Pittsburgh, USA
Laurence Devillers, Université Paris-Sud, France
Julien Epps, University of New South Wales, Australia
Anna Esposito, University of Naples, Italy
Roland Goecke, University of Canberra, Australia
Jarek Krajewski, Universität Wuppertal, Germany
Marc Mehu, Webster Vienna Private University, Austria
Louis-Philippe Morency, Carnegie Mellon University, USA
Stefan Scherer, University of Southern California, USA
Stefan Steidl, University of Erlangen-Nuremberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Matthew Turk, University of California, Santa Barbara, USA
Stefanos Zafeiriou, Imperial College London, UK

Please visit our website regularly for more information, and excuse any cross-postings.

Thank you very much and all the best,

Fabien Ringeval, Björn Schuller, Michel Valstar, Roddy Cowie and Maja Pantic
