
AV+EC 2015 : AV+EC 2015 - 5th International Audio/Visual+ Emotion Challenge and Workshop


Link: http://sspnet.eu/avec2015
 
When Oct 26, 2015 - Oct 30, 2015
Where Brisbane, Australia
Submission Deadline Jul 1, 2015
Notification Due Jul 16, 2015
Final Version Due Jul 31, 2015
Categories    affective computing   multimodality   feature extraction   machine learning
 

Call For Papers

Dear colleagues,



It is our great pleasure to announce the call for papers for the 5th International Audio/Visual+ Emotion Challenge and Workshop (AV+EC 2015), organised in conjunction with ACM Multimedia 2015. The CFP follows below; apologies for any cross-posting.



_____________________________________________________________


Call for Participation / Papers



5th International Audio/Visual+ Emotion Challenge and Workshop (AV+EC 2015)



in conjunction with ACM Multimedia 2015, October 26-30, Brisbane, Australia



http://sspnet.eu/avec2015/

http://www.acmmm.org/2015/



Register and download data and features:

http://sspnet.eu/avec2015/challenge-guidelines/



_____________________________________________________________


Scope

The Audio/Visual Emotion Challenge and Workshop (AV+EC 2015), “Bridging Across Audio, Video and Physio”, will be the fifth competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and, for the first time, physiological emotion analysis, with all participants competing under strictly the same conditions.
 
The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video and physiological emotion recognition communities. It will compare the relative merits of the three approaches to emotion recognition under well-defined and strictly comparable conditions, and establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can deal with fully naturalistic behaviour in large volumes of unsegmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces have to face in the real world.
 
We are calling for teams to participate in a Challenge of fully-continuous emotion detection from audio, video or physiological data, or any combination of these three modalities. The RECOLA multimodal corpus of remote and collaborative affective interactions will be used as the benchmark database. Emotion will have to be recognised as time-continuous, continuous-valued dimensional affect in two dimensions: arousal and valence.
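To illustrate what time-continuous dimensional prediction involves (a minimal sketch only, not part of the official Challenge baseline): a regressor maps per-frame features from any modality to an arousal and a valence value at every time step. The feature dimensionality, frame count and ridge regulariser below are arbitrary illustrative assumptions, and the data is synthetic.

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Synthetic stand-in for per-frame descriptors (audio, video or
# physiological features); all dimensions are hypothetical.
rng = np.random.default_rng(0)
n_frames, n_feats = 1000, 20                # e.g. 25 fps over 40 s
X = rng.standard_normal((n_frames, n_feats))
Y = rng.standard_normal((n_frames, 2))      # gold arousal/valence traces

W = fit_ridge(X, Y)
pred = X @ W    # one (arousal, valence) pair per frame: time-continuous output
print(pred.shape)  # (1000, 2)
```

The key property of the task is visible in the output shape: predictions are produced for every frame of the recording, not for pre-segmented utterances or clips.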
 
Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular work that addresses the differences between audio, video and physiological processing of emotive data, and the issues concerning combined audio-visual-physiological emotion recognition.


Topics include, but are not limited to:
Participation in the Challenge
 
Audio/Visual/Physiological Emotion Recognition:
Audio-based Emotion Recognition
Video-based Emotion Recognition
Physiology-based Emotion Recognition
Synchrony of Non-Stationary Time Series
Multi-task learning of Multiple Dimensions
Weakly Supervised Learning
Agglomeration of Learning Data
Context in Emotion Recognition
Multiple Rater Ambiguity and Asynchrony
 
Application:
Multimedia Coding and Retrieval
 


___________________________________________


Important Dates

Paper submission: July 1, 2015
Notification of acceptance: July 16, 2015
Final challenge result submission: July 24, 2015
Camera-ready paper: July 31, 2015
Workshop: October 26 or 30, 2015


___________________________________________





Organisers

Fabien Ringeval (Tech. Univ. Munich, Germany)
Björn Schuller (Imperial College London / Univ. Passau, UK / Germany)

Michel Valstar (University of Nottingham, UK)

Roddy Cowie (Queen's University Belfast, UK)

Maja Pantic (Imperial College London, UK)





___________________________________________
Program Committee

Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Fang Chen, NICTA, Australia
Mohamed Chetouani, Université Pierre et Marie Curie, France
Jeffrey Cohn, University of Pittsburgh, USA
Laurence Devillers, Université Paris-Sud, France
Julien Epps, University of New South Wales, Australia
Anna Esposito, University of Naples, Italy
Roland Goecke, University of Canberra, Australia
Jarek Krajewski, Universität Wuppertal, Germany
Marc Mehu, Webster Vienna Private University, Austria
Louis-Philippe Morency, Carnegie Mellon University, USA
Stefan Scherer, University of Southern California, USA
Stefan Steidl, Universität Erlangen-Nuremberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Matthew Turk, University of California, USA
Stefanos Zafeiriou, Imperial College London, UK



Please visit our website http://sspnet.eu/avec2015 regularly for more information, and please excuse any cross-postings.




Thank you very much and all the best,



Fabien Ringeval, Björn Schuller, Michel Valstar, Roddy Cowie and Maja Pantic
