posted by user: sherzodhakimov

MUWS 2023 : The 2nd International Workshop on Multimodal Human Understanding for the Web and Social Media


Link: https://muws-workshop.github.io/
 
When: Oct 22, 2023
Where: Birmingham, UK
Submission Deadline: Sep 15, 2023
Notification Due: Sep 30, 2023
Final Version Due: Oct 15, 2023
Categories: multimodal understanding, semantic relations, rhetoric in online media, human perception understanding
 

Call For Papers

Multimodal human understanding and analysis is an emerging research area that cuts across several disciplines, including Computer Vision, Natural Language Processing (NLP), Speech Processing, Human-Computer Interaction, and Multimedia. Several multimodal learning techniques have recently shown the benefit of combining multiple modalities in image-text, audio-visual, and video representation learning, as well as in various downstream multimodal tasks. At their core, these methods model the modalities and their complex interactions using large amounts of data, diverse loss functions, and deep neural network architectures. However, many Web and Social Media applications also require modelling the human, including an understanding of human behaviour and perception. This calls for interdisciplinary approaches drawing on the social sciences, semiotics, and psychology. The core challenges are understanding various cross-modal relations, quantifying biases such as social biases, and assessing the applicability of models to real-world problems. Interdisciplinary theories such as semiotics or Gestalt psychology can provide additional insight into perceptual understanding through signs and symbols conveyed via multiple modalities. In general, these theories offer a compelling view of multimodality and perception that can further expand computational research and multimedia applications on the Web and Social Media.

The theme of the MUWS workshop, multimodal human understanding, spans various interdisciplinary challenges, including social bias analysis, multimodal representation learning, detection of human impressions or sentiment, hate speech and sarcasm detection in multimodal data, multimodal rhetoric and semantics, and related topics. The MUWS workshop will be an interactive event featuring keynotes by relevant experts, poster and demo sessions, research presentations, and discussion.

Topics of interest include, but are not limited to:

Modeling human impressions in the context of the Web and Social Media
Cross-modal and semantic relations
Incorporating multi-disciplinary theories such as Semiotics or Gestalt-Theory into multimodal analyses
Measuring and analyzing biases such as cultural bias, social bias, multilingual bias, and related topics in the context of the Web and Social Media
Multimodal human perception understanding
Multimodal sentiment/emotion/sarcasm recognition
Multimodal hate speech detection
Multimodal misinformation detection
Multimodal content understanding and analysis
Multimodal rhetoric in online media
