VTU 2022: The AAAI-2022 Workshop on Video Transcript Understanding
Link: http://vtuworkshop.github.io
Call For Papers | |||||||||||||
Thanks to the development of the Internet and video services such as YouTube, Facebook, and Twitch, people can easily share their own videos across continents on a daily basis. Along with books, these videos have become a new source of knowledge. However, the quality of the information in these videos is sometimes questionable: they may contain unintentional or intentional misinformation and political bias.
Moreover, the COVID-19 pandemic has resulted in the widespread adoption of remote working, remote learning, and remote conferencing. These remote environments demand many new applications for efficient video transcript understanding, such as meeting recording understanding, quality assurance in call centers, and automatic test scoring in educational testing. Recent advancements in methods and resources for speech recognition have also created more research opportunities around video transcript understanding.

Topics
The VTU workshop holds a research track and a shared task track. The research track aims to explore recent advances and remaining challenges in video transcript understanding. As this is a multi-modal subject, researchers from artificial intelligence, computer vision, speech processing, natural language processing, data mining, statistics, and other fields are invited to submit papers on recent advances, resources, tools, and challenges for video transcript understanding. To this end, the topics of the workshop include, but are not limited to, the following:
- Fundamental processing for video transcripts, such as punctuation restoration, chunking, parsing, and named entity recognition
- Subtitle segmentation
- Text summarization and keyword extraction for transcripts
- Event extraction, intent detection, and slot filling
- Sentiment analysis for speech text processing
- Noisy text processing
- Fact-checking and evidence extraction
- Question-answering extraction from transcripts
- Automatic speech recognition and related systems, such as speaker identification and filler word detection
- Multi-modal, multilingual video-speech-text processing

Important Dates
- Paper submission: Nov 12, 2021
- Acceptance notification: Dec 3, 2021
- Camera-ready submission: Dec 10, 2021
- VTU workshop at AAAI 2022: Feb 28-Mar 1, 2022
All deadlines are "anywhere on earth" (UTC-12).

Submission
Authors are invited to submit their unpublished work that represents novel research.
Papers should be written in English using the AAAI-22 author kit and follow the AAAI 2022 formatting guidelines. Authors can also submit supplementary materials, including technical appendices, source code, datasets, and multimedia appendices. All submissions, including the main paper and its supplementary materials, should be fully anonymized. For more information on formatting and anonymity guidelines, please refer to the AAAI 2022 call for papers page.

All papers will be double-blind peer-reviewed. VTU accepts both long papers and short papers:
- Short Paper: Up to 4 pages of content, including references. Upon acceptance, the authors are provided with 1 more page to address reviewer comments.
- Long Paper: Up to 8 pages of content, including references. Upon acceptance, the authors are provided with 1 more page to address reviewer comments.

Each paper will be reviewed by two reviewers with relevant technical expertise. Authors of accepted papers will present their work in either the oral or poster session. All accepted papers will appear in the workshop proceedings, which will be published on CEUR-WS. The authors retain the copyright of their papers published on CEUR-WS. The workshop proceedings will be indexed by DBLP.

Papers must be submitted via EasyChair. For information on system paper submission for the shared tasks, please refer to our shared tasks page.