WWW 2021 : Call for papers: Self-supervised learning for the Web
Link: https://www.aminer.cn/ssl_www2021

Call For Papers
Dear All,
We are organizing a workshop on Self-Supervised Learning for the Web (SSL) at the Web Conference 2021. The latest information can be found on our website: https://www.aminer.cn/ssl_www2021

=======================
Important Dates
=======================
Submission Deadline: Mar. 01, 2021
Notification: [extended]
Workshop Date: April, 2021

=======================
Overview
=======================
Over the past few years, self-supervised learning has achieved great success across a variety of domains, such as natural language processing, computer vision, and speech recognition. The promise of self-supervised learning is to leverage the input data itself as the supervision signal, learning models that can be as powerful as techniques trained with dedicated label information. In other words, it does not require task-specific labeled data, which is often arduously expensive to obtain at scale.

Despite its soaring performance on text and image datasets, self-supervised learning for problems on the Web, e.g., retrieval, recommendation, graph mining, and social network analysis, remains largely unexplored. The Web is a treasure trove of user experiences, knowledge, and multi-modal data that presents great opportunities for achieving artificial intelligence. The Web itself, with a huge amount of multi-modal data (text, images, graphs, etc.), multiple types of entities (e.g., users, documents, and organizations), user behaviors, and relations between entities, has become so large and complex that traditional methodologies are inadequate. Over the last two decades, the conventional paradigm of mining and learning from Web data has usually involved massive manual effort in data labeling, most of which requires extensive and specific domain knowledge. In light of these issues, it is highly promising to leverage the power of self-supervised learning to facilitate classical Web mining tasks.
Therefore, we propose to host a dedicated workshop at WWW 2021 to explore the potential of self-supervised learning for the Web. The workshop is timely for connecting scholars working on self-supervised learning across machine learning, NLP, computer vision, graph neural networks, deep learning, artificial intelligence, and the Web community, and for igniting new inspirations. It will also offer a platform for discussing and identifying the field's main challenges, future directions, and opportunities.

=======================
Call for Papers
=======================
The SSL@WWW2021 workshop on self-supervised learning for the Web is related to machine learning, deep learning, representation learning, natural language processing, computer vision, graph mining, and Web mining. The workshop program will focus on presenting and discussing the state of the art, open problems, challenges, and the latest models, techniques, and algorithms in the field of self-supervised learning, with a focus on its applications to the Web. In this context, topics of interest include but are not limited to:
- Self-supervised learning theories
- Self-supervised learning algorithms
- Pre-training models and techniques
- Generative pre-training
- Contrastive learning
- Representation learning
- Self-supervised learning for text
- Self-supervised learning for images
- Self-supervised learning for graphs
- Self-supervised learning for multi-modal data
- Graph neural networks
- Transformer and attention networks
- Knowledge graph embedding

=======================
Submission Guidelines
=======================
The SSL@WWW2021 workshop encourages submissions that present both original results and preliminary/existing work on this topic. We explicitly welcome extended-abstract submissions introducing preliminary and arXiv work on related topics, as well as research recently published at top conferences. Authors of extended abstracts can opt out of having them archived in the WWW Companion proceedings.
Therefore, this workshop accepts both full papers (at least 5 pages) for original results and extended abstracts (2 to 4 pages) for published or ongoing work. All submissions must conform to the WWW 2021 main conference submission format: the ACM SIG Proceedings template with a font size no smaller than 9pt. SSL@WWW2021 follows a single-blind review process, so author and affiliation information should be listed.

Submission site (EasyChair): https://easychair.org/conferences/?conf=sslwww2021

=======================
PC Chairs
=======================
Xiao Liu | Tsinghua University | China
Ziniu Hu | UCLA | USA

Chairs
Jie Tang | Tsinghua University | China
Yuxiao Dong | Facebook AI | USA
Yizhou Sun | UCLA | USA
Zhilin Yang | Recurrent.ai | China