MAI 2021: CVPR 2021 Mobile AI Workshop and Challenges
Call For Papers
MAI: Mobile AI workshop and challenges 2021
In conjunction with CVPR 2021
Contact: radu.timofte [at] vision.ee.ethz.ch
Over the past few years, mobile AI-based applications have become increasingly ubiquitous. Various deep learning models can now be found on almost any mobile device: smartphones running portrait segmentation, image enhancement, face recognition and natural language processing models; IoT platforms performing real-time image classification; and smart-TV boards equipped with sophisticated image super-resolution algorithms. The performance of mobile NPUs and DSPs is also increasing dramatically, making it possible to run complex deep learning models and achieve fast runtimes for the majority of tasks.
While many research papers on efficient deep learning models have been proposed recently, the resulting solutions are usually evaluated on desktop CPUs and GPUs, making it nearly impossible to estimate their actual inference time and memory consumption on real mobile hardware. To address this problem, we introduce the first Mobile AI Workshop, where all solutions and deep learning models will be evaluated on actual mobile AI accelerators.
The workshop will consist of three main parts: 1) a detailed overview of deep learning inference on mobile platforms, 2) workshop challenges where participants can gain hands-on experience by solving several computer vision tasks and evaluating their solutions on mobile devices, and 3) presentations from mobile SoC vendors covering several important aspects of mobile AI inference.
To ensure that participants get the most recent and accurate information on mobile-related AI tasks, the workshop is organized in collaboration with several major mobile SoC vendors, including Qualcomm, Samsung, Huawei, MediaTek, and Synaptics.
This workshop also builds upon the success of previous computer vision competitions and is organized by people associated with the NTIRE (CVPR 2017, 2018, 2019 and 2020), CLIC (2018, 2019, 2020), PIRM (2018) and AIM (2019, 2020) workshops.
Papers addressing efficient deep learning for mobile devices, as well as mobile-based vision, natural language processing, and performance evaluation, are invited. The topics include, but are not limited to:
● Efficient deep learning models for mobile devices
● Artifacts removal from mobile photos/videos
● General smartphone photo/video enhancement
● RAW camera image/video processing
● Deep learning applications for mobile camera ISPs
● Image/video super-resolution on low-power hardware
● Portrait segmentation / bokeh effect rendering
● Depth estimation w/o multiple cameras
● Perceptual image manipulation on mobile devices
● Activity recognition using smartphone sensors
● Image/sensor based identity recognition
● Fast image classification / object detection algorithms
● NLP models optimized for mobile inference
● Real-time semantic segmentation
● Low-power machine learning inference
● Machine learning and deep learning frameworks for mobile devices
● AI performance evaluation / benchmarking of mobile and IoT hardware
● Studies and applications of the above problems
A paper submission must be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style. The paper format must follow the same guidelines as all CVPR submissions.
The review process is double-blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.
Dual submission is allowed with the CVPR main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.
For paper submissions, please go to the online submission site.
Accepted and presented papers will be published after the conference in the CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the Computer Vision Foundation (www.cv-foundation.org).
The author kit provides a LaTeX2e template for paper submissions; please refer to the included example for detailed formatting instructions. If you use a different document processing system, see the CVPR author instructions page.
Author Kit: http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip
● Regular Papers Submission Deadline: March 15, 2021 (EXTENDED)
● Challenge Papers Submission Deadline: April 02, 2021
● Decisions: April 05, 2021
● Camera Ready Deadline: April 15, 2021
MAI 2021 has the following associated groups of challenges (ONGOING!):
● Learned Smartphone ISP (Evaluation platform: MediaTek Dimensity APU) - Powered by MediaTek
● Image Denoising (Eval. platform: Exynos Mali GPU) - Powered by Samsung
● HDR Image Processing (Eval. platform: Kirin DaVinci NPU) - Powered by Huawei
● Image Super-Resolution (Eval. platform: Synaptics Dolphin NPU) - Powered by Synaptics
● Video Super-Resolution (Eval. platform: Snapdragon Adreno GPU) - Powered by OPPO
● Depth Estimation (Eval. platform: Raspberry Pi 4) - Powered by Raspberry Pi
● Camera Scene Detection (Eval. platform: Apple Bionic) - Powered by CVL
To learn more about the challenges and to participate:
● Release of training data: January 15, 2021
● Validation server online: January 20, 2021
● Competitions end: March 20, 2021
Organizers:
● Andrey Ignatov (ETH Zurich)
● Radu Timofte (ETH Zurich)
● Luc Van Gool (ETH Zurich)
● Martti Ilmoniemi (Huawei)
● Esin Guldogan (Huawei)
● Tianyu Yao (Huawei)
● Cheng-Ming Chiang (MediaTek Inc.)
● Hsien-Kai Kuo (MediaTek)
● Kim Byeoung-su (Samsung Electronics Co., Ltd.)
● Gaurav Arora (Synaptics Inc.)
● Abdel Younes (Synaptics Inc.)
● David Plowman (Raspberry Pi (Trading) Ltd.)
● Heewon Kim (Seoul National University)
● Kyoung Mu Lee (Seoul National University)
● Eirikur Agustsson (Google)
● Chiu Man Ho (OPPO)
● Zibo Meng (OPPO)
● Shuhang Gu (University of Sydney, OPPO)