
EfficientConvNets 2017 : Challenge on Efficient ConvNets for Semantic Segmentation


Link: http://www.deep-driving.net/
 
When Jun 11, 2017 - Jun 11, 2017
Where Redondo Beach
Submission Deadline May 1, 2017
Notification Due May 8, 2017
Final Version Due Jun 11, 2017
Categories    autonomous driving   deep learning   semantic segmentation   efficient deep learning
 

Call For Papers

The recent advances in computer vision have been mainly driven by deep learning, with no end in sight. Autonomous driving is a very exciting application domain for many computer vision problems. In a vehicle, energy resources are very limited, while at the same time there is a trend towards more sensors with higher resolutions. This creates a tension in autonomous driving: huge amounts of data have to be processed as quickly and as accurately as possible, with as little power consumption as possible.

This workshop will encourage work on efficient neural networks in the context of autonomous driving. We challenge the IV community to participate in a semantic segmentation challenge, offering a prize for the winner. The workshop program will include invited talks, presentations by the top three challenge teams, and a poster session of submitted work.

Challenge Details
The prize is awarded by a jury to the team showing the best compromise between high accuracy, high speed in frames per second, and low power consumption. The winning team is required to give a detailed presentation (with reproducible results) at the workshop. Papers about the work are optional but highly welcome for a special issue in IEEE T-IV.

The accuracy of the system will be measured following the Cityscapes [1,2] benchmark for class segmentation. Testing will be done using unpublished data taken with the same car and camera system that was used for the Cityscapes sequences. As a consequence, you have to process 2048x1024px RGB images and output 2048x1024px maps of labels according to the Cityscapes label set [3]. Prior to the deadline, a sequence of roughly one minute will be supplied to interested teams. The results (whole sequence) have to be submitted within a day, using the same labels and format as known from the standard Cityscapes benchmark.
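
For teams new to the Cityscapes tooling, the short Python sketch below shows one way to turn per-pixel trainId predictions into full-resolution labelId maps in the format expected by the standard Cityscapes scripts. It assumes the cityscapesScripts package [3] is installed; the file names are placeholders, not part of the challenge specification.

    import numpy as np
    from PIL import Image
    from cityscapesscripts.helpers.labels import trainId2label

    def trainids_to_labelids(pred_trainids):
        """Map an array of Cityscapes trainIds to the corresponding labelIds."""
        label_map = np.zeros_like(pred_trainids, dtype=np.uint8)
        for train_id, label in trainId2label.items():
            if train_id in (255, -1):   # void / ignored entries in the label table
                continue
            label_map[pred_trainids == train_id] = label.id
        return label_map

    # Hypothetical usage for one frame of the test sequence:
    pred = np.load("frame_000001_pred_trainIds.npy")           # shape (1024, 2048)
    Image.fromarray(trainids_to_labelids(pred)).save("frame_000001_labelIds.png")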

For a fair comparison, submitting teams have to measure the speed of their solution and specify the hardware used.
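
As an illustration only, a minimal Python sketch of a wall-clock throughput measurement in frames per second; run_inference is a placeholder for a team's own network, and GPU users should additionally synchronize the device before stopping the timer.

    import time
    import numpy as np

    def run_inference(frame):
        """Placeholder for the team's own segmentation network."""
        return np.zeros(frame.shape[:2], dtype=np.uint8)

    # Dummy 2048x1024 RGB frames standing in for the test sequence.
    frames = [np.zeros((1024, 2048, 3), dtype=np.uint8) for _ in range(100)]

    run_inference(frames[0])                     # warm-up pass, excluded from timing
    start = time.perf_counter()
    for frame in frames:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    print("Throughput: %.1f frames/s" % (len(frames) / elapsed))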

The deadline for submissions is May 1, 2017. Participants are requested to render a video of the entire sequence and to show the results during their presentation at the workshop.
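
As a rough illustration of the video rendering step, the sketch below colorizes labelId maps with the Cityscapes color table and writes them to a video using OpenCV; paths, naming and frame rate are assumptions and should be adapted to the supplied sequence.

    import glob
    import cv2
    import numpy as np
    from cityscapesscripts.helpers.labels import id2label

    # Frame rate and file layout are assumptions; adjust to the supplied sequence.
    writer = cv2.VideoWriter("results.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                             17.0, (2048, 1024))

    for path in sorted(glob.glob("predictions/*_labelIds.png")):
        label_ids = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # (1024, 2048) labelId map
        color = np.zeros((label_ids.shape[0], label_ids.shape[1], 3), dtype=np.uint8)
        for label_id, label in id2label.items():
            color[label_ids == label_id] = label.color[::-1]  # RGB -> BGR for OpenCV
        writer.write(color)
    writer.release()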

References:
[1] Cityscapes Dataset: M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes Dataset for Semantic Urban Scene Understanding," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. https://arxiv.org/abs/1604.01685
[2] Cityscapes Homepage: https://www.cityscapes-dataset.com/
[3] Cityscapes Github: https://github.com/mcordts/cityscapesScripts
For the label set, see: https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py
