
Image and Vision Computing Special Issue 2020 : Image and Vision Computing (IVC) Special Issue on Deep Cross-Media Neural Model for Generating Image Descriptions


Link: https://www.journals.elsevier.com/image-and-vision-computing/call-for-papers/neural-model-for-generating-image-descriptions
 
When N/A
Where N/A
Submission Deadline Oct 20, 2020
Categories    deep cross-media neural model   image understanding   image descriptions
 

Call For Papers

Summary and Scope:

Understanding and generating image descriptions (UGID) is a hot topic that combines computer vision (CV) and natural language processing (NLP), and it has broad application prospects in many fields of AI. Unlike coarse-grained image understanding based on independent labeling, the image description task must learn natural language descriptions of images. This requires the model not only to recognize the objects in an image together with other visual elements (e.g., the actions and attributes of objects), but also to understand the interrelationships between objects and to generate human-readable description sentences, which is challenging. Real image understanding means describing an image in natural language, letting the machine emulate humans for better human-computer interaction. With the fast development of deep learning in CV and NLP, encoder-decoder-based deep neural models have obtained breakthrough results in generating image descriptions in cross-media domains, so image understanding may become a reality in the future. However, current models can only provide a simple description of an image: the number of descriptive words is usually limited, and the generated sentences are sometimes logically incorrect.
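
For readers unfamiliar with the encoder-decoder pipeline mentioned above, the sketch below shows a minimal image-captioning model of that kind: a pretrained CNN encodes the image into a feature vector, and an LSTM decodes it into a word sequence. It is an illustrative example only, not a model proposed or endorsed by this call; it assumes PyTorch and torchvision (>= 0.13) are available, and names such as CaptionModel, vocab_size, embed_dim, and hidden_dim are hypothetical.

import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    """Minimal encoder-decoder captioner: CNN encoder + LSTM decoder (illustrative sketch)."""

    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder: pretrained ResNet-18 with its classifier removed yields a global image feature.
        cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.feat_proj = nn.Linear(cnn.fc.in_features, embed_dim)
        # Decoder: word embeddings feed an LSTM that predicts the next word at each step.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # images: (B, 3, 224, 224); captions: (B, T) token ids
        with torch.no_grad():
            feats = self.encoder(images).flatten(1)        # (B, 512) global image feature
        img_token = self.feat_proj(feats).unsqueeze(1)     # (B, 1, E) image fed as first "word"
        word_tokens = self.embed(captions)                 # (B, T, E)
        hidden, _ = self.lstm(torch.cat([img_token, word_tokens], dim=1))
        return self.out(hidden)                            # (B, T+1, vocab_size) word scores

# Usage (random tensors just to check shapes):
# scores = CaptionModel()(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))

Attention mechanisms, visual-relationship modeling, and the other directions listed in the topics below typically extend this basic pipeline, e.g., by replacing the global feature with region features attended to at each decoding step.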

In this special issue, we invite original contributions from diverse research fields that develop new deep cross-media neural models for understanding and generating image descriptions, aiming to reduce the gap between image understanding and natural language description.

The topics of interest include, but are not limited to:

Attention-guided UGID
Visual relationships in UGID
Compositional architectures for UGID
Multimodal learning for UGID
Describing novel objects in UGID
Natural language processing models
New datasets for UGID
Novel encoder-decoder-based architectures
Deep cross-media neural models with applications of UGID, e.g., early childhood education, medical image analysis, assistance for the blind, and news automation
Important Dates:

Paper submission due: Oct 20, 2020

First notification: Dec 20, 2020

Final decision made on all manuscripts: Mar 30, 2021

Managing Guest Editor:

Prof. Zhao Zhang, Hefei University of Technology, China

Other Guest Editors:

Dr. Sheng Li, University of Georgia, USA

Prof. Meng Wang, Hefei University of Technology, China

Prof. Shuicheng Yan, National University of Singapore, Singapore

Related Resources

IEEE--ICIVC--Ei Compendex, Scopus 2021   IEEE--2021 6th International Conference on Image, Vision and Computing (ICIVC 2021)--Ei Compendex, Scopus
ICIAP 2021   ICIAP 2021 - 21st International Conference on Image Analysis and Processing
SDLDIP 2021   Special Issue on Sensors and Deep Learning for Digital Image Processing
SCOPUS-PRIS 2021   3rd International Conference on Pattern Recognition and Intelligent Systems (PRIS 2021)
ITAS--EI, Scopus 2021   2021 Information Technology & Applications Symposium (ITAS 2021)--EI Compendex, Scopus
ASIP--Ei Compendex and Scopus 2021   ACM--2021 3rd Asia Symposium on Image Processing (ASIP 2021)--Ei Compendex, Scopus
SSVM 2021   SSVM 2021 : Eighth International Conference on Scale Space and Variational Methods in Computer Vision
ICRMV--EI, Scopus 2021   2021 The 5th International Conference on Robotics and Machine Vision (ICRMV 2021)--Ei Compendex, Scopus
ICCV 2021   International Conference on Computer Vision