
PIES-ME 2022 : The 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments (PIES-ME)


When Oct 10, 2022 - Oct 14, 2022
Where Lisbon, Portugal
Submission Deadline Jul 22, 2022
Notification Due Aug 7, 2022
Final Version Due Aug 21, 2022

Call For Papers


Co-located with ACM Multimedia 2022




Photorealistic media aim to faithfully represent the world, creating an experience that is perceptually indistinguishable from a real-world experience. Current standard media applications fall short of this goal, since acquisition and production technologies in consumer applications do not capture or produce enough of the world's visual, audio, spatial, and temporal information to faithfully represent it. In recent years, however, the area of photorealistic media has seen a great deal of activity, with new multimedia areas emerging, such as light fields, point clouds, ultra-high definition, high frame rate, high dynamic range imaging, and novel 3D audio and sound field technologies. The combination of these technologies can help pave the way for a hyper-realistic media experience, but several technological challenges must first be overcome. It is worth pointing out that research in this area requires big datasets, software tools, and powerful infrastructures. Among these, the availability of meaningful datasets, with diverse and high-quality content, is of particular importance.

The goal of this workshop is to engage experts and researchers on the synthesis of photorealistic images and/or virtual environments, particularly in the form of public datasets, software tools, or infrastructures, for multimedia research. Such public datasets, software tools, and infrastructures will lower entry barriers by enabling researchers who lack expensive hardware (e.g., complex camera systems, smart glasses, robots, autonomous vehicles) to simulate such hardware and create datasets representative of it across various scenarios. Photorealistic image and environment synthesis can benefit multiple research areas in addition to multimedia systems, such as machine learning, robotics, computer vision, mixed reality, and virtual reality.


We welcome novel and unpublished research contributions on topics including, but not limited to:

- Characteristics of indoor and outdoor environments for multimedia research
- Hardware and software tools for generation/alteration/customization of photorealistic virtual environments
- Camera motion and video editing techniques
- Standardization of video resolution and content for dataset comparison and aggregation
- Objective and subjective QoE metrics for multimedia streaming and consumption
- Standardized human subjects’ data collection in multimedia streaming experiments
- Dataset curation and dissemination platforms and tools


Ravi Prakash, University of Texas at Dallas, USA
Mylene Farias, University of Brasília, Brazil
Ryan P. McMahan, University of Central Florida, USA
Marcelo M. Carvalho, University of Brasília, Brazil


Federica Battisti, University of Padova, Italy
Marco Carli, Università degli Studi Roma TRE, Italy
Jiasi Chen, University of California Riverside, USA
Aladine Chetouani, Université d'Orléans, France
Frederic Dufaux, CNRS, CentraleSupélec, Université Paris-Saclay, France
Jesús Gutiérrez, Universidad Politécnica de Madrid, Spain
Chaker Larabi, Université de Poitiers, France
Pramit Mazumdar, Indian Institute of Information Technology Vadodara (IIIT-V), India
Laura Toni, University College London, UK
Giuseppe Valenzise, CNRS, CentraleSupélec, Université Paris-Saclay, France
Toinon Vigier, Université de Nantes, France
Zhisheng Yan, George Mason University, USA
Michael Zink, University of Massachusetts, Amherst, USA


Henrique D. Garcia, University of Brasília, Brazil
Gabriel C. Araújo, University of Brasília, Brazil


Authors are invited to submit original, unpublished research papers that are not currently under review elsewhere. All submissions must be written in English and limited to six (6) pages of content, including all figures and tables, plus up to one (1) additional page of references, in PDF format. Authors must use the ACM Article Template in its traditional double-column format to prepare their submissions. Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template. All submissions will undergo blind review.
