WICED 2015 : Eurographics Workshop on Intelligent Cinematography and Editing
Call For Papers
Zurich, Switzerland, May 4, 2015. Co-located with the 36th annual conference of the European Association for Computer Graphics (Eurographics 2015).
Submission deadline: February 28, 2015.
The expressive use of virtual cameras, mise-en-scène, lighting and editing (montage) within 3D synthetic environments shows great promise for extending the communicative power of film and video into the artificial environments of games and virtual worlds.
Cinematics produced in virtual worlds play a role not just for entertainment, but also for training, education, health-care communication, simulation, visualization and many other contexts. The automatic creation of cinematics in these environments holds the potential to produce video sequences appropriate for a wide range of applications and tailored to specific spatial, temporal, communicative, user and application contexts.
At the same time, recent advances in computer vision-based object, actor and action recognition make it possible to envision novel re-cinematography (re-lighting, re-framing) and automatic editing of live-action video. This fourth workshop on intelligent cinematography and editing is intended to bridge the gap between the two areas and bring together research from both domains. One common area of active research is the representation and understanding of the story to be told and its relation to teaching, training or therapeutic goals.
The workshop is open to researchers and industrial experts working on the many related aspects of digital cinematography and film editing in their respective fields, including 3D graphics, artificial intelligence, computer vision, visualisation, interactive narrative, cognitive and perceptual psychology, computational linguistics, computational aesthetics and visual effects.
These researchers will draw upon cutting edge research and technologies regarding both the production and comprehension of cinematographic artworks in virtual worlds and the real world.
Topics of interest
Camera path planning
Navigation techniques and proximal exploration
Interactive camera control metaphors
Approaches to framing and composition of individual shots
Automatic lighting design
Intelligent staging and blocking of virtual lights, cameras and actors
Expressive performance of virtual characters
Intelligent video editing tools
Efficient algorithms for camera placement and shot sequence selection
Natural user interfaces for camera control and video editing
Parallels between cinematic and linguistic communication
Cognitive models of the comprehension of virtual cinematics
Re-cinematography, re-lighting and re-framing of live-action video
Computer-assisted multi-camera production
Virtual cinematography as a pre-visualisation tool for real-world filming
Intelligent tools and novel interfaces for in-game cinematics, replays, and machinima
Intelligent generation of comic book layouts
Evaluation methodologies and user experience
Interactive and generative cinema
Cinematic serious games and applications
Collaborative visual storytelling
Creativity in cinematic communication
Researchers should submit one of:
• an 8-page paper reporting new work or new ideas in a relevant research area;
• a 1-page abstract describing emergent work or a vision of the near-term future of intelligent cinematography.
Submissions will be reviewed by the program committee; the review criteria will intentionally be inclusive rather than competitive, to encourage work in progress and broader participation in a nascent community of researchers. Reviews will include feedback for authors regardless of the acceptance decision. All selected papers (excluding 1-page abstracts) will be published in the workshop's working notes.
Proceedings of the workshop will be published by Eurographics Publishing and made available in the EG Digital Library.
Please refer to the workshop website, http://wiced2015.inria.fr, for submission information and updates.