Virtual Exposure Control for Creative Image and Video Editing
Author: Nestor ZILIOTTO SALAMON
Promotor(s): Prof.dr. E. Eisemann
University: Delft University of Technology
Year of publication: 2021
Link to repository: TU Delft Research Repository
Abstract
Postprocessing has become a major component of movie and image production. This step is no longer simple cleanup and cutting: it involves important manipulations that contribute to the atmosphere of a movie and the perception of a still image. Movie studios spend a great part of their budget on it, as managing postprocessing parameters is a cumbersome task, requiring costly, specialized tools and skills. In photography, while many software packages provide automatic adjustments and filters, fine-grained editing remains difficult for novice users.

The need for postprocessing arises because many parameters are difficult to set correctly during the actual capture of a scene. Exposure time is one example. Imagine you are at a car race and want to capture the moment. To convey a sense of motion in your photograph, you adjust the camera's exposure time: not so short that it freezes all the cars, nor so long that it blurs the image completely. To find the right balance, other camera parameters, such as aperture and sensor sensitivity, must be taken into account; even the speed of the cars needs to be considered. A much more convenient solution would be to adjust the motion blur after acquisition. Nevertheless, this is not a simple task. Typically, it requires skill and involves manipulating the image by hand, which is time-consuming and highly prone to artifacts. For videos, such edits are even more complex, as spatiotemporal coherence must be maintained, especially when temporal warping occurs.

In this dissertation, we present efficient solutions for exposure control in postproduction to enable high-quality visual content generation. Next to image-manipulation algorithms, we explore acquisition-based solutions and intuitive interaction metaphors to support expressive content production.
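The core intuition of adjusting motion blur after acquisition can be sketched in a few lines; note this is an illustrative toy, not the dissertation's actual method. A classic way to emulate a longer virtual exposure is to integrate (average) consecutive video frames, mimicking how a sensor accumulates light over the exposure window. The function name `virtual_exposure` and the weighting scheme below are our own assumptions for illustration.

```python
import numpy as np

def virtual_exposure(frames, weights=None):
    """Emulate a longer exposure in post by taking a weighted
    temporal average of consecutive video frames, which
    synthesizes motion blur after acquisition."""
    frames = np.asarray(frames, dtype=np.float64)
    if weights is None:
        # Uniform weights correspond to a box-shaped shutter.
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()  # normalize to preserve brightness
    # Weighted sum along the time axis approximates the sensor
    # integrating light over the virtual exposure window.
    return np.tensordot(weights, frames, axes=1)

# Toy example: a bright "car" pixel moving one step per frame
# along a dark 5-pixel row.
frames = [np.eye(1, 5, k).ravel() * 255 for k in range(3)]
blurred = virtual_exposure(frames)
# The moving highlight is smeared across three pixels.
```

In practice, a longer virtual exposure needs frames captured (or interpolated) at a high enough rate; with too few frames, the average produces ghosting (discrete copies) instead of smooth blur, which is one reason naive frame averaging alone is insufficient.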
Our results are intended not only for professionals pursuing their design visions regarding atmosphere and storytelling, but also include semi-automatic approaches that enable novice users to achieve impactful and realistic images. Consequently, the presented results have the potential to inspire new artists, while the described methods can also be employed to simplify complex visual content creation tasks.