Video Retargeting Enhancement For An Interactive Exhibition

Abstract

The quality of computer-generated images and videos has progressed to a point where it is difficult to tell the difference between what is real and what is not. One example is video retargeting, which animates a target human subject with any desired motion. In recent years, several video retargeting methods have been developed; however, most of them share a major limitation: they produce low-resolution results. To address this issue, in this thesis we propose a new framework called One-Shot-Video-Enhance, which preserves high resolution in synthesized retargeted videos by transferring high-frequency details of the target subject that would otherwise be lost. Compared to other deep quality-restoration approaches, our model is not only faster but also outperforms them in detail recovery, color consistency, and temporal consistency. In addition, One-Shot-Video-Enhance produces results that are extremely faithful to the target appearance, which contributes to a realistic appearance of the subject in the animated video. Most importantly, our model requires no additional data beyond what is already needed for the synthesis of the retargeted video.
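The abstract does not describe the transfer mechanism itself, but the general idea of high-frequency detail transfer can be illustrated with a minimal sketch: separate a high-resolution frame of the target subject into low- and high-frequency bands, then add the high-frequency residual to the upsampled synthesized frame. The code below is a hypothetical illustration of that generic idea (using OpenCV and a Gaussian blur for the band split), not the method developed in the thesis; in particular, it assumes the reference and synthesized frames are already spatially aligned, whereas a real retargeting pipeline would have to warp the details to the retargeted pose.

    import cv2
    import numpy as np

    def transfer_high_frequency(synthesized_lr, reference_hr, blur_sigma=3.0):
        """Add high-frequency detail from a high-resolution reference frame of
        the target subject to a lower-resolution synthesized frame.

        synthesized_lr: low-resolution retargeted frame (uint8, HxWx3)
        reference_hr:   high-resolution frame of the target subject (uint8, HxWx3)

        Note: assumes both frames depict the subject in the same pose; this is
        an illustrative simplification, not the thesis method.
        """
        h, w = reference_hr.shape[:2]

        # Upsample the synthesized frame to the reference resolution.
        upsampled = cv2.resize(synthesized_lr, (w, h),
                               interpolation=cv2.INTER_CUBIC)

        # Split the reference into low- and high-frequency bands with a
        # Gaussian blur; the residual carries fine details such as skin
        # texture, hair strands, and fabric patterns.
        reference = reference_hr.astype(np.float32)
        low_pass = cv2.GaussianBlur(reference, (0, 0), blur_sigma)
        high_freq = reference - low_pass

        # Inject the high-frequency residual into the upsampled frame.
        enhanced = upsampled.astype(np.float32) + high_freq
        return np.clip(enhanced, 0, 255).astype(np.uint8)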


Alessia Paccagnella

Master's Thesis

Status: Completed
