Abstract:
Gaze correction is a video re-synthesis problem in which a model learns to redirect a person's eye gaze toward the camera
by manipulating the eye region. It has many applications in video conferencing, film, and gaming, and shows promise in
medical fields, for example in studies involving people with autism. Existing GAN-based methods are not capable of gaze
redirection in video. In this paper, we propose an approach based on an inpainting model that reads from the face and fills
the missing eye regions with new content reflecting the corrected eye gaze. We implement both gaze estimation and gaze
redirection: an hourglass convolutional neural network (CNN) for gaze estimation, and a Generative Adversarial
Network (GAN) for video gaze redirection, in which two neural networks compete in a game to learn to produce new
data with the same statistics as the training set. In addition, we compute losses such as the discriminator loss, generator
loss, and perceptual loss to assess the accuracy of our model, and we evaluate its performance using adversarial
divergence, reconstruction error, and image quality measures. Comprehensive experiments demonstrate that the proposed
method outperforms existing approaches in image quality and redirection precision.
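As background for the adversarial training mentioned above, the discriminator and generator losses can be sketched as follows. This is a minimal illustration assuming the common non-saturating GAN formulation with scalar discriminator outputs; the abstract does not specify the paper's exact losses, so the function names and forms here are assumptions.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Discriminator maximizes log D(x) + log(1 - D(G(z)));
    equivalently, it minimizes the negative of that sum.
    d_real, d_fake are the discriminator's probabilities in (0, 1)."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: minimize -log D(G(z)),
    i.e., the generator tries to make the discriminator output
    a high probability on generated samples."""
    return -math.log(d_fake)

# Example: a discriminator that scores real images 0.9 and fakes 0.1
# incurs a small loss (it is doing well), while the generator's loss
# is large (its outputs are easily detected).
print(round(discriminator_loss(0.9, 0.1), 4))  # prints 0.2107
print(round(generator_loss(0.1), 4))           # prints 2.3026
```

In practice both losses are computed over batches of network outputs and minimized alternately with gradient descent, which drives the generator toward producing eye regions statistically indistinguishable from real ones.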