
Automated Generation of Realistic Human Faces Using GANs

By Orisys Academy on 24th January 2024

Problem Statement

Generating realistic human faces for applications like video games, virtual reality, or simulations can be challenging. Generative Adversarial Networks (GANs) offer a promising approach for creating high-quality synthetic images.

Abstract

This project aims to implement GANs for the automated generation of realistic human
faces. The system will train on a dataset of real faces and generate new, synthetic faces
that closely resemble real ones. The focus is on achieving high visual fidelity and
diversity in the generated faces.
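Training toward faces that "closely resemble real ones" is driven by an adversarial objective: a discriminator is rewarded for separating real from generated images, while the generator is rewarded for fooling it. As a minimal sketch (not the project's code), the standard non-saturating GAN losses can be written in NumPy, where `d_real` and `d_fake` are the discriminator's scores on real and generated batches:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy for D: push D(x) -> 1 on real images
    and D(G(z)) -> 0 on generated ones."""
    return -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)).mean()

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating loss for G: push D(G(z)) -> 1, i.e. make
    the discriminator label fakes as real."""
    return -np.log(d_fake + eps).mean()

# A discriminator that scores real images high and fakes low incurs a
# small loss, while the generator's loss stays high until its samples
# start to fool D.
d_real = np.array([0.90, 0.95])
d_fake = np.array([0.10, 0.05])
print(discriminator_loss(d_real, d_fake))
print(generator_loss(d_fake))
```

In practice these losses are minimized alternately with stochastic gradient updates to the two networks; the equilibrium is the point where generated faces are statistically indistinguishable from the training set.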

Outcome

A GAN-based model capable of generating diverse and realistic human faces, suitable
for applications in entertainment, simulations, or any context requiring synthetic
human-like images.

Reference

Graphics algorithms for high-quality image rendering are highly involved, as layout, components, and light transport must be explicitly simulated. While existing algorithms excel at this task, creating and formatting virtual environments is a costly and time-consuming process. There is thus an opportunity to automate this labor-intensive work by leveraging recent developments in computer vision. Progress in deep generative models, especially GANs, has spurred much interest in the computer vision community for synthesizing realistic images. GANs combine backpropagation with a competitive process involving a pair of networks, a Generative Network G and a Discriminative Network D, in which G generates artificial images and D classifies them as real or artificial. As training proceeds, G learns to generate realistic images that confuse D [1]. In this work, a convolutional GAN architecture, specifically the Deep Convolutional Generative Adversarial Network (DCGAN), has been implemented to train a generative model that can produce good-quality images of human faces at scale. The CelebFaces Attributes Dataset (CelebA) has been used to train the DCGAN model. The Structural Similarity Index (SSIM), which measures the structural and contextual similarity of two images, has been used for quantitative evaluation of the trained model. The obtained results show that the quality of the generated images is close to that of the high-quality images in the CelebA dataset.
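The SSIM evaluation mentioned above compares two images in terms of luminance, contrast, and structure. As an illustrative sketch (the standard metric averages the same formula over local Gaussian windows, and the constants follow the usual k1 = 0.01, k2 = 0.03 defaults), a single-window SSIM can be computed in NumPy:

```python
import numpy as np

def ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images.

    Simplified sketch: production implementations (e.g. in image
    libraries) slide a Gaussian window over the images and average
    the local scores; the formula per window is the same.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2  # stabilizes the luminance term
    c2 = (k2 * data_range) ** 2  # stabilizes the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

# Identical images score 1.0; an inverted image scores far lower.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
print(ssim(img, img))
print(ssim(img, 255.0 - img))
```

Scores near 1.0 between generated faces and held-out CelebA images would indicate high structural similarity, which is how the quoted work quantifies the visual quality of DCGAN outputs.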

  1. Ian J. Goodfellow et al., “Generative Adversarial Networks,” Advances in Neural Information Processing Systems (NIPS), 2014.
  2. Ian Goodfellow, “NIPS 2016 Tutorial: Generative Adversarial Networks,” arXiv preprint, 2017.
  3. Han Zhang et al., “Self-Attention Generative Adversarial Networks,” Proceedings of the International Conference on Machine Learning (ICML), 2019.
  4. Alexey Kurakin et al., “Adversarial examples in the physical world,” ICLR Workshop, 2017.
  5. Alec Radford et al., “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks,” International Conference on Learning Representations (ICLR), 2016.

    https://ieeexplore.ieee.org/document/9616779/references#references