A short story about GANs at Tooploox


And a long way to the NIPS conference.

A short introduction to generative models

In recent years, a type of Machine Learning model known as the Generative Adversarial Network (GAN) has become a hot topic. This is mainly due to its capability of generating good-looking, convincing artificial images. One example, generated by the PG-GAN published in [1], can be seen below.

Source: https://github.com/tkarras/progressive_growing_of_gans

Do you recognize these celebrities? No? That is because they are fake celebrities, generated by a machine learning model!

The idea of a GAN was initially proposed by Ian Goodfellow in [2]. It is based on game theory and involves training two competing neural networks: the generator network G and the discriminator network D. The goal is to train the generator G to sample from the data distribution by transforming a noise vector z, while the discriminator D is trained to distinguish samples produced by G from samples drawn from the data distribution (i.e. the images of celebrities in the example above). The architecture of the model is presented below.

In practice, the training process can be described as a two-player game in which:

  • Generator G(z) tries to fool the discriminator by generating real-looking images,
  • Discriminator D(x) tries to distinguish between real and fake images.
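This adversarial game can be sketched in a few dozen lines. The toy example below is a minimal illustration, not any published architecture: the "data distribution" is a 1-D Gaussian, the generator is an affine map G(z) = a·z + b, and the discriminator is a logistic classifier D(x) = sigmoid(w·x + c). Both are updated with hand-derived gradients of the standard GAN losses (the generator uses the non-saturating variant, -log D(G(z))). All names and hyperparameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data distribution: a 1-D Gaussian the generator must learn to imitate.
DATA_MEAN, DATA_STD = 4.0, 1.25

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = rng.normal(DATA_MEAN, DATA_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of L_D = -log D(x_real) - log(1 - D(x_fake))
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update (non-saturating loss): push D(fake) -> 1 ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w        # dL_G/dx_fake for L_G = -log D(x_fake)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, G(z) should produce samples with a mean close to DATA_MEAN.
samples = a * rng.normal(0.0, 1.0, 10000) + b
print(round(float(samples.mean()), 2))
```

Even in this scalar setting you can see the characteristic GAN dynamic: the discriminator keeps shifting its decision boundary, and the generator chases it until the two distributions roughly overlap. Real GANs replace the affine map and logistic classifier with deep convolutional networks [7] and need further stabilization tricks [8, 11].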

GAN models are widely used for generating artificial images, but they have plenty of other applications as well: semi-supervised classification, image retrieval, style search and many others. It is also worth mentioning that a GAN-generated picture, created by students, was recently sold for 
[4] Zięba et al. “BinGAN: Learning Compact Binary Descriptors with a Regularized GAN” NIPS, 2018.
[5] Lowe, David G. “Distinctive image features from scale-invariant keypoints.” International journal of computer vision 60.2 (2004): 91-110.
[6] Rublee, Ethan, et al. “ORB: An efficient alternative to SIFT or SURF.” Computer Vision (ICCV), 2011 IEEE international conference on. IEEE, 2011.
[7] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[8] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
[9] Lin, Yuanqing, et al. “Imagenet classification: fast descriptor coding and large-scale SVM training.” Large scale visual recognition challenge (2010).
[10] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks.” Advances in neural information processing systems. 2012.
[11] Y. Cao, G. W. Ding, K. Y.-C. Lui, and R. Huang. Improving GAN training via binarized representation entropy (BRE) regularization. In ICLR, 2018.

