Parallel/distributed GAN training by using MPI at IPDPS2020/PDCO2020

We present an MPI version of Lipizzaner/Mustangs for high-performance computing systems at the PDCO 2020 workshop of IPDPS 2020.

To enable the use of our distributed coevolutionary GAN training method on high-performance computing systems, we developed an MPI version of Lipizzaner/Mustangs, which we presented at IPDPS 2020/PDCO 2020.

This year, IPDPS 2020 activities are being held in a virtual environment, so we are not traveling to New Orleans to present our work, entitled Parallel/distributed implementation of cellular training for generative adversarial neural networks. The main idea behind this study is to propose a distributed-memory parallel implementation of our coevolutionary GAN training framework (Lipizzaner/Mustangs) for execution in high-performance/supercomputing centers. The results show that the proposed implementation reduces training times and scales properly.

  • The paper can be seen here
  • The presentation can be seen here

Parallel/distributed implementation of cellular training for generative adversarial neural networks

Abstract

Generative adversarial networks (GANs) are widely used to learn generative models. GANs consist of two networks, a generator and a discriminator, that apply adversarial learning to optimize their parameters. This article presents a parallel/distributed implementation of a cellular competitive coevolutionary method to train two populations of GANs. A distributed-memory parallel implementation is proposed for execution in high-performance/supercomputing centers. Efficient results are reported for the generation of handwritten digits (MNIST dataset samples). Moreover, the proposed implementation reduces training times and scales properly when considering different grid sizes for training.
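
To make the cellular, distributed-memory pattern concrete, below is a minimal sketch of the communication structure, assuming mpi4py: one MPI rank per grid cell, each training a local GAN and exchanging generator parameters with its von Neumann neighbors on a toroidal grid. This is not the paper's implementation; the parameter vectors, the training step, and the averaging rule are placeholder assumptions (Lipizzaner/Mustangs applies fitness-based coevolutionary selection and hyperparameter evolution, which are omitted here).

# Minimal sketch of cellular, distributed-memory GAN training:
# one MPI rank per grid cell, exchanging parameters with its
# von Neumann neighbors on a toroidal grid. NOTE: not the authors'
# code; model, training step, and selection rule are placeholders.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Arrange the ranks in a square toroidal grid.
side = int(np.sqrt(size))
assert side * side == size, "run with a square number of MPI ranks"
row, col = divmod(rank, side)

def neighbor(dr, dc):
    # Rank id of the neighbor at toroidal offset (dr, dc).
    return ((row + dr) % side) * side + (col + dc) % side

neighbors = [neighbor(-1, 0), neighbor(1, 0), neighbor(0, -1), neighbor(0, 1)]

# Placeholder parameter vector standing in for the generator's weights.
rng = np.random.default_rng(rank)
gen_params = rng.standard_normal(8)

def local_train_step(params):
    # Placeholder for one adversarial training step of the local GAN pair.
    return params - 0.01 * rng.standard_normal(8)

for epoch in range(5):
    gen_params = local_train_step(gen_params)

    # Send the local generator to every neighbor without blocking, then
    # collect the neighbors' generators: the cell's overlapping sub-population.
    reqs = [comm.isend(gen_params, dest=n, tag=epoch) for n in neighbors]
    neighborhood = [comm.recv(source=n, tag=epoch) for n in neighbors]
    MPI.Request.waitall(reqs)

    # Toy stand-in for selection: average over the neighborhood (the actual
    # framework applies fitness-based coevolutionary selection instead).
    gen_params = np.mean([gen_params] + neighborhood, axis=0)

if rank == 0:
    print(f"finished sketch on a {side}x{side} toroidal grid")

With mpi4py and an MPI runtime installed, a run on a 2x2 grid would look like: mpiexec -n 4 python cellular_gan_sketch.py (the script name is illustrative). Because each cell only talks to its fixed neighborhood, communication per epoch stays constant as the grid grows, which is the property behind the scaling results reported above.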

PDCO 2020 slide
