Last week, GECCO 2019 was held in Prague, and my colleagues from ALFA went there to present some of our exciting research on evolutionary computation.
A paper about our Mustangs framework, one of the pillars on which NeCOL is based, was accepted as a full paper at this event. Mustangs fosters diversity when training generative adversarial networks by applying a coevolutionary method to improve the training process.
The paper is entitled Spatial Evolutionary Generative Adversarial Networks, and it can be downloaded from here.
Erik Hemberg presented this work to an audience that showed a lot of interest during the session. We received very fruitful feedback, which may open new and exciting research lines.
The presentation can be downloaded from here.
As we continue working on this exciting project for the deep learning community, we expect to report new, stimulating, and remarkable results in a short period of time.
Generative adversarial networks (GANs) suffer from training pathologies such as instability and mode collapse. These pathologies mainly arise from a lack of diversity in their adversarial interactions. Evolutionary generative adversarial networks apply the principles of evolutionary computation to mitigate these problems. We hybridize two of these approaches that promote training diversity. One, E-GAN, at each batch, injects mutation diversity by training the (replicated) generator with three independent objective functions, then selecting the resulting best-performing generator for the next batch. The other, Lipizzaner, injects population diversity by training a two-dimensional grid of GANs with a distributed evolutionary algorithm that includes neighbor exchanges of additional training adversaries, performance-based selection, and population-based hyper-parameter tuning. We propose to combine the mutation and population approaches to diversity improvement. We contribute a superior evolutionary GAN training method, Mustangs, that eliminates the single loss function used across Lipizzaner's grid. Instead, each training round, a loss function is selected with equiprobability from among the three that E-GAN uses. Experimental analyses on a standard benchmark, MNIST, demonstrate that Mustangs provides a statistically faster training method resulting in more accurate networks.
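The key Mustangs idea in the abstract, picking one of E-GAN's three generator objectives with equal probability at each training round, can be sketched in a few lines. The loss formulas below are simplified textbook versions of the minimax, non-saturating heuristic, and least-squares GAN objectives, written for a scalar discriminator output; the function names and this standalone structure are illustrative assumptions, not the paper's actual code.

```python
import math
import random

# Three generator objectives in the spirit of E-GAN, for a discriminator
# output d = D(G(z)) in (0, 1). Simplified illustrations, not the paper's code.

def minimax_loss(d):
    # Original GAN generator objective: minimize log(1 - D(G(z)))
    return math.log(1.0 - d)

def heuristic_loss(d):
    # Non-saturating heuristic: maximize log D(G(z)), i.e. minimize -log D(G(z))
    return -math.log(d)

def least_squares_loss(d):
    # Least-squares GAN generator objective: (D(G(z)) - 1)^2
    return (d - 1.0) ** 2

GENERATOR_OBJECTIVES = [minimax_loss, heuristic_loss, least_squares_loss]

def pick_objective(rng=random):
    # Mustangs-style selection: each training round, one of the three
    # objectives is chosen uniformly at random (probability 1/3 each).
    return rng.choice(GENERATOR_OBJECTIVES)

# Example: one simulated training round with a fake discriminator score.
loss_fn = pick_objective()
d_fake = 0.5
round_loss = loss_fn(d_fake)
```

In the real system this choice happens per cell of Lipizzaner's spatial grid, so different neighborhoods train under different objectives at the same time, which is the source of the extra diversity the abstract describes.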
Jamal Toutouh PUBLICATIONS