Dynamically Typed

Not-so-BigGAN

When I covered BigGAN in February 2019 (DT #6), its image generation results were very impressive — but the model was also incredibly expensive to train, requiring a cluster of hundreds of TPUs. Now, just a year and a half later, Han et al. (2020) have introduced not-so-BigGAN: close-enough image quality, trained on just 4 Tesla V100 GPUs — orders of magnitude less compute. The speed of this progress is amazing.