Replication study of "Explaining in Style: Training a GAN to explain a classifier in StyleSpace"

Published: 11 Apr 2022, Last Modified: 05 May 2023 · RC2021
Keywords: Counterfactuals, StylEx, StyleGAN2, GANs, XAI
Abstract:

Scope of Reproducibility: This report tests the claims made in the paper "Explaining in Style: Training a GAN to explain a classifier in StyleSpace". The paper claims that by training a generative model on top of a pre-trained classifier, it is possible to discover and visually explain the underlying attributes that influence the classifier's output, yielding counterfactual explanations. From these, one can deduce what the classifier has learned.

Methodology: To reproduce the proposed StylEx architecture, an existing implementation of the StyleGAN2 model was modified. To implement the AttFind algorithm from the paper, the original TensorFlow code was converted into PyTorch. Furthermore, due to the constraint of having access to only one GPU, the image resolution was downscaled to 64x64 pixels so that computation time would not be too extensive.

Results: Models were trained for both dog-vs-cat and age classification. Our models performed worse than the pretrained models released by the authors, most likely due to issues with the StylEx style space, as AttFind performed well on the StyleGAN2 model. Due to the limitations of our adaptation, it is not possible to definitively state whether the claims are true or false.

What was easy: Pretrained models and the AttFind algorithm were available for execution, so some of the results given in the original paper could be obtained quickly. This gave a good baseline of what to expect should everything run correctly.

What was difficult: No training code or information on the training procedure was publicly available, so it had to be created from scratch. Although the AttFind algorithm was available, it was written in TensorFlow rather than PyTorch and therefore needed to be converted. Implementing and training everything took a lot of time and resources, which made a hyperparameter search and further research infeasible.

Communication with original authors: We were in contact with the original authors, and many of our questions about their paper were answered. Response times were fast as well, usually no longer than 40 hours.
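The core idea of the AttFind search described above (perturb each StyleSpace coordinate, measure how much the classifier's output shifts, and keep the highest-impact coordinates) can be sketched in a few lines. The function and the toy generator/classifier below are purely illustrative stand-ins, not the authors' TensorFlow or PyTorch code:

```python
def attfind_sketch(style_codes, generate, classify, shift=1.0, top_k=2):
    """Toy AttFind-style search: rank style coordinates by the mean
    absolute change in classifier output when that coordinate is shifted.

    style_codes : list of style vectors (lists of floats)
    generate    : maps a style vector to an image (here: any object)
    classify    : maps a generated image to a scalar logit
    """
    n_coords = len(style_codes[0])
    scores = []
    for i in range(n_coords):
        deltas = []
        for s in style_codes:
            shifted = list(s)
            shifted[i] += shift  # perturb a single StyleSpace coordinate
            deltas.append(abs(classify(generate(shifted)) - classify(generate(s))))
        scores.append(sum(deltas) / len(deltas))
    # coordinates with the largest mean effect on the classifier come first
    ranked = sorted(range(n_coords), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k], scores
```

For example, with an identity "generator" and a linear "classifier" whose first coordinate has the largest weight, that coordinate is ranked first. The real algorithm operates on StyleGAN2 StyleSpace codes and a CNN classifier, and additionally filters coordinates by the direction and consistency of the induced logit change.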
Paper Url: https://arxiv.org/abs/2104.13369
Paper Venue: ICCV 2021
Supplementary Material: zip