
FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations

Conference paper, published in Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Recent advances in generative adversarial networks have shown that it is possible to generate high-resolution and hyper-realistic images. However, the images produced by GANs are only as fair and representative as the datasets on which they are trained. In this paper, we propose a method for directly modifying a pre-trained StyleGAN2 model to generate a balanced set of images with respect to one attribute (e.g., eyeglasses) or multiple attributes (e.g., gender and eyeglasses). Our method takes advantage of the style space of the StyleGAN2 model to perform disentangled control of the target attributes to be debiased. It does not require training additional models and debiases the GAN model directly, paving the way for its use in various downstream applications. Our experiments show that our method successfully debiases the GAN model within a few minutes without compromising the quality of the generated images. To promote fair generative models, we share the code and debiased models at http://catlab-team.github.io/fairstyle.
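At a high level, the style-space manipulation the abstract describes can be sketched in a few lines: sample latents, map them into StyleGAN2's style space, add an offset to the channel controlling the target attribute, and search for the offset at which a pretrained attribute classifier reports a balanced output distribution. The following is a minimal Python sketch of that idea, not the authors' released implementation; `sample_s`, `synth_from_s`, and `classify` are hypothetical callables the caller must supply (wrapping a StyleGAN2 checkpoint and an attribute detector), and the simple proportional update stands in for the paper's actual channel-manipulation procedure.

    # Hypothetical sketch of single-channel style-space debiasing.
    # `sample_s(n)` is assumed to return a list of per-layer style tensors
    # for n random latents; `synth_from_s(s)` renders images from styles;
    # `classify(images)` returns a 0/1 tensor marking the target attribute.
    import torch

    @torch.no_grad()
    def attribute_rate(sample_s, synth_from_s, classify,
                       layer, channel, shift, n=256):
        # Fraction of n generated images showing the target attribute
        # after adding `shift` to one style-space channel.
        s = sample_s(n)
        s[layer][:, channel] += shift        # manipulate the chosen channel
        preds = classify(synth_from_s(s))    # 1 if attribute detected
        return preds.float().mean().item()

    @torch.no_grad()
    def find_balancing_shift(sample_s, synth_from_s, classify,
                             layer, channel,
                             target=0.5, lr=2.0, steps=20, tol=0.02):
        # Search for the channel offset at which the attribute appears
        # at roughly the target rate (e.g., 50% of faces with eyeglasses).
        shift = 0.0
        for _ in range(steps):
            rate = attribute_rate(sample_s, synth_from_s, classify,
                                  layer, channel, shift)
            if abs(rate - target) < tol:
                break
            shift += lr * (target - rate)    # push toward the balanced rate
        return shift

Once a balancing offset is found, it can in principle be folded into the bias of the corresponding affine style layer, so the debiased generator needs no per-sample intervention; this is consistent with the abstract's claim that the pre-trained model is modified directly. Debiasing joint attributes (e.g., gender and eyeglasses) would repeat the search over one channel group per attribute while measuring the joint attribute distribution.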

C. E. Karakas and A. Dirik—Equal contributions.


Notes

  1. https://github.com/NVlabs/stylegan2.

  2. https://github.com/genforce/fairgen.

  3. https://github.com/RameenAbdal/StyleFlow.


Acknowledgments

This publication was produced with support from the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No. 118C321).

Author information


Corresponding author

Correspondence to Alara Dirik.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 5242 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Karakas, C.E., Dirik, A., Yalçınkaya, E., Yanardag, P. (2022). FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13673. Springer, Cham. https://doi.org/10.1007/978-3-031-19778-9_33

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-19778-9_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19777-2

  • Online ISBN: 978-3-031-19778-9

  • eBook Packages: Computer Science, Computer Science (R0)
