In the past, GANs needed a lot of data to learn how to generate well. The faces model, for example, was trained on **70k** high-quality images from Flickr.
However, in May 2020, researchers around the world independently converged on a simple technique that reduces that number to as low as **1-2k**. The idea is to differentiably augment all images, generated or real, before they go into the discriminator during training.
If the augmentation probability is kept low enough, the augmentations should not 'leak' into the generations.
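As a rough illustration of probabilistic augmentation (a hypothetical numpy sketch, not this library's actual GPU pipeline; the real ops must also be differentiable, which a flip is since it only permutes pixels), each image in a batch is augmented with probability `prob`:

```python
import numpy as np

def random_augment(images, prob=0.25, rng=None):
    # Hypothetical sketch: horizontally flip each image with probability
    # `prob`. `images` is assumed to be a batch in (N, H, W, C) layout.
    rng = np.random.default_rng() if rng is None else rng
    out = images.copy()
    for i in range(len(out)):
        if rng.random() < prob:
            out[i] = out[i][:, ::-1]  # flip along the width axis
    return out
```

In the actual technique, the same augmentation is applied to both real and generated batches before the discriminator sees them, so the discriminator never learns to distinguish "augmented" from "clean".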
In the low-data setting, you can enable this feature with a simple flag.
```bash
# find a suitable probability between 0. -> 0.7 at maximum
$ stylegan2_pytorch --data ./data --aug-prob 0.25
```
## Attention
This framework also allows you to add an efficient form of self-attention to the designated layers of the discriminator (and the symmetric layer of the generator), which will greatly improve results. The more attention you can afford, the better!
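To sketch what "efficient" self-attention means here (a hypothetical numpy toy, not the repository's implementation): linear attention normalizes queries and keys separately, so the key-value product can be contracted first, giving cost linear in the number of spatial positions instead of quadratic:

```python
import numpy as np

def linear_attention(x, dim_head=16, seed=0):
    # Toy linear self-attention over a flattened feature map x of shape
    # (n, d). Softmax is applied to q over features and to k over the
    # sequence, so context = k^T v is a small (dim_head, dim_head) matrix
    # and total cost is O(n * d * dim_head), not O(n^2).
    rng = np.random.default_rng(seed)
    n, d = x.shape
    wq, wk, wv = (rng.standard_normal((d, dim_head)) * 0.02 for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    q = np.exp(q - q.max(axis=-1, keepdims=True))
    q /= q.sum(axis=-1, keepdims=True)   # softmax over the feature dim
    k = np.exp(k - k.max(axis=0, keepdims=True))
    k /= k.sum(axis=0, keepdims=True)    # softmax over the sequence dim
    context = k.T @ v                    # (dim_head, dim_head)
    return q @ context                   # (n, dim_head)
```

Because the per-layer cost stays linear in resolution, this form of attention can be placed even at the higher-resolution layers of the networks.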
Thank you to Matthew Mann for his inspiring [simple port](https://github.com/man
eprint = {2006.02595},
archivePrefix = {arXiv}
}
```
```bibtex
@misc{karras2020training,
title = {Training Generative Adversarial Networks with Limited Data},
author = {Tero Karras and Miika Aittala and Janne Hellsten and Samuli Laine and Jaakko Lehtinen and Timo Aila},
    year = {2020},
    eprint = {2006.06676},
    archivePrefix = {arXiv}
}
```