Few-shot Image Generation via Cross-domain Correspondence
CVPR 2021
Utkarsh Ojha    Yijun Li    Jingwan Lu    Alexei A. Efros    Yong Jae Lee    Eli Shechtman    Richard Zhang


Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain for pretraining and transfer the diversity information from source to target. We propose to preserve the relative similarities and differences between instances in the source via a novel cross-domain distance consistency loss. To further reduce overfitting, we present an anchor-based strategy to encourage different levels of realism over different regions in the latent space. With extensive results in both photorealistic and non-photorealistic domains, we demonstrate qualitatively and quantitatively that our few-shot model automatically discovers correspondences between source and target domains and generates more diverse and realistic images than previous methods.
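The cross-domain distance consistency idea above can be sketched in a few lines: for a batch of latent codes, compare the softmax-normalized pairwise similarity structure of the source generator's features against that of the adapted generator, and penalize divergence between the two. This is a minimal NumPy illustration, not the paper's implementation; the feature inputs, the choice of cosine similarity, and the direction of the KL term are assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pairwise_cosine(feats):
    # feats: (N, D) features for N latent codes; returns (N, N) similarities
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def distance_consistency_loss(source_feats, target_feats):
    """KL divergence between the softmax-normalized pairwise-similarity
    distributions of the source and adapted generators, computed over
    off-diagonal entries (self-similarity is excluded)."""
    n = source_feats.shape[0]
    mask = ~np.eye(n, dtype=bool)
    s = pairwise_cosine(source_feats)[mask].reshape(n, n - 1)
    t = pairwise_cosine(target_feats)[mask].reshape(n, n - 1)
    p, q = softmax(s), softmax(t)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

# Identical similarity structure gives zero loss; diverging structure is penalized.
rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8))
g = rng.normal(size=(4, 8))
print(distance_consistency_loss(f, f))  # ~0.0
print(distance_consistency_loss(f, g) > 0)
```

Minimizing this term encourages the adapted generator to keep the relative similarities and differences between instances that the source generator already exhibits, which is how the method transfers diversity from the source domain.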



Natural face to other artistic-face domains

We show the results of adapting a source GAN trained on natural faces (FFHQ) to different target domains using just 10 examples from each of them.

More examples on related source/target domains

We repeat the same process for other pairs of related source/target domains, again with only 10 training examples.

Unrelated source and target domains

In this case, the method cannot accurately model the distribution defined by the few target images. However, it still discovers some interesting part-level correspondences.

Comparison with other methods

All methods are adaptation techniques that start from the same source model (trained on FFHQ) and adapt to each target domain using 10 examples.

Effect of the training data size


U. Ojha, Y. Li, J. Lu, A. A. Efros,
Y. J. Lee, E. Shechtman, R. Zhang.
Few-shot Image Generation via Cross-domain Correspondence
CVPR, 2021.
(hosted on arXiv)



This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.