Generating Furry Cars: Disentangling Object Shape and Appearance across Multiple Domains
Utkarsh Ojha
Krishna Kumar Singh
Yong Jae Lee
[Paper]
[GitHub]


Abstract

We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars). The goal is to learn a generative model that captures an intermediate distribution, borrowing a subset of properties from each domain and thereby generating images that do not exist in either domain alone. This challenging problem requires an accurate disentanglement of object shape, appearance, and background from each domain, so that the appearance and shape factors from the two domains can be interchanged. We augment an existing approach that can disentangle factors within a single domain but struggles to do so across domains. Our key technical contribution is to represent object appearance with a differentiable histogram of visual features, and to optimize the generator so that two images with the same latent appearance factor but different latent shape factors produce similar histograms. On multiple multi-domain datasets, we demonstrate that our method leads to accurate and consistent appearance and shape transfer across domains.
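The sketch below illustrates the idea of the appearance-consistency objective described above: a differentiable (soft) histogram of visual features, and an L1 penalty encouraging two generated images that share the same appearance latent but differ in shape latent to produce similar histograms. This is a minimal illustration, not the authors' implementation; the feature extractor, bin centers, and bandwidth are assumptions made for the example.

```python
# Minimal sketch (not the paper's code) of a differentiable feature histogram
# and an appearance-consistency loss between two generated images.
import torch
import torch.nn.functional as F

def soft_histogram(features, bin_centers, bandwidth=0.1):
    """Differentiable histogram via Gaussian soft-assignment of features to bins.

    features:    (B, N, C) per-location visual features (e.g., from a conv layer)
    bin_centers: (K, C) prototype vectors acting as histogram bins (assumed fixed here)
    returns:     (B, K) normalized histogram per image
    """
    B = features.size(0)
    # Squared distance from every feature to every bin center: (B, N, K)
    d2 = torch.cdist(features, bin_centers.unsqueeze(0).expand(B, -1, -1)) ** 2
    # Soft-assign each feature to bins, then average over spatial locations
    assign = F.softmax(-d2 / (2 * bandwidth ** 2), dim=-1)  # (B, N, K)
    return assign.mean(dim=1)                               # (B, K)

def appearance_consistency_loss(feats_a, feats_b, bin_centers):
    """L1 distance between histograms of two images generated with the same
    appearance latent but different shape latents."""
    h_a = soft_histogram(feats_a, bin_centers)
    h_b = soft_histogram(feats_b, bin_centers)
    return (h_a - h_b).abs().sum(dim=-1).mean()

# Example shapes: 2 images, 64x64 = 4096 feature locations, 32-dim features, 16 bins
feats_a = torch.randn(2, 4096, 32)
feats_b = torch.randn(2, 4096, 32)
bins = torch.randn(16, 32)
loss = appearance_consistency_loss(feats_a, feats_b, bins)
```

Because the soft assignment is computed with a softmax rather than hard binning, the histogram (and hence the loss) is differentiable with respect to the generated features, so gradients can flow back into the generator.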


Method Diagram



Results

[Appearance, Shape -> Output]



Paper and Supplementary Material

U. Ojha, K.K. Singh, Y.J. Lee
Generating Furry Cars: Disentangling Object Shape and Appearance across Multiple Domains
ICLR 2021.
(hosted on OpenReview)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.