Learning single-image 3D reconstruction by generative modelling of shape, pose and shading

Henderson PM, Ferrari V. 2020. Learning single-image 3D reconstruction by generative modelling of shape, pose and shading. International Journal of Computer Vision. 128, 835–854.

Download
2019_CompVision_Henderson.pdf (2.24 MB, Open Access)

Journal Article | Published | English

Scopus indexed
Author
Henderson, Paul M. (ISTA); Ferrari, Vittorio
Abstract
We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.
Publishing Year
2020
Date Published
2020-04-01
Journal Title
International Journal of Computer Vision
Acknowledgement
Open access funding provided by Institute of Science and Technology (IST Austria).
Volume
128
Page
835-854

Cite this

Henderson PM, Ferrari V. Learning single-image 3D reconstruction by generative modelling of shape, pose and shading. International Journal of Computer Vision. 2020;128:835-854. doi:10.1007/s11263-019-01219-8
Henderson, P. M., & Ferrari, V. (2020). Learning single-image 3D reconstruction by generative modelling of shape, pose and shading. International Journal of Computer Vision. Springer Nature. https://doi.org/10.1007/s11263-019-01219-8
Henderson, Paul M., and Vittorio Ferrari. “Learning Single-Image 3D Reconstruction by Generative Modelling of Shape, Pose and Shading.” International Journal of Computer Vision. Springer Nature, 2020. https://doi.org/10.1007/s11263-019-01219-8.
P. M. Henderson and V. Ferrari, “Learning single-image 3D reconstruction by generative modelling of shape, pose and shading,” International Journal of Computer Vision, vol. 128. Springer Nature, pp. 835–854, 2020.
Henderson PM, Ferrari V. 2020. Learning single-image 3D reconstruction by generative modelling of shape, pose and shading. International Journal of Computer Vision. 128, 835–854.
Henderson, Paul M., and Vittorio Ferrari. “Learning Single-Image 3D Reconstruction by Generative Modelling of Shape, Pose and Shading.” International Journal of Computer Vision, vol. 128, Springer Nature, 2020, pp. 835–54, doi:10.1007/s11263-019-01219-8.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Main File(s)
Access Level
Open Access
Date Uploaded
2019-10-25
MD5 Checksum
a0f05dd4f5f64e4f713d8d9d4b5b1e3f




Sources

arXiv 1901.06447
