Self-supervised amodal video object segmentation
Yao J, Hong Y, Wang C, Xiao T, He T, Locatello F, Wipf D, Fu Y, Zhang Z. 2022. Self-supervised amodal video object segmentation. 36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems.
https://doi.org/10.48550/arXiv.2210.12733
[Preprint]
Conference Paper
| Published
| English
Author
Yao, Jian;
Hong, Yuxin;
Wang, Chiyu;
Xiao, Tianjun;
He, Tong;
Locatello, Francesco;
Wipf, David;
Fu, Yanwei;
Zhang, Zheng
Abstract
Amodal perception requires inferring the full shape of an object that is partially occluded. This task is challenging on two levels: (1) it requires more information than is contained in the instantaneous retinal or imaging-sensor input, and (2) it is difficult to obtain enough well-annotated amodal labels for supervision. To this end, this paper develops a new framework, Self-supervised amodal Video object segmentation (SaVos). Our method efficiently leverages the visual information of video temporal sequences to infer the amodal masks of objects. The key intuition is that the occluded part of an object can be explained away if that part is visible in other frames, possibly deformed, as long as the deformation can be reasonably learned.
Accordingly, we derive a novel self-supervised learning paradigm that efficiently utilizes the visible object parts as supervision to guide training on videos. In addition to learning a type prior for completing the masks of known object types, SaVos also learns a spatiotemporal prior, which is likewise useful for the amodal task and can generalize to unseen types. The proposed framework achieves state-of-the-art performance on the synthetic amodal segmentation benchmark FISHBOWL and the real-world benchmark KINS-Video-Car. Further, it lends itself well to test-time adaptation, outperforming existing models even after transfer to a new distribution.
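The self-supervision principle described in the abstract, using object parts that are visible in a later frame as free supervision for the amodal prediction in the current frame, can be illustrated with a toy loss. This is a minimal sketch, not the authors' implementation: the nearest-neighbor warp, the integer flow field, and the mask shapes are all illustrative assumptions standing in for the learned deformation in the paper.

```python
import numpy as np

def warp(mask, flow):
    """Toy nearest-neighbor warp of a binary mask by an integer flow field.
    Illustrative stand-in for the learned/estimated deformation."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        ny, nx = y + int(flow[y, x, 0]), x + int(flow[y, x, 1])
        if 0 <= ny < h and 0 <= nx < w:
            out[ny, nx] = 1
    return out

def self_supervised_amodal_loss(amodal_pred_t, visible_t1, flow_t_to_t1):
    """Fraction of next-frame visible pixels missed by the warped amodal
    prediction. Pixels visible at t+1 must belong to the full object
    shape, so they supervise the occluded parts predicted at t without
    any amodal annotation."""
    warped = warp(amodal_pred_t, flow_t_to_t1)
    missed = np.logical_and(visible_t1 == 1, warped == 0)
    return missed.sum() / max(visible_t1.sum(), 1)
```

For example, if the amodal prediction at frame t already covers every pixel that becomes visible at frame t+1 (after warping), the loss is zero; any newly revealed pixel the prediction failed to cover increases it.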
Publishing Year
2022
Date Published
2022-10-23
Proceedings Title
36th Conference on Neural Information Processing Systems
Conference
NeurIPS: Neural Information Processing Systems
Conference Location
New Orleans, LA, United States
Conference Date
2022-11-28 – 2022-12-01
IST-REx-ID
Cite this
Yao J, Hong Y, Wang C, et al. Self-supervised amodal video object segmentation. In: 36th Conference on Neural Information Processing Systems; 2022. doi:10.48550/arXiv.2210.12733
Yao, J., Hong, Y., Wang, C., Xiao, T., He, T., Locatello, F., … Zhang, Z. (2022). Self-supervised amodal video object segmentation. In 36th Conference on Neural Information Processing Systems. New Orleans, LA, United States. https://doi.org/10.48550/arXiv.2210.12733
Yao, Jian, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello, David Wipf, Yanwei Fu, and Zheng Zhang. “Self-Supervised Amodal Video Object Segmentation.” In 36th Conference on Neural Information Processing Systems, 2022. https://doi.org/10.48550/arXiv.2210.12733.
J. Yao et al., “Self-supervised amodal video object segmentation,” in 36th Conference on Neural Information Processing Systems, New Orleans, LA, United States, 2022.
Yao, Jian, et al. “Self-Supervised Amodal Video Object Segmentation.” 36th Conference on Neural Information Processing Systems, 2022, doi:10.48550/arXiv.2210.12733.
Link(s) to Main File(s)
Access Level
Open Access
Sources
arXiv 2210.12733