Divided attention: Unsupervised multi-object discovery with contextually separated slots
Lao D, Hu Z, Locatello F, Yang Y, Soatto S. 2024. Divided attention: Unsupervised multi-object discovery with contextually separated slots. 1st Conference on Parsimony and Learning. CPAL: Conference on Parsimony and Learning.
Conference Paper | Published | English
Author
Lao, Dong;
Hu, Zhengyang;
Locatello, Francesco (ISTA);
Yang, Yanchao;
Soatto, Stefano
Abstract
We introduce a method to segment the visual field into independently moving regions, trained with no ground truth or supervision. It consists of an adversarial conditional encoder-decoder architecture based on Slot Attention, modified to use the image as context to decode optical flow without attempting to reconstruct the image itself. In the resulting multi-modal representation, one modality (flow) feeds the encoder to produce separate latent codes (slots), whereas the other modality (image) conditions the decoder to generate the first (flow) from the slots. This design frees the representation from having to encode complex nuisance variability in the image due to, for instance, illumination and reflectance properties of the scene. Since customary autoencoding based on minimizing the reconstruction error does not preclude the entire flow from being encoded into a single slot, we modify the loss to an adversarial criterion based on Contextual Information Separation. The resulting min-max optimization fosters the separation of objects and their assignment to different attention slots, leading to Divided Attention, or DivA. DivA outperforms recent unsupervised multi-object motion segmentation methods while tripling run-time speed up to 104 FPS and reducing the performance gap from supervised methods to 12% or less.  DivA can handle different numbers of objects and different image sizes at training and test time, is invariant to permutation of object labels, and does not require explicit regularization.
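The competitive slot update at the core of Slot Attention, on which this architecture builds, can be sketched as follows. This is a minimal NumPy illustration of the generic mechanism, not the authors' implementation: the softmax is taken over the slot axis, so slots compete for input features (here standing in for encoded optical-flow features); all variable names and sizes are hypothetical, and the learned projections and GRU update of the full method are omitted.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_step(slots, inputs, scale):
    # slots: (K, D) act as queries; inputs: (N, D) act as keys and values.
    # Normalizing over the SLOT axis (axis=1) makes slots compete
    # for each input feature, which drives the partition of the input.
    attn = softmax(inputs @ slots.T * scale, axis=1)   # (N, K)
    weights = attn / attn.sum(axis=0, keepdims=True)   # per-slot normalization
    return weights.T @ inputs                          # (K, D) weighted means

rng = np.random.default_rng(0)
K, N, D = 4, 64, 16                     # hypothetical sizes
slots = rng.normal(size=(K, D))         # initial slots
flow_feats = rng.normal(size=(N, D))    # stand-in for encoded flow features
for _ in range(3):                      # a few competitive iterations
    slots = slot_attention_step(slots, flow_feats, D ** -0.5)
print(slots.shape)  # → (4, 16)
```

In the paper's variant, the decoder that reconstructs flow from these slots is additionally conditioned on the image, and the reconstruction loss is replaced by the adversarial Contextual Information Separation criterion described above.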
Publishing Year
2024
Date Published
2024-01-03
Proceedings Title
1st Conference on Parsimony and Learning
Conference
CPAL: Conference on Parsimony and Learning
Conference Location
Hong Kong, China
Conference Date
2024-01-03 – 2024-01-03
IST-REx-ID
Cite this
Lao D, Hu Z, Locatello F, Yang Y, Soatto S. Divided attention: Unsupervised multi-object discovery with contextually separated slots. In: 1st Conference on Parsimony and Learning. ; 2024.
Lao, D., Hu, Z., Locatello, F., Yang, Y., & Soatto, S. (2024). Divided attention: Unsupervised multi-object discovery with contextually separated slots. In 1st Conference on Parsimony and Learning. Hong Kong, China.
Lao, Dong, Zhengyang Hu, Francesco Locatello, Yanchao Yang, and Stefano Soatto. “Divided Attention: Unsupervised Multi-Object Discovery with Contextually Separated Slots.” In 1st Conference on Parsimony and Learning, 2024.
D. Lao, Z. Hu, F. Locatello, Y. Yang, and S. Soatto, “Divided attention: Unsupervised multi-object discovery with contextually separated slots,” in 1st Conference on Parsimony and Learning, Hong Kong, China, 2024.
Lao D, Hu Z, Locatello F, Yang Y, Soatto S. 2024. Divided attention: Unsupervised multi-object discovery with contextually separated slots. 1st Conference on Parsimony and Learning. CPAL: Conference on Parsimony and Learning.
Lao, Dong, et al. “Divided Attention: Unsupervised Multi-Object Discovery with Contextually Separated Slots.” 1st Conference on Parsimony and Learning, 2024.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Main File(s)
File Name
2024_CPAL_Lao.pdf
8.04 MB
Access Level
Open Access
Date Uploaded
2024-02-12
MD5 Checksum
8fad894c34f1b3d5a14fb8ffb12f7277
Sources
arXiv 2304.01430