Abstract: Masked Image Modeling (MIM) is a promising self-supervised learning approach that enables learning from unlabeled images. Despite its recent success, learning good representations through MIM remains challenging because it requires predicting the right semantic content in accurate locations. For example, given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location (a). To address this, we follow LeCun et al. (b), who suggested using a latent variable to capture uncertainty.
We propose to incorporate location uncertainty into MIM by using stochastic positional embeddings (StoP). In this case, the latent variable Z is Gaussian noise, which is used to limit the information content of the positional embeddings. Specifically, we condition the model on stochastic masked token positions drawn from a Gaussian distribution. We show that using StoP reduces overfitting to location features and guides the model toward learning features that are more robust to location uncertainty.
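To make the idea concrete, here is a minimal sketch (not the authors' implementation) of one way to realize stochastic positions: jitter each token's position with Gaussian noise of scale `sigma` before computing a standard sinusoidal positional embedding. The function names and the `sigma` value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sincos_pos_embed(num_positions, dim):
    """Standard fixed sinusoidal positional embeddings (deterministic)."""
    pos = np.arange(num_positions)[:, None]          # (N, 1)
    i = np.arange(dim // 2)[None, :]                 # (1, D/2)
    angles = pos / (10000 ** (2 * i / dim))          # (N, D/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (N, D)

def stochastic_pos_embed(num_positions, dim, sigma=0.25, rng=None):
    """Hypothetical sketch: embed noisy positions pos + N(0, sigma^2),
    so the model only sees an uncertain estimate of each masked
    token's location rather than its exact coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    noisy_pos = np.arange(num_positions) + rng.normal(0.0, sigma, num_positions)
    i = np.arange(dim // 2)[None, :]
    angles = noisy_pos[:, None] / (10000 ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
```

Each forward pass draws fresh noise, so the positional signal carries only limited location information, which is the intuition behind limiting the information content of Z.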
Quantitatively, StoP improves MIM performance on a variety of downstream tasks. For example, linear probing on ImageNet is improved by +1.7 with ViT-B, and by +2.5 with ViT-H using 1% of the data. Our experiments and visualizations show that by modeling location uncertainty with StoP, models overfit less to location features and learn representations that are more robust to location uncertainty, outperforming existing MIM methods across datasets and downstream tasks and highlighting the potential of this approach for self-supervised learning.
- Amir Bar (BAIR)
- Trevor Darrell (BAIR)
- Yann LeCun (Meta AI)