Masked Siamese ConvNets
Abstract
Self-supervised learning has shown superior performance to supervised methods on various vision benchmarks. The Siamese network, which encourages embeddings to be invariant to distortions, is one of the most successful approaches to self-supervised visual representation learning. Among augmentation methods, masking is the most general and straightforward: it can potentially be applied to all kinds of input and requires the least domain knowledge. However, masked Siamese networks require particular inductive biases and, in practice, work well only with Vision Transformers. This work empirically studies the problems behind masked Siamese networks with ConvNets. We propose several empirical designs that overcome these problems step by step. Our method performs competitively on low-shot image classification and outperforms previous methods on object detection benchmarks. We discuss several remaining issues and hope this work provides useful data points for future general-purpose self-supervised learning.
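The masking augmentation the abstract refers to can be sketched as patch-wise erasing: an image is divided into a grid of patches, a random subset is zeroed out, and two independently masked views form the input pair for the Siamese branches. The function below is a minimal NumPy illustration of this idea; the patch size and mask ratio are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def random_patch_mask(img, patch=16, mask_ratio=0.25, rng=None):
    """Zero out a random subset of non-overlapping patches.

    img: float array of shape (C, H, W); H and W divisible by `patch`.
    Illustrative sketch only -- not the paper's exact masking scheme.
    """
    rng = rng or np.random.default_rng()
    c, h, w = img.shape
    gh, gw = h // patch, w // patch
    n_patches = gh * gw
    # Pick which grid cells to erase.
    idx = rng.choice(n_patches, size=int(n_patches * mask_ratio), replace=False)
    out = img.copy()
    for i in idx:
        row, col = divmod(i, gw)
        out[:, row * patch:(row + 1) * patch, col * patch:(col + 1) * patch] = 0.0
    return out

# Two independently masked views form the Siamese pair.
img = np.random.rand(3, 224, 224).astype(np.float32)
view1 = random_patch_mask(img)
view2 = random_patch_mask(img)
```

Because the two views share the same underlying image but differ in which regions are erased, a Siamese objective that pulls their embeddings together encourages representations that are stable under missing content.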
BibTeX
@article{Jing2022,
author = {Jing, Li and Zhu, Jiachen and LeCun, Yann},
journal = {arXiv preprint arXiv:2206.07700},
title = {Masked Siamese ConvNets},
year = {2022},
}