The objective of this proposed collaboration is to explore self-supervised learning beyond the current paradigm of exploiting instance discrimination and contrastive learning, as current exploitation-driven research may be circling a local optimum while larger-scale algorithmic changes are needed.
Tete Xiao, UC Berkeley, http://tetexiao.com/
Piotr Dollár, Facebook AI Research, https://pdollar.github.io/
Ross Girshick, Facebook AI Research, https://www.rossgirshick.info/
Trevor Darrell, UC Berkeley, https://people.eecs.berkeley.edu/~trevor/
Self-supervised learning, which uses raw image data and labels derived from it rather than human supervision, has become increasingly popular as the shortcomings of supervised learning have become apparent. Research on self-supervised learning has led to rapid progress on the most commonly used benchmarks, and this exploitation of the current paradigm continues at an astonishing pace.
Despite this remarkable progress, self-supervised learning has not yet delivered on two of its most important promises over supervised learning: 1) scalability with respect to virtually unlimited data, and 2) generality of the learned features. Existing studies of the scaling properties of self-supervised learning demonstrate only minor gains when the amount of unlabeled data is increased by three orders of magnitude (from 1M to 1B images), which undermines the key motivation that unsupervised learning can benefit from data far beyond what can feasibly be labeled by hand.
The objective of this proposed collaboration is to explore self-supervised learning beyond the current paradigm of exploiting instance discrimination and contrastive learning. The success of the developed algorithms will be judged on well-established benchmarks for image classification, object detection, and segmentation.
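For concreteness, the paradigm the proposal aims to move beyond can be summarized by the InfoNCE-style contrastive objective commonly used for instance discrimination: each image embedding (the "query") must identify its own augmented view (the "positive key") among all other images in the batch (the "negatives"). The following is a minimal NumPy sketch under stated assumptions; the function name, signature, and temperature value are illustrative choices, not part of the proposal.

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    """InfoNCE contrastive loss over a batch of embeddings.

    queries, keys: (N, D) arrays, where keys[i] is the positive
    (e.g. a second augmented view) for queries[i]; every other key
    in the batch acts as a negative (instance discrimination).
    """
    # L2-normalize so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: each query matches its own key.
    return -np.mean(np.diag(log_prob))
```

Matched query/key pairs should yield a much lower loss than mismatched ones, which is exactly the discrimination signal that current methods exploit.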