Self-Supervised Semantic Segmentation in the Wild

Overview 

Self-supervised learning (SSL) enables the learning of effective task-agnostic representations that generalize to a wide range of downstream applications. Recent advances in SSL combine strong augmentation pipelines with pretext tasks to achieve results competitive with supervised learning while using a fraction of the labels. The goal of this project is to transfer the success of SSL to real-world applications. SSL in the wild specifically targets semantic segmentation, where pixel-level annotation is costly and time-consuming for human labelers.
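As a concrete illustration of the augmentation-plus-pretext-task idea, below is a minimal NumPy sketch of a SimCLR-style contrastive (NT-Xent) objective, one common SSL pretext task. The function name, shapes, and temperature value are illustrative assumptions, not the project's actual training objective.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss (illustrative sketch, not the project's code).

    z1, z2: (N, D) embeddings where z1[i] and z2[i] come from two
    augmented views of the same image (a positive pair); all other
    rows in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n = z1.shape[0]
    # each row i is paired with row i + n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each image together while pushing apart all other images in the batch, which is what lets the encoder learn useful features without any labels.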

Researchers

  • Colorado Reed, UC Berkeley
  • Tete Xiao, UC Berkeley
  • Trevor Darrell, UC Berkeley
  • Konstantinos Kallidromitis, Panasonic
  • Kazuki Kozuka, Panasonic
  • Yusuke Kato, Panasonic

Additional Details

The benchmark chosen for the project is the Berkeley DeepDrive 100K dataset (BDD100K), since it covers diverse weather conditions, illumination levels, and object categories. Our baseline is expected to produce masks that outperform previous state-of-the-art methods and are robust to real-world noise in the data. Finally, Panasonic aims to apply the algorithm internally to a large fisheye-lens driving dataset and achieve results competitive with supervised learning using ~10% of the labels.
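Segmentation quality on benchmarks like BDD100K is conventionally reported as mean intersection-over-union (mIoU) between predicted and ground-truth masks. The following is a minimal NumPy sketch of that metric; the function name and the `ignore_index=255` convention for unlabeled pixels are assumptions for illustration.

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Mean IoU over classes (illustrative sketch).

    pred, target: integer class-label arrays of the same shape;
    pixels labeled `ignore_index` in `target` are excluded.
    """
    valid = target != ignore_index
    pred, target = pred[valid], target[valid]
    # build a confusion matrix by bin-counting (target, pred) index pairs
    cm = np.bincount(num_classes * target + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(float)                 # correctly classified pixels
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter   # pred + target - overlap
    present = union > 0                               # skip classes absent from both
    return (inter[present] / union[present]).mean()
```

Averaging IoU over classes rather than pixels keeps rare classes (e.g. riders or traffic signs in driving scenes) from being drowned out by dominant ones like road and sky.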