Towards Human-like Attention

Overview 

Convolutional Neural Networks (CNNs) can already match human performance on clean images, but they are far less robust. Recently proposed self-attention mechanisms appear to improve robustness, yet they still fail in many cases. However, previous studies show a close relationship between attention and robustness in the human visual system. We hypothesize that attention is key to robustness, but that self-attention is not the right formulation of it. We propose to study the neuronal foundations of human visual attention and to design a human-like attention mechanism that achieves higher robustness.
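
For context, the "self-attention" referred to above is the standard scaled dot-product formulation used in vision transformers. The following is a minimal NumPy sketch of that standard formulation for illustration only; it is not the proposed human-like mechanism or this project's code, and the function and variable names are placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Standard scaled dot-product self-attention over a set of tokens.

    x:             (n_tokens, d_model) input features (e.g., image patch embeddings)
    w_q, w_k, w_v: (d_model, d_head) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v       # project to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # pairwise similarities, scaled
    weights = softmax(scores, axis=-1)        # every token attends to every token
    return weights @ v                        # attention-weighted mix of values

# Toy usage: 16 patch tokens with 64-dimensional features.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64))
w_q, w_k, w_v = (rng.standard_normal((64, 32)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)        # shape (16, 32)
```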

Researchers

  • Baifeng Shi, UC Berkeley
  • Trevor Darrell, UC Berkeley
  • Yale Song, Microsoft
  • Neel Joshi, Microsoft
  • Xin Wang, Microsoft