Learning Selective Invariance Upon Parametric Density Functions of Transformations



Symmetry is a powerful tool in the deep learning repertoire. Natural data exhibits structured variation arising from naturally occurring symmetries, so modeling these symmetries greatly simplifies learning. Indeed, one of the key factors behind the success of Convolutional Neural Networks is their ability to capture the translational symmetry of image data through layers that are equivariant or invariant to the group of image translations.
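As a toy illustration of this translational symmetry (our own minimal sketch, not code from any cited work): a 1-D circular convolution commutes with cyclic shifts, which is exactly the equivariance property the text describes.

```python
import numpy as np

def circular_conv1d(x, w):
    """1-D circular cross-correlation: out[i] = sum_j w[j] * x[(i + j) % n]."""
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k)) for i in range(n)])

def shift(x, s):
    """Cyclic shift by s positions -- the group action of a translation."""
    return np.roll(x, s)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # a toy 1-D "image"
w = rng.standard_normal(3)   # a toy filter

# Equivariance: convolving a shifted signal equals shifting the convolved signal.
assert np.allclose(circular_conv1d(shift(x, 2), w),
                   shift(circular_conv1d(x, w), 2))
```

The same commutation holds for every shift amount, which is why a convolutional layer responds to a translated input with a translated feature map.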

Existing works [1,2,3] create models with equivariant/invariant layers for a wide variety of transformation groups relevant to vision [4]. These constructions help models generalize from less training data while using significantly fewer parameters [3].
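One generic construction underlying such layers is group averaging: symmetrizing an arbitrary feature over a finite group makes it invariant. The sketch below (ours, for the four-fold rotation group C4; not necessarily the exact construction used in [1,2,3]) shows the idea with a deliberately arbitrary feature function.

```python
import numpy as np

def c4_invariant(f, img):
    """Symmetrize an arbitrary feature map f over the C4 rotation group
    by averaging f over all four 90-degree rotations of the input."""
    return np.mean([f(np.rot90(img, k)) for k in range(4)], axis=0)

# An arbitrary feature that is NOT rotation-invariant on its own.
f = lambda img: img.sum(axis=0)

rng = np.random.default_rng(1)
img = rng.standard_normal((4, 4))

# Invariance: the averaged feature is unchanged under a 90-degree rotation.
assert np.allclose(c4_invariant(f, img), c4_invariant(f, np.rot90(img)))
```

Averaging works for any finite group but its cost grows with the group size, which foreshadows the scalability concern raised below for complex, non-rigid transformations.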

These developments inspire hope for ecological invariance, where the goal is to create models that generalize across transformations found in natural data. This capability is invaluable in computer vision applications but is exceptionally challenging because: (a) natural data contains a diverse range of complex transformations, and (b) the type of invariance desired depends on the instance and its context (see Fig. 1), calling for selective invariance.

Existing methods have three drawbacks:

  1. Layers are hand-designed for each kind of transformation group. This can be time-consuming, and there are no general performance guarantees.

  2. Invariance is hardcoded into the model architecture, so the models are not adaptable.

  3. For complex, non-rigid objects like humans, the set of possible transformations increases drastically in complexity, making these approaches unscalable.

We aim to combine the work of Murphy et al. [8] on learning and parametrizing probability distributions of transformations with recent developments in implicit neural networks [5,6,7] to create a flexible, scalable, and context-aware approach to invariance.
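As a rough sketch of the intended combination, an Implicit-PDF-style model [8] scores (feature, transformation) pairs with a small network and normalizes the scores over sampled transformations to obtain a density on the transformation manifold. Everything below (the toy MLP, the random weights, the SO(2) grid) is our simplification for illustration, not the architecture of [8].

```python
import numpy as np

rng = np.random.default_rng(2)

def score(feat, angle, W1, W2):
    """Toy MLP scoring a (feature, rotation) pair; the rotation is encoded
    as (cos t, sin t). Weights here are random placeholders, not trained."""
    z = np.concatenate([feat, [np.cos(angle), np.sin(angle)]])
    return W2 @ np.tanh(W1 @ z)

feat = rng.standard_normal(8)                       # e.g. an image embedding
W1 = rng.standard_normal((16, 10))                  # 8 feature dims + 2 angle dims
W2 = rng.standard_normal(16)

# Normalize scores over a grid of sampled rotations (softmax) to obtain
# a discrete approximation of a density over SO(2).
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
logits = np.array([score(feat, a, W1, W2) for a in angles])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
assert np.isclose(probs.sum(), 1.0) and (probs >= 0).all()
```

Because the density is conditioned on the input feature, the model can in principle express instance-dependent, i.e. selective, invariance: different inputs induce different distributions over transformations.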

Acknowledgements: We thank Google for providing computing resources in the form of GCP credits.


[1]: Taco S. Cohen et al.: Group Equivariant Convolutional Networks, ICML 2016

[2]: Marc Finzi et al.: Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data, ICML 2020

[3]: Rudrasis Chakraborty et al.: SurReal: Complex-Valued Learning as Principled Transformations on a Scaling and Rotation Manifold, TNNLS, November 2020

[4]: Ethan Eade: Lie Groups for Computer Vision

[5]: L. El Ghaoui et al.: Implicit Deep Learning

[6]: Shaojie Bai et al.: Deep Equilibrium Models, NeurIPS 2019

[7]: Stephen Gould et al.: Deep Declarative Networks: A New Hope

[8]: Kieran Murphy et al.: Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold, ICML 2021