Encoding Involutory Invariance in Neural Networks

Citation:

Anwesh Bhattacharya, Marios Mattheakis, and Pavlos Protopapas. 2022. “Encoding Involutory Invariance in Neural Networks.” In IJCNN at IEEE World Congress on Computational Intelligence. Publisher's version: https://tinyurl.com/y72zsyr3

Abstract:

In certain situations, Neural Networks (NNs) are trained on data that obey underlying physical symmetries. However, an NN is not guaranteed to respect a symmetry unless it is embedded in the network structure. In this work, we explore a special kind of symmetry in which functions are invariant with respect to involutory linear/affine transformations up to parity p = ±1. We develop mathematical theorems and propose NN architectures that ensure invariance and universal approximation properties. Numerical experiments indicate that the proposed models outperform baseline networks while respecting the imposed symmetry. An adaptation of our technique to convolutional NN classification tasks for datasets with inherent horizontal/vertical reflection symmetry is also proposed.
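To make the kind of invariance the abstract describes concrete, here is a minimal sketch of a generic symmetrization construction (an illustration only, not the paper's exact architecture): given any base function f, e.g. a network's forward pass, and an involutory linear transformation T (so T applied twice is the identity), the average g(x) = (f(x) + p·f(Tx))/2 satisfies g(Tx) = p·g(x) exactly, for either parity p = ±1. The function names and the choice of T below are assumptions for the example.

```python
import numpy as np

def symmetrize(f, T, p=1):
    """Wrap f so the result satisfies g(T @ x) = p * g(x) for involutory T.

    Works because T @ T = I and p**2 = 1:
        g(Tx) = (f(Tx) + p*f(T@T@x))/2 = (f(Tx) + p*f(x))/2 = p*g(x).
    """
    def g(x):
        return 0.5 * (f(x) + p * f(T @ x))
    return g

# Example involution: reflection of the first coordinate (T @ T == I).
T = np.array([[-1.0, 0.0],
              [0.0,  1.0]])

# An arbitrary, non-symmetric base function standing in for a trained NN.
f = lambda x: x[0] ** 3 + x[1] ** 2 + x[0] * x[1]

g_even = symmetrize(f, T, p=+1)   # invariant:      g(Tx) = +g(x)
g_odd = symmetrize(f, T, p=-1)    # anti-invariant: g(Tx) = -g(x)

x = np.array([0.7, -1.3])
assert np.isclose(g_even(T @ x), g_even(x))
assert np.isclose(g_odd(T @ x), -g_odd(x))
```

The same wrapper applies to reflection symmetry in image classifiers: with T a horizontal flip, averaging the logits of the original and flipped inputs yields an exactly reflection-invariant classifier.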

Last updated on 04/26/2022