Efficient Anti-Symmetrization of a Neural Network Layer by Taming the Sign Problem

Year:    2023

Author:    Nilin Abrahamsen, Lin Lin

Journal of Machine Learning, Vol. 2 (2023), Iss. 3 : pp. 211–240

Abstract

Explicit antisymmetrization of a neural network is a potential candidate for a universal function approximator for generic antisymmetric functions, which are ubiquitous in quantum physics. However, this procedure is a priori factorially costly to implement, making it impractical for large numbers of particles. The strategy also suffers from a sign problem. Namely, due to near-exact cancellation of positive and negative contributions, the magnitude of the antisymmetrized function may be significantly smaller than before antisymmetrization. We show that the antisymmetric projection of a two-layer neural network can be evaluated efficiently, opening the door to using a generic antisymmetric layer as a building block in antisymmetric neural network Ansatzes. This approximation is effective when the sign problem is controlled, and we show that this property depends crucially on the choice of activation function under standard Xavier/He initialization methods. As a consequence, using a smooth activation function requires rescaling of the neural network weights compared to standard initializations.
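To make the factorial cost concrete, here is a minimal sketch (not the paper's efficient algorithm) of explicit antisymmetrization: the projection A f(x) = (1/n!) Σ_π sgn(π) f(x_π) applied by brute force to a toy two-layer tanh network. All weights, sizes, and variable names below are illustrative assumptions; the loop over all n! permutations is exactly what makes this naive approach impractical for large n.

```python
import itertools
import math
import numpy as np

def parity(perm):
    """Sign of a permutation: (-1)^(number of inversions)."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def antisymmetrize(f, n):
    """Return A f(x) = (1/n!) sum_pi sgn(pi) f(x_pi).

    Brute-force: evaluates f once per permutation, i.e. n! times.
    """
    def Af(x):
        total = 0.0
        for perm in itertools.permutations(range(n)):
            total += parity(perm) * f(x[list(perm)])
        return total / math.factorial(n)
    return Af

# Toy two-layer network phi(x) = c . tanh(W x + b) with random weights
# (hypothetical sizes, for illustration only).
rng = np.random.default_rng(0)
n = 4  # "particles" in 1D; already n! = 24 evaluations per call
W = rng.normal(size=(8, n))
b = rng.normal(size=8)
c = rng.normal(size=8)
phi = lambda x: float(c @ np.tanh(W @ x + b))

Aphi = antisymmetrize(phi, n)
x = rng.normal(size=n)
x_swapped = x.copy()
x_swapped[[0, 1]] = x_swapped[[1, 0]]

# Antisymmetry: swapping two coordinates flips the sign of A phi.
print(Aphi(x), Aphi(x_swapped))
```

The sign problem the abstract describes shows up here as cancellation among the 24 signed terms: |A phi(x)| is typically far smaller than |phi(x)|, and how severe the cancellation is depends on the activation function and weight scale.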

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/10.4208/jml.230703


Published online:    2023-01

Copyright:    © Global Science Press

Pages:    30

Keywords:    Fermions, Sign problem, Neural quantum states.

Author Details

Nilin Abrahamsen

Lin Lin