Volume 2, Issue 3
Efficient Anti-Symmetrization of a Neural Network Layer by Taming the Sign Problem

Nilin Abrahamsen & Lin Lin

J. Mach. Learn., 2 (2023), pp. 211-240.

Published online: 2023-09

[An open-access article; the PDF is free to any online user.]

  • Abstract

Explicit antisymmetrization of a neural network is a potential candidate for a universal function approximator for generic antisymmetric functions, which are ubiquitous in quantum physics. However, this procedure is a priori factorially costly to implement, making it impractical for large numbers of particles. The strategy also suffers from a sign problem: due to near-exact cancellation of positive and negative contributions, the magnitude of the antisymmetrized function may be significantly smaller than before antisymmetrization. We show that the anti-symmetric projection of a two-layer neural network can be evaluated efficiently, opening the door to using a generic anti-symmetric layer as a building block in anti-symmetric neural network Ansatzes. This approximation is effective when the sign problem is controlled, and we show that this property depends crucially on the choice of activation function under standard Xavier/He initialization methods. As a consequence, using a smooth activation function requires rescaling of the neural network weights compared to standard initializations.
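
To make the factorial cost and the cancellation effect described above concrete, here is a minimal sketch of brute-force antisymmetrization applied to a toy two-layer network. The network, its tanh activation, the random weights, and the 1/N! normalization are illustrative assumptions for the demo, not the paper's construction, initialization scheme, or efficient evaluation method.

```python
# Illustrative sketch only: explicit antisymmetrization of a generic function f
# over N particles by summing all N! signed permutations of the particle rows.
import itertools
import math
import numpy as np

def antisymmetrize(f, X):
    """Naive antisymmetrizer: (1/N!) * sum_sigma sgn(sigma) f(X[sigma]).

    X has shape (N, d); looping over all N! permutations is what makes the
    explicit construction factorially costly in the particle number N.
    """
    N = X.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(N)):
        # Parity via inversion count: +1 for even permutations, -1 for odd.
        sign = (-1) ** sum(
            1 for i in range(N) for j in range(i + 1, N) if perm[i] > perm[j]
        )
        total += sign * f(X[list(perm)])
    return total / math.factorial(N)

# Toy two-layer network f(X) = sum_k tanh(w_k . vec(X) + b_k); the weights are
# random stand-ins, not an initialization studied in the paper.
rng = np.random.default_rng(0)
N, d, width = 5, 2, 16
W = rng.standard_normal((width, N * d)) / np.sqrt(N * d)
b = rng.standard_normal(width)
f = lambda X: np.tanh(W @ X.reshape(-1) + b).sum()

X = rng.standard_normal((N, d))
before, after = f(X), antisymmetrize(f, X)
# The "sign problem": near-exact cancellation can make |A f| << |f|.
print(f"|f| = {abs(before):.3e}, |A f| = {abs(after):.3e}")
```

Running this sketch already takes 5! = 120 network evaluations for only five particles, which is why the paper's efficient evaluation of the anti-symmetric projection matters for larger systems.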


  • Copyright

COPYRIGHT: © Global Science Press

ISSN: 2790-2048
DOI: https://doi.org/10.4208/jml.230703
URL: http://global-sci.org/intro/article_detail/jml/22013.html
Keywords: Fermions, Sign problem, Neural quantum states.

Abrahamsen, Nilin and Lin, Lin. (2023). Efficient Anti-Symmetrization of a Neural Network Layer by Taming the Sign Problem. Journal of Machine Learning. 2 (3). 211-240. doi:10.4208/jml.230703