Volume 2, Issue 3
Why Self-Attention Is Natural for Sequence-to-Sequence Problems? A Perspective from Symmetries

Chao Ma & Lexing Ying

J. Mach. Learn., 2 (2023), pp. 194-210.

Published online: 2023-09

  • Abstract

In this paper, we show that structures similar to self-attention are natural for learning many sequence-to-sequence problems from the perspective of symmetry. Inspired by language processing applications, we study the orthogonal equivariance of seq2seq functions with knowledge, which are functions that take two inputs, an input sequence and a knowledge set, and output another sequence. The knowledge set consists of vectors in the same embedding space as the input sequence and encodes information about the language used to process the input sequence. We show that orthogonal equivariance in the embedding space is natural for seq2seq functions with knowledge, and that under such equivariance the function must take a form close to self-attention. This shows that network structures similar to self-attention are the right structures for representing the target function of many seq2seq problems. The representation can be further refined if a finite information principle is considered, or if permutation equivariance holds for the elements of the input sequence.
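As a minimal illustration of the symmetry discussed above (a sketch, not the paper's construction), self-attention built purely from inner products in the embedding space is orthogonally equivariant: rotating every input vector by an orthogonal matrix rotates the output the same way, because the attention scores depend only on inner products, which orthogonal maps preserve. A numpy check:

```python
import numpy as np

def attention(X):
    # Plain dot-product self-attention with no learned projections:
    # scores depend on the inputs only through inner products x_i . x_j.
    scores = X @ X.T
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))                   # 5 tokens in R^4
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal matrix

# Equivariance: rotating the embedding space commutes with attention,
# since (XQ)(XQ)^T = X Q Q^T X^T = X X^T.
lhs = attention(X @ Q)
rhs = attention(X) @ Q
print(np.allclose(lhs, rhs))  # True
```

With learned projection matrices the same identity holds once the weights are transformed accordingly, which is the sense in which attention-like forms are compatible with the orthogonal symmetry.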


  • Copyright

COPYRIGHT: © Global Science Press

  • BibTeX
@Article{JML-2-194,
  author  = {Ma, Chao and Ying, Lexing},
  title   = {Why Self-Attention Is Natural for Sequence-to-Sequence Problems? A Perspective from Symmetries},
  journal = {Journal of Machine Learning},
  year    = {2023},
  volume  = {2},
  number  = {3},
  pages   = {194--210},
  issn    = {2790-2048},
  doi     = {10.4208/jml.221206},
  url     = {http://global-sci.org/intro/article_detail/jml/22012.html}
}
Keywords: Self-attention, Symmetry, Orthogonal equivariance, Permutation equivariance.