Year: 2023
Authors: Chao Ma, Lexing Ying
Journal of Machine Learning, Vol. 2 (2023), Iss. 3 : pp. 194–210
Abstract
In this paper, we show that structures similar to self-attention arise naturally for learning many sequence-to-sequence problems from the perspective of symmetry. Inspired by language processing applications, we study the orthogonal equivariance of seq2seq functions with knowledge, i.e., functions that take two inputs, an input sequence and a knowledge set, and output another sequence. The knowledge consists of a set of vectors in the same embedding space as the input sequence and encodes information about the language used to process the input sequence. We show that orthogonal equivariance in the embedding space is natural for seq2seq functions with knowledge, and that under such equivariance the function must take a form close to self-attention. This shows that network structures similar to self-attention are the right structures for representing the target functions of many seq2seq problems. The representation can be further refined if a finite information principle is imposed or if a permutation equivariance holds over the elements of the input sequence.
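The symmetry in question can be checked numerically. The following NumPy sketch uses a simplified attention-style map f(X, K) = K softmax(KᵀX) (an illustration chosen here, not the paper's exact construction) to verify that such a map is equivariant under a joint orthogonal transformation Q of the shared embedding space: because the attention weights depend only on the inner products KᵀX, they are unchanged when both inputs are rotated, so f(QX, QK) = Q f(X, K).

import numpy as np

def softmax(a, axis=0):
    # Numerically stable softmax along the given axis.
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, K):
    # Columns of X: input sequence; columns of K: knowledge vectors.
    # Weights depend only on inner products K^T X, which are invariant
    # under a joint orthogonal change of basis of the embedding space.
    return K @ softmax(K.T @ X, axis=0)

rng = np.random.default_rng(0)
d, n, m = 8, 5, 7                      # embedding dim, sequence length, knowledge size
X = rng.standard_normal((d, n))
K = rng.standard_normal((d, m))

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix

# Orthogonal equivariance: f(QX, QK) = Q f(X, K).
lhs = attention(Q @ X, Q @ K)
rhs = Q @ attention(X, K)
print(np.allclose(lhs, rhs))           # True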
Journal Article Details
Publisher Name: Global Science Press
Language: English
DOI: https://doi.org/10.4208/jml.221206
Published online: 2023-01
Copyright: © Global Science Press
Pages: 17
Keywords: Self-attention, Symmetry, Orthogonal equivariance, Permutation equivariance.