arxiv:2410.01131

nGPT: Normalized Transformer with Representation Learning on the Hypersphere

Published on Oct 1, 2024
Authors:
Ilya Loshchilov, Cheng-Ping Hsieh, Simeng Sun, Boris Ginsburg

Abstract

We propose a novel neural network architecture, the normalized Transformer (nGPT), with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices, and hidden states are normalized to unit norm. The input stream of tokens travels on the surface of a hypersphere, with each layer contributing a displacement towards the target output predictions. These displacements are defined by the MLP and attention blocks, whose vector components also reside on the same hypersphere. Experiments show that nGPT learns much faster, reducing the number of training steps required to achieve the same accuracy by a factor of 4 to 20, depending on the sequence length.
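
As a rough intuition for the update described in the abstract, here is a minimal NumPy sketch (not the official implementation): hidden states are kept on the unit hypersphere, each block proposes a normalized direction, and the state takes a small step toward it before being renormalized. The helper names and the scalar step size `alpha` are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def unit_norm(x, axis=-1, eps=1e-8):
    """Project vectors onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ngpt_style_update(h, block, alpha=0.1):
    """One normalized-residual step, per the abstract's description:
    move h toward the block's (normalized) output by a fraction alpha,
    then renormalize so the hidden state stays on the hypersphere."""
    h = unit_norm(h)                    # token states start on the sphere
    target = unit_norm(block(h))        # block output, also unit norm
    h = h + alpha * (target - h)        # displacement toward the block output
    return unit_norm(h)                 # retract back onto the sphere

# Toy usage: a random linear map stands in for an attention or MLP block.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)) / 8.0
h = rng.standard_normal((4, 64))        # 4 tokens, 64-dim hidden states
h = ngpt_style_update(h, lambda x: x @ W)
print(np.linalg.norm(h, axis=-1))       # ~1.0 for every token
```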

Community

Hi, this was a wonderful read. We summarised this paper and a few others in our biweekly blog.

  1. nGPT: Normalized Transformer with Representation Learning on the Hypersphere
  2. LAUREL: Learned Augmented Residual Layer
  3. TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters

Please give it a read and share your thoughts/feedback.

Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 5