arxiv:2605.06546

Efficient Pre-Training with Token Superposition

Published on May 7 · Submitted by Bowen Peng on May 13

Abstract

Token-Superposition Training (TST) improves pre-training efficiency by combining contiguous tokens into bags during a superposition phase trained with a multi-hot cross-entropy objective, achieving faster training without architectural changes.

AI-generated summary

Pre-training of Large Language Models is often prohibitively expensive and inefficient at scale, requiring complex and invasive modifications in order to achieve high data throughput. In this work, we present Token-Superposition Training (TST), a simple drop-in method that significantly improves data throughput per FLOP during pre-training without modifying the parallelism, optimizer, tokenizer, data, or model architecture. TST proceeds in two phases: (i) a highly efficient superposition phase, in which many contiguous tokens are combined into one bag and trained with a multi-hot cross-entropy (MCE) objective, and (ii) a recovery phase, in which training reverts to the standard objective. We extensively evaluate TST at the 270M and 600M parameter scales and validate it on a 3B model and a 10B-A1B mixture-of-experts model, demonstrating that it is highly robust across settings. TST consistently outperforms the baseline in loss and downstream evaluations, and under equal-loss settings it yields up to a 2.5x reduction in total pre-training time at the 10B-A1B scale.
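The summary above does not spell out the exact form of the multi-hot cross-entropy objective. The sketch below is one plausible reading in PyTorch, where each bag position's logits are scored against a uniformly normalized multi-hot distribution over the tokens of the next bag; the function name and the uniform normalization are assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def multi_hot_cross_entropy(logits, bag_targets):
    """One plausible MCE loss: cross-entropy against a normalized multi-hot target.

    logits:      (batch, num_bags, vocab_size), one prediction per bag position
    bag_targets: (batch, num_bags, bag_size), token ids of the *next* bag
    """
    # Mark every token that appears in the next bag, then normalize so the
    # target is a valid distribution (uniform weighting is an assumption).
    target = torch.zeros_like(logits)
    target.scatter_(-1, bag_targets, 1.0)
    target = target / target.sum(dim=-1, keepdim=True)

    log_probs = F.log_softmax(logits, dim=-1)
    return -(target * log_probs).sum(dim=-1).mean()
```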

Community

Paper author · Paper submitter

Token-Superposition Training (TST) is a simple two-phase pre-training method that improves data throughput per FLOP without modifying the model architecture, optimizer, or tokenizer. In the first phase, contiguous tokens are averaged into "superposed" embeddings and trained with a multi-hot cross-entropy loss that predicts the next bag of tokens; in the second phase, training reverts to standard next-token prediction. At the 10B-parameter, 1B-active MoE scale, TST achieves a 2.5× reduction in pre-training time to reach the same loss as the baseline, while also improving downstream performance on benchmarks such as HellaSwag, ARC, and MMLU.
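To make the superposition phase concrete, here is a minimal sketch assuming bags are formed by simple averaging of contiguous token embeddings and that each bag position is trained to predict the tokens of the following bag. The helper names (`superpose_embeddings`, `embed_tokens`, `lm_head`) and the `inputs_embeds`-style model call are illustrative, not from the paper.

```python
import torch

def superpose_embeddings(token_ids, embed_tokens, bag_size):
    """Average contiguous token embeddings into "superposed" bag embeddings.

    token_ids:    (batch, seq_len), seq_len assumed divisible by bag_size
    embed_tokens: the model's token embedding layer (nn.Embedding)
    """
    b, t = token_ids.shape
    bags = token_ids.view(b, t // bag_size, bag_size)                 # group contiguous tokens
    emb = embed_tokens(token_ids).view(b, t // bag_size, bag_size, -1)
    superposed = emb.mean(dim=2)                                      # average within each bag
    return superposed, bags

# Illustrative training step (model and lm_head are placeholders; the model is
# assumed to accept pre-computed embeddings via an inputs_embeds-style argument):
#   superposed, bags = superpose_embeddings(token_ids, model.embed_tokens, bag_size=4)
#   hidden = model(inputs_embeds=superposed)     # (batch, num_bags, d_model)
#   logits = lm_head(hidden)                     # (batch, num_bags, vocab_size)
#   loss = multi_hot_cross_entropy(logits[:, :-1], bags[:, 1:])  # each bag predicts the next
```

Because only the input bagging and the loss change during the superposition phase, reverting to standard next-token prediction in the recovery phase requires no architectural changes.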

The methods presented here are extremely similar to "Beyond Next Token Prediction: Patch-Level Training for Large Language Models", which does not appear to be cited.


We were just made aware of this prior work, which is very similar to ours, differing mainly in theory and execution. It seems to be a case of convergent research, which is unfortunate, but we will do our best to acknowledge its priority and update the paper accordingly. We still strongly invite people to try our method, as we have never seen any mention of pre-training this way at scale.


