How Much Is One Recurrence Worth? Iso-Depth Scaling Laws for Looped Language Models
Abstract
Research quantifies the value of depth recurrence (looping) in language models through a scaling law with a recurrence-equivalence exponent of 0.46, indicating that additional recurrence provides partial but measurable capacity gains.
We measure how much one extra recurrence is worth to a looped (depth-recurrent) language model, in equivalent unique parameters. From an iso-depth sweep of 116 pretraining runs across recurrence counts r ∈ {1, 2, 4, 8} spanning ~50× in training compute, we fit a joint scaling law L = E + A (N_once + r^φ N_rec)^{-α} + B D^{-β} and recover a new recurrence-equivalence exponent φ = 0.46. Intuitively, φ tells us whether looping a block r times is equivalent in validation loss to r unique blocks of a non-looped model (full equivalence, φ = 1) or to a single block run repeatedly with no capacity gain (φ = 0). Our φ = 0.46 sits in between, so each additional recurrence predictably increases validation loss at matched training compute. For example, at r = 4 a 410M looped model performs on par with a 580M non-looped model, but incurs the training cost of a 1B non-looped one. We demonstrate the utility of φ as a measurement tool on two probes. Truncated backpropagation lowers φ to 0.38, indicating that the loop mechanism is poorly trained under truncation, even though validation loss decreases. Conversely, hyperconnections raise φ to 0.65, a genuine capacity gain. Our method applies to any looped LM and separates true loop improvements from token-budget gains.
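To make the law concrete, here is a minimal sketch that evaluates the effective unique-parameter capacity N_once + r^φ N_rec under the fitted φ = 0.46, and approximates training compute per token as proportional to N_once + r·N_rec. The 200M/210M split between non-looped and looped parameters is an illustrative assumption (not a value reported in the abstract), chosen so the output roughly reproduces the 410M / 580M / 1B example above.

```python
# Sketch: capacity vs. compute of a looped LM under the fitted exponent phi = 0.46.
# The 200M/210M split between non-looped and looped parameters is an illustrative
# assumption, not a number reported in the paper.

PHI = 0.46  # recurrence-equivalence exponent fitted in the paper

def effective_params(n_once: float, n_rec: float, r: int, phi: float = PHI) -> float:
    """Unique parameters a non-looped model would need to match the capacity term."""
    return n_once + (r ** phi) * n_rec

def compute_equivalent_params(n_once: float, n_rec: float, r: int) -> float:
    """Non-looped model size with the same forward FLOPs per token (block runs r times)."""
    return n_once + r * n_rec

n_once, n_rec, r = 200e6, 210e6, 4  # assumed split of a 410M looped model at r = 4
print(f"performs like ~{effective_params(n_once, n_rec, r) / 1e6:.0f}M unique params")
print(f"trains at the cost of ~{compute_equivalent_params(n_once, n_rec, r) / 1e6:.0f}M")
# -> roughly 600M-equivalent capacity at the training cost of a ~1B non-looped model,
#    in line with the 410M vs. 580M vs. 1B example in the abstract.
```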
Community
We measure how much one extra recurrence is worth to a looped (depth-recurrent) language model, in equivalent unique parameters. We quantify this with iso-depth scaling-law sweeps across multiple recurrence counts. At r = 4, a 410M looped model performs on par with a 580M non-looped model, but incurs the training cost of a 1B non-looped one.
Our method quantifies how much unique-parameter capacity a specific looped LM architecture recovers. We show that the commonly applied method of truncated backpropagation through time weakens the power of loops because of inaccurate gradients, while hyperconnections between loop states substantially strengthen them.
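For readers who want to apply the measurement to their own looped LM, the sketch below fits the joint scaling law to a sweep of runs and reads off φ. It assumes run records stored as CSV rows of (N_once, N_rec, r, D, loss); the file name, column layout, and initial guesses are placeholders, and in practice parameter and token counts are usually rescaled (e.g., to millions or billions) for a numerically stable fit.

```python
# Sketch: recover the recurrence-equivalence exponent phi by fitting
#   L = E + A * (N_once + r**phi * N_rec)**(-alpha) + B * D**(-beta)
# to a sweep of (N_once, N_rec, r, D, loss) measurements. The paper fits 116
# pretraining runs; the data file and initial guesses here are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, E, A, alpha, B, beta, phi):
    n_once, n_rec, r, d = X  # X has shape (4, num_runs)
    return E + A * (n_once + r ** phi * n_rec) ** (-alpha) + B * d ** (-beta)

# Hypothetical sweep: columns are N_once, N_rec, recurrence r, tokens D, val loss.
runs = np.loadtxt("iso_depth_runs.csv", delimiter=",", skiprows=1)  # placeholder file
X = runs[:, :4].T
loss = runs[:, 4]

p0 = [1.5, 400.0, 0.3, 400.0, 0.3, 0.5]  # rough initial guesses for E, A, alpha, B, beta, phi
params, _ = curve_fit(scaling_law, X, loss, p0=p0, maxfev=20000)
print(f"fitted phi = {params[-1]:.2f}")  # ~0.46 for standard loops in the paper
```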