Length Value Model: Scalable Value Pretraining for Token-Level Length Modeling
Abstract
The Length Value Model (LenVM), a token-level framework, estimates remaining generation length by treating it as a value estimation problem, enabling improved length control and efficiency in autoregressive models.
Tokens serve as the fundamental unit of computation in modern autoregressive models, and generation length directly influences both inference cost and reasoning performance. Despite its importance, existing approaches lack fine-grained length modeling, operating primarily at the coarse sequence level. We introduce the Length Value Model (LenVM), a token-level framework that models the remaining generation length. By formulating length modeling as a value estimation problem and assigning a constant negative reward to each generated token, LenVM predicts a bounded, discounted return that serves as a monotone proxy for the remaining generation horizon. This formulation yields supervision that is annotation-free, dense, unbiased, and scalable. Experiments on LLMs and VLMs demonstrate that LenVM provides a highly effective signal at inference time. On the LIFEBench exact length matching task, applying LenVM to a 7B model improves the length score from 30.9 to 64.8, significantly outperforming frontier closed-source models. Furthermore, LenVM enables continuous control over the trade-off between performance and efficiency: on GSM8K with a budget of 200 tokens, LenVM maintains 63% accuracy, compared to 6% for a token-budget baseline. It also accurately predicts total generation length from the prompt boundary. Finally, LenVM's token-level values offer an interpretable view of generation dynamics, revealing how specific tokens shift reasoning toward shorter or longer regimes. These results demonstrate that generation length can be effectively modeled as a token-level value signal, highlighting LenVM's potential both as a general framework for length modeling and as a length-specific value signal for future RL training. Code is available at https://github.com/eric-ai-lab/Length-Value-Model.
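To make the value formulation concrete, here is a minimal sketch of the discounted-return targets it implies. The discount factor `GAMMA`, the per-token reward of -1, and the function names are illustrative assumptions for this sketch, not the released implementation.

```python
import math

GAMMA = 0.99  # assumed discount factor; the abstract does not specify one


def discounted_return(remaining_tokens: int, gamma: float = GAMMA) -> float:
    """Discounted return at a position with `remaining_tokens` tokens left,
    under a constant reward of -1 per generated token:

        G_t = sum_{k=0}^{remaining-1} gamma^k * (-1)
            = -(1 - gamma**remaining) / (1 - gamma)

    The return is bounded in (-1/(1-gamma), 0] and strictly decreasing in
    the remaining horizon, so it is a monotone proxy for remaining length.
    """
    return -(1.0 - gamma ** remaining_tokens) / (1.0 - gamma)


def remaining_from_value(value: float, gamma: float = GAMMA) -> float:
    """Invert the return to recover the implied remaining token count."""
    return math.log(1.0 + (1.0 - gamma) * value) / math.log(gamma)


# Dense, annotation-free supervision: every position in a length-H
# completion gets a target computed purely from its distance to the end.
H = 300  # example total generation length
targets = [discounted_return(H - t) for t in range(H)]
```

Because each target depends only on a token's distance from the end of the sequence, the supervision is dense and annotation-free, which is what lets the training signal scale with ordinary generation data.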
Community
Try our demo to see the impact of each token on the expected length.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Unleashing Implicit Rewards: Prefix-Value Learning for Distribution-Level Optimization (2026)
- Reward Models Are Secretly Value Functions: Temporally Coherent Reward Modeling (2026)
- Shorter Thoughts, Same Answers: Difficulty-Scaled Segment-Wise RL for CoT Compression (2026)
- SPPO: Sequence-Level PPO for Long-Horizon Reasoning Tasks (2026)
- TARo: Token-level Adaptive Routing for LLM Test-time Alignment (2026)
- TRIMS: Trajectory-Ranked Instruction Masked Supervision for Diffusion Language Models (2026)
- MARS: Enabling Autoregressive Models Multi-Token Generation (2026)
