EmbodiedMidtrain: Bridging the Gap between Vision-Language Models and Vision-Language-Action Models via Mid-training
Abstract
EmbodiedMidtrain addresses the gap between vision-language models and vision-language-action models by using a mid-training approach that selects VLA-aligned data to improve downstream robot manipulation performance.
Vision-Language-Action Models (VLAs) inherit their visual and linguistic capabilities from Vision-Language Models (VLMs), yet most VLAs are built from off-the-shelf VLMs that are not adapted to the embodied domain, limiting their downstream performance. In this work, we propose EmbodiedMidtrain to bridge the gap between VLMs and VLAs. We first characterize the data distribution gap between them, showing that VLA data occupy compact regions that are largely separated from the broader VLM distribution, while the degree of alignment varies substantially both across and within VLM data sources. Then, we build a mid-training data engine that leverages a lightweight learnable proximity estimator to select the most VLA-aligned candidates from a large VLM pool, and mid-train the VLM on this curated mixture before downstream VLA fine-tuning. Experiments on three robot manipulation benchmarks show that mid-training consistently improves performance across different VLM backbones, achieving results competitive with expert VLAs and off-the-shelf VLMs with larger model scales and training budgets. Further analysis reveals that mid-training provides a stronger initialization for VLA fine-tuning, with gains emerging from the earliest steps and widening throughout training. Moreover, the data engine captures both dataset-level and sample-level alignment signals, favoring spatial reasoning over text-centric tasks while preserving the diversity of the VLM data. We will release all code, data, and models for future research.
Community
Excited to share our work, EmbodiedMidtrain!
Most VLAs are built on off-the-shelf VLMs that were never adapted to the embodied domain. We characterize this as a data distribution gap: VLA data form compact clusters largely separated from the broader VLM distribution, and the gap is heterogeneous — some VLM samples are inherently much closer to the VLA domain than others.
We propose a lightweight mid-training data engine: a learnable proximity estimator on frozen VLM features scores each VLM sample by its closeness to the VLA distribution, and the top-scoring subset is used to mid-train the VLM before VLA fine-tuning. Across CALVIN ABC-D, SimplerEnv-Bridge, and LIBERO-10, our 1.1B mid-trained model becomes competitive with VLAs built on 3-8× larger backbones. The same curated mixture also transfers across architectures (InternVL3.5-1B → Qwen3VL-2B), and the gains emerge from the earliest fine-tuning steps and widen over time.
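For intuition, here is a minimal, simplified sketch of the data-engine idea (not our actual implementation: the MLP estimator, the binary domain-classifier objective, and the fixed top-fraction selection below are placeholder assumptions, and names like `ProximityEstimator` are illustrative only):

```python
import torch
import torch.nn as nn


class ProximityEstimator(nn.Module):
    """Scores a frozen-VLM feature vector by its closeness to the VLA domain."""

    def __init__(self, feat_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Unnormalized proximity logit; higher means closer to the VLA domain.
        return self.net(feats).squeeze(-1)


def train_estimator(vla_feats, pool_feats, epochs: int = 5, lr: float = 1e-3):
    """Fit the estimator to separate VLA-domain features (label 1)
    from general VLM-pool features (label 0)."""
    est = ProximityEstimator(vla_feats.shape[-1])
    opt = torch.optim.Adam(est.parameters(), lr=lr)
    feats = torch.cat([vla_feats, pool_feats])
    labels = torch.cat([torch.ones(len(vla_feats)), torch.zeros(len(pool_feats))])
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(est(feats), labels)
        loss.backward()
        opt.step()
    return est


def select_vla_aligned(est, pool_feats, select_fraction: float = 0.2):
    """Return indices of the top-scoring fraction of the VLM pool,
    which form the curated mixture used for mid-training."""
    with torch.no_grad():
        scores = est(pool_feats)
    k = int(select_fraction * len(pool_feats))
    return torch.topk(scores, k).indices
```

The key design choice is that the estimator operates on frozen features, so scoring the full VLM pool is cheap relative to mid-training itself.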
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- RoboAlign: Learning Test-Time Reasoning for Language-Action Alignment in Vision-Language-Action Models (2026)
- StarVLA-α: Reducing Complexity in Vision-Language-Action Systems (2026)
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing (2026)
- Enhancing Linguistic Generalization of VLA: Fine-Tuning OpenVLA via Synthetic Instruction Augmentation (2026)
- Pri4R: Learning World Dynamics for Vision-Language-Action Models with Privileged 4D Representation (2026)
- LinguDistill: Recovering Linguistic Ability in Vision-Language Models via Selective Cross-Modal Distillation (2026)
- PokeVLA: Empowering Pocket-Sized Vision-Language-Action Model with Comprehensive World Knowledge Guidance (2026)