Rethinking OPD (Collection)
This collection includes the models used in the paper "Rethinking On-Policy Distillation of Large Language Models: Phenomenology, Mechanism, and Recipe" (3 items).
Qwen3-4B-Base-GRPO is a post-RL checkpoint trained with the verl framework. It starts from Qwen3-4B-Base and applies GRPO on the DAPO-Math-17k-Processed dataset for mathematical reasoning and problem-solving.
This model is associated with the paper:
Rethinking On-Policy Distillation of Large Language Models: Phenomenology, Mechanism, and Recipe
Paper link: https://arxiv.org/abs/2604.13016
This model is obtained by applying GRPO reinforcement learning to Qwen3-4B-Base with verl. Training targets mathematical reasoning performance, and the resulting checkpoint is used in the paper's on-policy distillation experiments.
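For context on the training objective: GRPO forgoes a learned value model and instead normalizes each sampled completion's reward against the other completions in its sampling group, and the configuration below disables the reward model (reward_model.enable: false), which suggests rule-based answer checking. The minimal sketch below illustrates both ideas; the function names and the \boxed{} answer format are illustrative assumptions, not code from verl or the paper.

import re
from statistics import mean, stdev

def extract_boxed_answer(text):
    """Pull the last \\boxed{...} answer out of a completion (simplified: no nested braces)."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

def rule_based_reward(completion, gold_answer):
    """Binary reward: 1.0 if the extracted answer matches the reference, else 0.0."""
    pred = extract_boxed_answer(completion)
    return 1.0 if pred is not None and pred.strip() == gold_answer.strip() else 0.0

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO's critic-free advantage: normalize each completion's reward by the
    mean and standard deviation of its own sampling group."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 4 completions sampled for one prompt, two of which answer correctly.
rewards = [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))  # correct samples get positive advantage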
Training configuration (key settings)
Algorithm: GRPO, with the reward model disabled (reward_model.enable: false)
Training data: DAPO-Math-17k-Processed (datasets/DAPO-Math-17k-Processed/DAPO-Math.parquet)
Evaluation benchmarks: AIME25, AMC23, AIME24
Loss aggregation: token-mean
Learning rate: 1e-6

Example usage with Transformers:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "lllyx/Qwen3-4B-Base-GRPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
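A short generation example, continuing from the loading snippet above. The prompt and sampling settings are illustrative assumptions, not the paper's evaluation configuration.

prompt = "Solve for x: 3x + 5 = 20. Put the final answer in \\boxed{}."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,  # illustrative generation budget
    do_sample=True,      # assumed sampling settings, not taken from the paper
    temperature=0.7,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))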
If you use this model, please consider citing the related paper:
@article{li2026rethinking,
title={Rethinking On-Policy Distillation of Large Language Models: Phenomenology, Mechanism, and Recipe},
author={Li, Yaxuan and Zuo, Yuxin and He, Bingxiang and Zhang, Jinqian and Xiao, Chaojun and Qian, Cheng and Yu, Tianyu and Gao, Huan-ang and Yang, Wenkai and Liu, Zhiyuan and Ding, Ning},
journal={arXiv preprint arXiv:2604.13016},
year={2026}
}
Base model: Qwen/Qwen3-4B-Base