shadow-peft-models
Pretrained weights and data for the ShadowPEFT paper.
ShadowPEFT is a parameter-efficient fine-tuning (PEFT) framework that augments a frozen large base model with a lightweight, centralized, pretrainable, and detachable shadow network. The shadow network runs in parallel with the base model and injects learned corrections into each decoder layer, enabling effective adaptation with a small fraction of the parameters.
Because the shadow module is architecturally decoupled from the backbone, it can be trained, stored, and deployed as a standalone component, which is useful for edge computing and modular deployment.
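The paper's exact shadow architecture is not reproduced here, but as a rough mental model, a per-layer correction could look like the following PyTorch sketch. All names and sizes below are illustrative assumptions, not the ShadowPEFT implementation:

import torch
import torch.nn as nn

class ShadowCorrection(nn.Module):
    # Illustrative only: a small bottleneck that maps a decoder layer's
    # hidden state to an additive correction, h = h + correction(h).
    def __init__(self, hidden_size: int, shadow_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, shadow_size)
        self.up = nn.Linear(shadow_size, hidden_size)
        nn.init.zeros_(self.up.weight)  # zero-init: training starts from
        nn.init.zeros_(self.up.bias)    # the frozen base model's behavior

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.up(torch.tanh(self.down(hidden_states)))

Zero-initializing the up-projection is a common adapter-style choice: the correction is a no-op at step zero, so adaptation departs smoothly from the pretrained model.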
To use ShadowPEFT, first install the package:
pip install shadow-peft
Then wrap a base model with a shadow adapter:
from transformers import AutoModelForCausalLM, AutoTokenizer
from shadow_peft import get_shadow_model, ShadowConfig
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
# Wrap the base model with a Shadow adapter (1-layer implicit shadow)
model = get_shadow_model(model, ShadowConfig(num_shadow_layers=1))
model.print_trainable_parameters()
# Only shadow-related parameters are trainable; base model is frozen.
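Because only the shadow parameters require gradients, you can verify the parameter budget and persist the detachable module with plain PyTorch. The snippet below assumes shadow parameter names contain the substring "shadow"; inspect model.named_parameters() to confirm the actual naming in your version:

import torch

# Count trainable vs. total parameters (roughly what
# print_trainable_parameters reports).
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

# Save only the shadow weights as a standalone artifact
# (the name filter is an assumption; adjust to your parameter names).
shadow_state = {k: v.cpu() for k, v in model.state_dict().items() if "shadow" in k}
torch.save(shadow_state, "shadow_adapter.pt")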
If you find ShadowPEFT useful, please cite:

@article{li2026shadowpeft,
title={ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning},
author={Xianming Li and Zongxi Li and Tsz-fung Andrew Lee and Jing Li and Haoran Xie and Qing Li},
year={2026},
eprint={2604.19254},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2604.19254},
}