Qwen2.5-Omni-7B LoRA (hiugaai, Cantonese)

LoRA adapter fine-tuned on the multimodal_yue_benchmark dataset (Cantonese audio + text), speaker hiugaai.

Base model

Load Qwen/Qwen2.5-Omni-7B as model_name_or_path, then apply this repository on top as the PEFT adapter.

Training

  • Framework: LLaMA-Factory
  • Method: LoRA (r=8), bf16, DeepSpeed ZeRO-2
  • Dataset: hiugaai_train (single-speaker split; sibling splits wanlung_train and hiumaan_train cover the other speakers)
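For readers unfamiliar with the method above: LoRA keeps the base weights frozen and learns a low-rank update, W' = W + (alpha/r) · B A with rank r (here r=8). A minimal sketch of the idea in NumPy, with illustrative dimensions and an assumed scaling factor alpha=16 (the card does not state alpha):

```python
import numpy as np

# Toy dimensions; r=8 matches the training config, the rest are illustrative.
d_in, d_out, r, alpha = 64, 64, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-initialized

# The adapter contributes a rank-r update, scaled by alpha/r.
delta = (alpha / r) * B @ A
W_adapted = W + delta

# With B initialized to zero, the adapter is a no-op before training.
assert np.allclose(W_adapted, W)
print(delta.shape)  # (64, 64)
```

Only A and B (2 · r · d parameters per adapted matrix) are trained and shipped in this repo; the 7B base weights stay untouched, which is why the adapter must be loaded on top of Qwen/Qwen2.5-Omni-7B.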

Inference (Transformers + PEFT)

from transformers import AutoProcessor, Qwen2_5OmniForConditionalGeneration
from peft import PeftModel
import torch

base = "Qwen/Qwen2.5-Omni-7B"
adapter = "J017athan/Qwen2.5-Omni-7B-4.8k-hiugaai"

processor = AutoProcessor.from_pretrained(base, trust_remote_code=True)

# Load the base model in bf16, then attach the LoRA adapter on top.
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter)

Alternatively, point LLaMA-Factory's adapter_name_or_path at this repository.
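A sketch of what the corresponding LLaMA-Factory config fragment might look like; the template name is an assumption (check the LLaMA-Factory template registry for the correct Qwen2.5-Omni entry):

```yaml
model_name_or_path: Qwen/Qwen2.5-Omni-7B
adapter_name_or_path: J017athan/Qwen2.5-Omni-7B-4.8k-hiugaai
finetuning_type: lora
template: qwen2_omni   # assumed template name; verify against your LLaMA-Factory version
trust_remote_code: true
```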

