qwen-dpo-v13
This model is a fine-tuned version of motobrew/qwen-dpo-v3 using Direct Preference Optimization (DPO) via the Unsloth library.
Training Objective
This model has been optimized with DPO to align its responses with preferred outputs, focusing on improving chain-of-thought reasoning and the quality of structured responses, using the preference dataset listed under Sources & License below.
Training Configuration
- Base model: motobrew/qwen-dpo-v3
- Method: DPO (Direct Preference Optimization)
- Epochs: 1
- Learning rate: 2e-06
- Beta: 0.05
- Max sequence length: 1024
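For reference, the configuration above roughly corresponds to a TRL DPOTrainer run patched by Unsloth. The sketch below is illustrative only and is not the exact training script: the dataset name is taken from the Sources & License section, while the LoRA settings, batch size, and gradient accumulation are assumptions not stated in this card.

# Illustrative sketch of the DPO setup described above (not the exact training script).
# LoRA rank/alpha, batch size, and gradient accumulation are assumptions.
from unsloth import FastLanguageModel, PatchDPOTrainer
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

PatchDPOTrainer()  # patch TRL's DPOTrainer with Unsloth's optimizations

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="motobrew/qwen-dpo-v3",  # base model from the configuration above
    max_seq_length=1024,
    load_in_4bit=True,                  # assumption: 4-bit loading for memory savings
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                               # assumed LoRA rank
    lora_alpha=16,                      # assumed LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("motobrew/alf-dpo-from-top-alf93-v0", split="train")

args = DPOConfig(
    beta=0.05,
    learning_rate=2e-6,
    num_train_epochs=1,
    max_length=1024,
    per_device_train_batch_size=2,      # assumption
    gradient_accumulation_steps=4,      # assumption
    output_dir="qwen-dpo-v13",
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,         # use `tokenizer=` on older TRL versions
)
trainer.train()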
Usage
You can use this model directly with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "motobrew/qwen-dpo-v13"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Test inference
prompt = "Your question here"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,        # return a dict so it can be unpacked into generate()
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
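If GPU memory is limited, the model can also be loaded in 4-bit. The snippet below is an optional variant of the loading step above, not a requirement, and assumes the bitsandbytes package is installed.

# Optional: 4-bit loading for limited VRAM (assumes `bitsandbytes` is installed).
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "motobrew/qwen-dpo-v13",
    quantization_config=bnb_config,
    device_map="auto",
)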
Sources & License (IMPORTANT)
- Training data: motobrew/alf-dpo-from-top-alf93-v0
- License: MIT (per the dataset's terms).
- Compliance: users must also comply with the original base model's license terms.
Model tree for motobrew/qwen-dpo-v13
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Fine-tuned: motobrew/qwen3-adv-comp-v34
- Fine-tuned: motobrew/qwen-dpo-v3
- This model: motobrew/qwen-dpo-v13