Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
This repository contains the merged 16-bit supervised fine-tuning checkpoint and GGUF exports for a dense Qwen 3.5 reasoning model based on `unsloth/Qwen3.5-27B`. It is adapted from the official Unsloth A100 (80GB) notebook and trained on a cleaned subset of `nohurry/Opus-4.6-Reasoning-3000x-filtered`.
Repository contents
- Merged 16-bit model weights for Transformers / vLLM-style deployment
- GGUF exports for llama.cpp-compatible runtimes
- GGUF quantizations uploaded by the notebook: `q4_k_m`, `q8_0`, `q5_k_m`
Training data
Raw dataset statistics:
- Total rows in `train`: 2,326
- Columns: `id`, `problem`, `thinking`, `solution`, `difficulty`, `category`, `timestamp`, `hash`
Cleaning and formatting applied in the notebook:
- Removed 18 rows with an empty `problem`, `thinking`, or `solution`
- Removed 92 meta / incomplete-prompt responses
- Removed 33 duplicate `id` rows
- Final training set size: 2,183 conversations
- Category mix after cleaning: 2,052 `math`, 131 `code`
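The counts above are internally consistent (2,326 − 18 − 92 − 33 = 2,183). A minimal sketch of the same filtering logic on a few invented stand-in rows (not the notebook's exact code; the field names match the dataset columns):

```python
# Toy rows illustrating the cleaning steps; the data itself is invented.
rows = [
    {"id": "a", "problem": "1+1?", "thinking": "add", "solution": "2"},
    {"id": "a", "problem": "1+1?", "thinking": "add", "solution": "2"},  # duplicate id
    {"id": "b", "problem": "",     "thinking": "x",   "solution": "y"},  # empty problem
    {"id": "c", "problem": "2+2?", "thinking": "add", "solution": "4"},
]

# Step 1: drop rows where problem, thinking, or solution is empty.
rows = [r for r in rows if r["problem"] and r["thinking"] and r["solution"]]

# Step 2: drop duplicate ids, keeping the first occurrence.
seen, deduped = set(), []
for r in rows:
    if r["id"] not in seen:
        seen.add(r["id"])
        deduped.append(r)

print([r["id"] for r in deduped])  # → ['a', 'c']
```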
Each training example is converted to Qwen chat format, with the assistant target built as:

```
<think>
{thinking}
</think>
{solution}
```
Loss is applied only to assistant tokens via `train_on_responses_only`.
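A sketch of how one dataset row might be assembled into the assistant target described above (the helper and the sample row are illustrative, not the notebook's exact code):

```python
def build_messages(row):
    """Convert one dataset row into Qwen-style chat messages, with the
    assistant turn holding a <think>-wrapped trace followed by the solution."""
    target = f"<think>\n{row['thinking']}\n</think>\n{row['solution']}"
    return [
        {"role": "user", "content": row["problem"]},
        {"role": "assistant", "content": target},
    ]

row = {"problem": "What is 3*4?", "thinking": "3*4 = 12", "solution": "12"}
msgs = build_messages(row)
print(msgs[1]["content"])
```

With `train_on_responses_only`, only the tokens of this assistant turn contribute to the loss; the user prompt is masked out.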
Training procedure
- Base model: `unsloth/Qwen3.5-27B`
- Reference notebook: Unsloth `Qwen_3_5_27B_A100(80GB).ipynb`
- Frameworks: Unsloth, TRL, Transformers, PEFT
- Task: supervised fine-tuning for long-form reasoning / answer generation
- `max_seq_length`: 4096
- LoRA rank: 8
- LoRA target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`, `out_proj`
- `per_device_train_batch_size`: 4
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 1
- `learning_rate`: 2e-4
- `warmup_steps`: 20
- `optim`: `adamw_8bit`
- `weight_decay`: 0.001
- `lr_scheduler_type`: `linear`
- Seed: 3407
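The hyperparameters above can be expressed as TRL/PEFT configuration roughly as follows. This is a sketch, not the notebook's exact cell; argument names follow `peft.LoraConfig` and `trl.SFTConfig`, and in the Unsloth workflow `max_seq_length` is typically set when loading the model rather than here:

```python
from peft import LoraConfig
from trl import SFTConfig

# LoRA adapter configuration matching the listed rank and target modules.
lora_config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "out_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer / schedule settings from the list above.
# Effective batch size: 4 * 2 = 8 sequences per optimizer step.
training_args = SFTConfig(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    learning_rate=2e-4,
    warmup_steps=20,
    optim="adamw_8bit",
    weight_decay=0.001,
    lr_scheduler_type="linear",
    seed=3407,
)
```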
Intended use
This repository is intended for research and experimentation on reasoning-style text generation, especially mixed math and code-oriented prompts that benefit from multi-step intermediate reasoning. The merged checkpoint is suitable for Transformers / vLLM-style serving, and the GGUF files are intended for llama.cpp-compatible runtimes.
Limitations
- Quantized GGUF variants may behave differently from the merged 16-bit checkpoint.
- The model was trained on explicit reasoning traces and may emit visible `<think>` sections or long intermediate reasoning.
- No formal evaluation or benchmark scores are included in this release.
Usage
Transformers / merged checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo_id = "Ayodele01/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else torch.float16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Solve: If 3x + 5 = 20, what is x?"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=False,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
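Because the model was trained to emit its reasoning inside `<think>` tags, downstream callers may want only the final answer. One way to strip the leading reasoning section (a simple regex sketch; the tag format follows the training target described above):

```python
import re

def strip_think(text: str) -> str:
    """Remove a <think>...</think> block, keeping only the final answer."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

sample = "<think>\n3x = 15, so x = 5\n</think>\nx = 5"
print(strip_think(sample))  # → x = 5
```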
GGUF / llama.cpp:
```bash
./llama-cli -m Qwen3.5-27B-opus46-reasoning.Q4_K_M.gguf -p "Solve: If 3x + 5 = 20, what is x?" -n 512
```
Provenance
- Base model: unsloth/Qwen3.5-27B
- Dataset: nohurry/Opus-4.6-Reasoning-3000x-filtered
- Reference notebook source: unslothai/notebooks