# Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Zeroclaw-GGUF

Derivative of Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking, fine-tuned on `data/zeroclaw_training_data.jsonl` with QLoRA and quantized using MagicQuant hybrid evolutionary per-tensor search.
## Base Model
This is a derivative of Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking. All credit for the base model architecture and weights goes to the original authors. The base model's license applies to this derivative.
## Quantization Method
Quantized using MagicQuant hybrid evolutionary per-tensor quantization, based on the methodology by magiccodingman:
- Tensors are classified into sensitivity groups (Embeddings, Head, Query, Key, Output, FFN Up/Down, MoE Experts, Router)
- An evolutionary search finds the optimal quantization type per group, balancing size vs. perplexity
- Q4/Q5/Q6 tier targets are produced with different size-quality tradeoffs
- Small-row tensors and sensitivity-critical layers (embeddings, output head, router) are kept at F32/F16/BF16
- This is NOT a uniform quantization -- each tensor group gets its own optimal type
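MagicQuant's actual search is not reproduced here, but the core idea — mutate per-group quant-type assignments and keep mutations that improve a combined size/quality objective — can be sketched in a few lines. The candidate types, sensitivity groups, and penalty numbers below are illustrative assumptions, not measured values:

```python
import random

# Hypothetical candidates: quant type -> (bits per weight, assumed perplexity penalty).
# The real pipeline measures perplexity empirically; these numbers are placeholders.
CANDIDATES = {
    "Q4_K": (4.5, 0.030),
    "Q5_K": (5.5, 0.015),
    "Q6_K": (6.6, 0.006),
    "F16":  (16.0, 0.000),
}
GROUPS = ["embeddings", "head", "query", "key", "output", "ffn_up", "ffn_down", "router"]
SENSITIVE = {"embeddings", "head", "router"}  # pinned to high precision, never mutated

def cost(assignment, size_weight=1.0, ppl_weight=100.0):
    """Scalar objective: fewer bits and lower perplexity penalty are both better."""
    bits = sum(CANDIDATES[t][0] for t in assignment.values())
    ppl = sum(CANDIDATES[t][1] for t in assignment.values())
    return size_weight * bits + ppl_weight * ppl

def evolve(generations=200, seed=0):
    rng = random.Random(seed)
    # Start from uniform Q4, with sensitivity-critical groups pinned to F16.
    best = {g: ("F16" if g in SENSITIVE else "Q4_K") for g in GROUPS}
    best_cost = cost(best)
    for _ in range(generations):
        child = dict(best)
        g = rng.choice([g for g in GROUPS if g not in SENSITIVE])
        child[g] = rng.choice(list(CANDIDATES))  # mutate one group's quant type
        c = cost(child)
        if c < best_cost:  # keep the mutation only if the objective improves
            best, best_cost = child, c
    return best

assignment = evolve()
```

Shifting `size_weight` versus `ppl_weight` is what produces the different Q4/Q5/Q6 tier targets from the same search.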
## Training Details
| Parameter | Value |
|---|---|
| Method | QLoRA with completion-only loss masking |
| LoRA rank (r) | 32 |
| LoRA alpha | 64 |
| LoRA dropout | 0.05 |
| Epochs | 3 |
| Learning rate | 0.0002 |
| LR scheduler | cosine |
| Batch size | 2 (effective 8 with gradient accumulation) |
| Optimizer | paged_adamw_8bit |
| Training sequence length | 4096 |
| Precision | BF16 |
| Dataset | data/zeroclaw_training_data.jsonl |
| Hardware | AMD Ryzen AI Max+ 395 (Strix Halo), 128 GB unified memory (GTT), ROCm |
Completion-only loss: Only assistant response turns contribute to the training loss. System and user turns are masked, so the model learns to generate responses rather than memorizing prompts.
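The masking step can be sketched with plain token lists. The `-100` sentinel is the ignore index used by PyTorch-style cross-entropy; the role names and toy tokenizer below are illustrative only:

```python
IGNORE_INDEX = -100  # PyTorch cross-entropy skips targets with this value

def mask_labels(turns, tokenize):
    """Build (input_ids, labels) where only assistant tokens contribute to the loss.

    `turns` is a list of (role, text) pairs; `tokenize` maps text to token ids.
    """
    input_ids, labels = [], []
    for role, text in turns:
        ids = tokenize(text)
        input_ids.extend(ids)
        if role == "assistant":
            labels.extend(ids)                        # train on the response
        else:
            labels.extend([IGNORE_INDEX] * len(ids))  # mask system/user tokens
    return input_ids, labels

# Toy tokenizer: one "token" per character code, purely for illustration.
toy_tokenize = lambda s: [ord(c) for c in s]
ids, labels = mask_labels(
    [("system", "be brief"), ("user", "hi"), ("assistant", "hello")],
    toy_tokenize,
)
```

Every position still appears in `input_ids` (the model attends to the full conversation), but gradient only flows through the assistant spans.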
## GGUF Files
| File | Size | Quant |
|---|---|---|
| Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q4_K_M.gguf | 28.7 GB | Q4 hybrid |
| Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q5_K_M.gguf | 36.0 GB | Q5 hybrid |
| Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q6_K.gguf | 71.8 GB | Q6 hybrid |
## Other Files
| File | Size |
|---|---|
| lora/README.md | 0 MB |
| lora/adapter_config.json | 0 MB |
| lora/adapter_model.safetensors | 956 MB |
| lora/chat_template.jinja | 0 MB |
| lora/tokenizer.json | 20 MB |
| lora/tokenizer_config.json | 0 MB |
## Usage
### LM Studio
- Download the GGUF file of your preferred quantization tier
- Place it in your LM Studio models directory
- Load the model in LM Studio -- it will auto-detect the chat template
- The model supports the base model's full context length
### llama.cpp
```bash
# Interactive chat
llama-cli -m Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q5_K_M.gguf -c 8192 --chat-template chatml -cnv

# Single prompt
llama-cli -m Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q5_K_M.gguf -c 8192 -p "Your prompt here"

# Server mode
llama-server -m Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q5_K_M.gguf -c 8192 --port 8080
```
### Python (llama-cpp-python)
```python
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-Q5_K_M.gguf",
    n_ctx=8192,
)
output = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(output["choices"][0]["message"]["content"])
```
## Caveats
- The base model's license (Apache-2.0) applies to all derivative files
- This is a personal fine-tune, not an official release from the base model authors
- Quality depends on the training data and may not generalize to all tasks
- Quantization reduces precision -- verify outputs for your specific use case
- The hybrid quantization assigns different precision to different tensor groups, which means quality characteristics may differ from uniform quantizations
## Limitations
- Training data used sequences up to 4096 tokens; the model retains the base model's full context window
- Performance on tasks not represented in the training data may be degraded
- Quantized models may exhibit subtle differences from the full-precision fine-tune
- This model inherits any limitations and biases present in the base model
Generated with Foundry + MagicQuant