# Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-GGUF

## Model Description

This repository contains GGUF-format conversions of ponytang3/Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored.

## Available Files
| File | Quantization | Description |
|---|---|---|
| Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-bf16.gguf | BF16 | Full precision, suitable for further quantization |
| Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-Q4_K_M.gguf | Q4_K_M | 4-bit quantized, good quality/size balance |
## Usage with llama.cpp

```bash
# Full precision
./llama-cli -m Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-bf16.gguf -p "Your prompt here" -n 512

# Quantized (recommended for most users)
./llama-cli -m Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-Q4_K_M.gguf -p "Your prompt here" -n 512
```
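llama.cpp also ships a server binary that exposes an OpenAI-compatible HTTP API, which is convenient for tooling that expects a chat endpoint. A minimal sketch; the port and context size here are illustrative choices, not requirements:

```bash
# Serve the quantized model over HTTP (OpenAI-compatible API)
./llama-server -m Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-Q4_K_M.gguf \
  --port 8080 -c 4096

# From another terminal, send a chat request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```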
## Usage with Ollama

Create a `Modelfile`:

```
FROM ./Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-Q4_K_M.gguf
```

Then run:

```bash
ollama create my-model -f Modelfile
ollama run my-model
```
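If you want to bake sampling settings and a system prompt into the Ollama model rather than setting them per session, the `Modelfile` can be extended with `PARAMETER` and `SYSTEM` directives. The values below are illustrative defaults, not settings tuned for this model:

```
FROM ./Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-Q4_K_M.gguf

# Sampling settings (illustrative; adjust to taste)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Optional system prompt baked into the model
SYSTEM "You are a helpful assistant."
```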
## Quantization Details

- Tool: llama.cpp (`convert_hf_to_gguf.py` + `llama-quantize`)
- Pipeline: HF safetensors -> bf16 GGUF -> Q4_K_M GGUF
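The two-step pipeline above can be sketched as follows. The local paths are assumptions, and the commands are run from a llama.cpp checkout/build:

```bash
# Step 1: convert the HF safetensors checkpoint to a full-precision bf16 GGUF
python convert_hf_to_gguf.py /path/to/Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored \
  --outtype bf16 \
  --outfile Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-bf16.gguf

# Step 2: quantize the bf16 GGUF down to Q4_K_M
./llama-quantize \
  Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-bf16.gguf \
  Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-Uncensored-Q4_K_M.gguf \
  Q4_K_M
```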
## Model Tree for ponytang3/Qwen3.5-35B-A3B-Opus-Reasoning-Distilled-GGUF

- Base model: Qwen/Qwen3.5-35B-A3B-Base
- Finetuned: Qwen/Qwen3.5-35B-A3B