# Qwen3.5-text-9B-GGUF
GGUF quants of techwithsergiu/Qwen3.5-text-9B — the text-only bf16 derivative of Qwen/Qwen3.5-9B.
The visual tower was removed before conversion. All text-backbone weights are identical to the original — no retraining, no weight changes, no quality loss on text tasks.
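The actual conversion was done with qwen35-toolkit (see Conversion below), but the core of the visual-tower removal step can be sketched as filtering the checkpoint's state dict by key prefix. The `"visual."` prefix is an assumption based on Qwen VLM checkpoint layouts, not something verified against this exact model:

```python
# Sketch only: drop vision-tower tensors from a checkpoint state dict.
# Assumes vision weights live under a "visual." key prefix (hypothetical here).
def drop_visual_tower(state_dict, prefix="visual."):
    """Return a copy of the state dict without the vision-tower tensors."""
    return {k: v for k, v in state_dict.items() if not k.startswith(prefix)}

# Toy example with placeholder values standing in for tensors:
weights = {
    "model.layers.0.self_attn.q_proj.weight": 1,
    "visual.patch_embed.proj.weight": 2,
    "lm_head.weight": 3,
}
text_only = drop_visual_tower(weights)
print(sorted(text_only))  # only the text-backbone keys remain
```

Because the text-backbone tensors pass through untouched, this kind of filtering changes the container contents but not the weights themselves, which is why text quality is unaffected.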
## Quants
| File | Type | Size | Notes |
|---|---|---|---|
| Qwen3.5-text-9B-Q8_0.gguf | Q8_0 | ~53% of f16 | near-lossless — for high-quality inference |
| Qwen3.5-text-9B-Q6_K.gguf | Q6_K | ~41% of f16 | excellent quality, good balance with f16 |
| Qwen3.5-text-9B-Q5_K_M.gguf | Q5_K_M | ~37% of f16 | very good quality, smaller than Q6_K |
| Qwen3.5-text-9B-Q4_K_M.gguf | Q4_K_M | ~31% of f16 | ✅ recommended — best size/quality balance |
| Qwen3.5-text-9B-Q4_K_S.gguf | Q4_K_S | ~30% of f16 | optional — slightly smaller, slightly lower quality |
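To turn the percentages above into rough download sizes: a 9B-parameter model in f16 is about 18 GB (9e9 parameters × 2 bytes, ignoring metadata). That baseline is an estimate, not a measured file size, but it gives a useful ballpark per quant:

```python
# Rough on-disk size estimates for each quant, using the ratios from the
# table above and an *assumed* ~18 GB f16 baseline for a 9B model.
F16_GB = 18.0  # assumption: 9e9 params x 2 bytes, metadata ignored
ratios = {
    "Q8_0": 0.53,
    "Q6_K": 0.41,
    "Q5_K_M": 0.37,
    "Q4_K_M": 0.31,
    "Q4_K_S": 0.30,
}
for quant, r in ratios.items():
    print(f"{quant}: ~{F16_GB * r:.1f} GB")
```

Under that assumption, the recommended Q4_K_M lands around 5–6 GB, which is why it fits comfortably on most consumer GPUs.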
## Model family
| Model | Type | Base model |
|---|---|---|
| Qwen/Qwen3.5-9B | f16 · VLM · source | — |
| techwithsergiu/Qwen3.5-9B-bnb-4bit | BNB NF4 · VLM | Qwen/Qwen3.5-9B |
| techwithsergiu/Qwen3.5-text-9B | bf16 · text-only | Qwen/Qwen3.5-9B |
| techwithsergiu/Qwen3.5-text-9B-bnb-4bit | BNB NF4 · text-only | Qwen3.5-text-9B |
| techwithsergiu/Qwen3.5-text-9B-GGUF | GGUF quants | Qwen3.5-text-9B |
The GGUF repo is derived from the text-only f16 model — same weights, different container format. `base_model` points to the f16 text variant to keep the VLM and text lineages distinct on the Hub.
## Inference

### llama.cpp

```bash
./llama.cpp/build/bin/llama-cli \
  -m Qwen3.5-text-9B-Q4_K_M.gguf \
  -p "What is the capital of Romania?" \
  -n 256
```
### LM Studio

Load any `.gguf` file from this repo directly in LM Studio.
Recommended quant: **Q4_K_M**.
## Thinking mode

Qwen3.5 supports an optional chain-of-thought `<think>` block before the answer.
Thinking is enabled by default in llama.cpp.

- **Note:** `--chat-template-kwargs '{"enable_thinking":...}'` is deprecated — do not use.
- **Known issue:** `--reasoning off` is accepted but does not actually disable thinking. Track the bug at llama.cpp issues.
- **Workaround:** use `--reasoning-budget 0` — this reliably disables the `<think>` block.
```bash
# Thinking OFF — direct answer (workaround: --reasoning-budget 0)
./llama.cpp/build/bin/llama-cli \
  -m Qwen3.5-text-9B-Q4_K_M.gguf \
  --reasoning-budget 0 \
  -p "What is the capital of Romania?" \
  -n 256

# Thinking ON — default, no flag needed
./llama.cpp/build/bin/llama-cli \
  -m Qwen3.5-text-9B-Q4_K_M.gguf \
  -p "What is 17 × 34?" \
  -n 1024
```
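When thinking is left on, the `<think>…</think>` block arrives as part of the generated text, so a client that only wants the final answer has to strip it itself. A minimal sketch, assuming the model emits at most one well-formed block at the start of the output (`strip_think` is a hypothetical helper, not part of llama.cpp):

```python
import re

def strip_think(text: str) -> str:
    """Remove a <think>...</think> block from model output, if present."""
    return re.sub(r"<think>.*?</think>\s*", "", text, count=1, flags=re.DOTALL).strip()

sample = "<think>17 x 34 = 17 x 30 + 17 x 4 = 510 + 68</think>\n578"
print(strip_think(sample))  # -> 578
```

For malformed or nested blocks a real client would want a more defensive parser; this regex only covers the common single-block case.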
## Pipeline diagram
## From fine-tuned adapter to GGUF
If you have a LoRA adapter trained with qwen-qlora-train, merge it first, then convert to GGUF:
```bash
# 1. Merge adapter into f16 weights
qlora-merge \
  --base Qwen/Qwen3.5-9B \
  --adapter adapters/<run_name> \
  --output merged/qwen35-text-9B-sft-f16

# 2. Convert merged model to GGUF (requires llama.cpp)
python llama.cpp/convert_hf_to_gguf.py merged/qwen35-text-9B-sft-f16 \
  --outtype f16 \
  --outfile merged/qwen35-text-9B-sft-F16.gguf

# 3. Quantize
./llama.cpp/build/bin/llama-quantize \
  merged/qwen35-text-9B-sft-F16.gguf \
  merged/qwen35-text-9B-sft-Q4_K_M.gguf \
  Q4_K_M
```
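Step 3 can be repeated to produce the full set of quant levels listed in the table from the same merged f16 file. A dry-run sketch (paths follow the example above; remove the leading `echo` to actually run the quantization):

```shell
# Dry run: print one llama-quantize command per quant level in the table.
F16=merged/qwen35-text-9B-sft-F16.gguf
for q in Q8_0 Q6_K Q5_K_M Q4_K_M Q4_K_S; do
  echo ./llama.cpp/build/bin/llama-quantize "$F16" "merged/qwen35-text-9B-sft-${q}.gguf" "$q"
done
```

Each quant is produced independently from the f16 source, so re-running a single level later does not require redoing the merge or conversion steps.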
The full post-training workflow is documented in qwen-qlora-train → Post-merge workflow.
## Conversion

Converted using qwen35-toolkit — a Python toolkit for BNB quantization, visual-tower removal, verification, and HF Hub publishing of Qwen3.5 models.
## Acknowledgements
Based on Qwen/Qwen3.5-9B by the Qwen Team. If you use this model in research, please cite the original:
```bibtex
@misc{qwen3.5,
  title  = {{Qwen3.5}: Towards Native Multimodal Agents},
  author = {{Qwen Team}},
  month  = {February},
  year   = {2026},
  url    = {https://qwen.ai/blog?id=qwen3.5}
}
```