Qwen3.6-27B-GGUF

Qwen3.6-27B, from Alibaba's Qwen team, is the latest dense multimodal language model in the Qwen3.6 series. It has 27 billion parameters, a hybrid Gated DeltaNet architecture (64 layers, hidden dimension 5120, and a 248K vocabulary covering 201+ languages), and an integrated vision encoder. The model is optimized for agentic coding, repository-level reasoning, and stable real-world utility; it supports multi-token prediction and a native context length of 262,144 tokens, extensible to 1M+ tokens via YaRN. It introduces "Thinking Preservation" to maintain reasoning context across iterative development and delivers flagship-level performance across benchmarks: 77.2 on SWE-bench Verified, 83.9 on LiveCodeBench v6, and 94.1 on AIME26 for language and coding tasks, with multimodal strength evidenced by 82.9 on MMMU, 87.4 on MathVista mini, and 87.7 on VideoMME. Together, these make it a highly capable tool for both complex programming and sophisticated visual understanding.
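As a rough illustration of the YaRN context extension mentioned above, the rope scaling factor a YaRN-capable runtime would apply is the ratio of the target context to the native context. This is a minimal sketch; the function name is hypothetical, and real YaRN implementations also tune attention and frequency-interpolation parameters beyond this simple ratio.

```python
# Minimal sketch: YaRN scaling factor = target context / native context.
# NATIVE_CTX comes from the model card; everything else is illustrative.
NATIVE_CTX = 262_144  # native context length of Qwen3.6-27B

def yarn_scale_factor(target_ctx: int, native_ctx: int = NATIVE_CTX) -> float:
    """Rope scaling factor needed to stretch the context to target_ctx."""
    if target_ctx <= native_ctx:
        return 1.0  # within the native window, no scaling needed
    return target_ctx / native_ctx

# Extending to 1M (2**20 = 1,048,576) tokens requires a 4x stretch.
print(yarn_scale_factor(1_048_576))
```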

Model Files

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3.6-27B.BF16.gguf | BF16 | 53.8 GB | Download |
| Qwen3.6-27B.F16.gguf | F16 | 53.8 GB | Download |
| Qwen3.6-27B.F32.gguf | F32 | 108 GB | Download |
| Qwen3.6-27B.Q8_0.gguf | Q8_0 | 28.6 GB | Download |
| Qwen3.6-27B.mmproj-bf16.gguf | mmproj-bf16 | 931 MB | Download |
| Qwen3.6-27B.mmproj-f16.gguf | mmproj-f16 | 931 MB | Download |
| Qwen3.6-27B.mmproj-f32.gguf | mmproj-f32 | 1.84 GB | Download |
| Qwen3.6-27B.mmproj-q8_0.gguf | mmproj-q8_0 | 629 MB | Download |
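The main-model file sizes above follow roughly from parameter count times bits per weight. A minimal Python sanity check (the ~8.5 bits-per-weight figure for Q8_0 is an approximation that accounts for per-block scales, not exact GGUF accounting):

```python
# Rough size estimate: n_params * bits_per_weight / 8 bits-per-byte, in GB.
# N_PARAMS is the nominal count from the model card; bpw values are approximate.
N_PARAMS = 27e9  # 27 billion parameters

def estimate_size_gb(bits_per_weight: float, n_params: float = N_PARAMS) -> float:
    """Approximate GGUF file size in GB for a given quantization."""
    return n_params * bits_per_weight / 8 / 1e9

# F32 at 32 bpw -> ~108 GB, matching the table;
# Q8_0 at ~8.5 bpw -> ~28.7 GB, close to the listed 28.6 GB.
print(estimate_size_gb(32), estimate_size_gb(8.5))
```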

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[Graph: quantization quality comparison by ikawrakow]
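A common rule of thumb when picking among the quants listed above is to take the largest file that fits your memory budget with headroom left for the KV cache and activations. A hypothetical helper using the file sizes from the table (the 4 GB overhead figure is an assumption, not a measured value):

```python
# Hypothetical quant selector; sizes (GB) come from the file table above.
FILE_SIZES_GB = {"Q8_0": 28.6, "BF16": 53.8, "F16": 53.8, "F32": 108.0}

def pick_quant(budget_gb: float, overhead_gb: float = 4.0):
    """Return the largest quant whose file fits in budget_gb, reserving
    overhead_gb for KV cache and activations; None if nothing fits."""
    usable = budget_gb - overhead_gb
    fitting = [(size, name) for name, size in FILE_SIZES_GB.items()
               if size <= usable]
    if not fitting:
        return None
    return max(fitting)[1]  # largest file that fits

print(pick_quant(36))   # a 36 GB budget leaves room for Q8_0
print(pick_quant(120))  # a 120 GB budget can hold the full F32 file
```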

Downloads last month: 5,680
Format: GGUF
Model size: 27B params
Architecture: qwen35
Available precisions: 8-bit, 16-bit, 32-bit


Model tree for prithivMLmods/Qwen3.6-27B-GGUF

Base model: Qwen/Qwen3.6-27B (quantized into 280 models, including this one)
