# Qwen3.5-27B — UD-Q6_K_XL (mlx-node)

A mixed-precision quantization of Qwen/Qwen3.5-27B (6-bit base, with selected tensors kept at 8-bit or BF16) for Apple Silicon via mlx-node.

|           | Original (BF16) | This Model           |
|-----------|-----------------|----------------------|
| Size      | ~51 GB          | 27 GB                |
| Precision | BF16 uniform    | Mixed 6/8-bit + BF16 |
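As a quick sanity check on the sizes above (illustrative arithmetic only; the 27e9 parameter count is inferred from the model name, not stated exactly in this card):

```python
# Rough consistency check of the size table (assumes ~27e9 weights).
GIB = 2**30
n_params = 27e9

# BF16 = 2 bytes per weight.
bf16_gib = n_params * 2 / GIB
print(f"{bf16_gib:.1f} GiB")   # ~50.3 GiB, consistent with "~51 GB"

# Working backwards from the quantized size to effective bits per weight.
avg_bits = 27 * GIB * 8 / n_params
print(f"{avg_bits:.1f} bits")  # ~8.6 effective bits per weight
```

The ~8.6 effective bits per weight reflects the mix: 6-bit MLP projections dominated by 8-bit attention tensors plus a few BF16 tensors and quantization scales.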


## Per-Tensor Bit Assignments (N=6)

| Weight                     | Bits        |
|----------------------------|-------------|
| embed_tokens               | 8-bit       |
| lm_head                    | 8-bit       |
| self_attn.q/k/v_proj       | 8-bit + AWQ |
| linear_attn.in_proj_qkv/z  | 8-bit + AWQ |
| self_attn.o_proj           | BF16        |
| linear_attn.out_proj       | BF16        |
| down_proj                  | 8-bit       |
| gate_proj, up_proj         | 6-bit       |
Quantization recipe based on Unsloth Dynamic 2.0. Licensed under Apache 2.0.
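The assignment table above can be expressed as a simple suffix lookup. This is an illustrative sketch only: the real recipe lives in the quantization tooling, the full tensor names and the default for unlisted tensors are assumptions, and `None` is used here to mean "keep BF16".

```python
# Per-tensor bit assignment mirroring the table above (illustrative sketch).
# None means the tensor is left in BF16 rather than quantized.
BITS = {
    "embed_tokens": 8,
    "lm_head": 8,
    "self_attn.q_proj": 8, "self_attn.k_proj": 8, "self_attn.v_proj": 8,
    "linear_attn.in_proj_qkv": 8, "linear_attn.in_proj_z": 8,
    "self_attn.o_proj": None,      # kept BF16 per the table
    "linear_attn.out_proj": None,  # kept BF16 per the table
    "down_proj": 8,
    "gate_proj": 6, "up_proj": 6,
}

def bits_for(tensor_name: str):
    """Return quantization bits for a tensor name, or None to keep BF16."""
    for suffix, bits in BITS.items():
        if tensor_name.endswith(suffix):
            return bits
    return 8  # hypothetical default for tensors the table does not list

print(bits_for("model.layers.0.mlp.gate_proj"))     # 6
print(bits_for("model.layers.3.self_attn.o_proj"))  # None (BF16)
```

Suffix matching is used because checkpoint tensor names carry layer prefixes (e.g. `model.layers.N.`) while the table lists only the trailing component.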