Qwen3.5-27B — UD-Q5_K_XL (mlx-node)

Mixed-precision quantization of Qwen/Qwen3.5-27B with a 5-bit base, targeting Apple Silicon, using the Unsloth Dynamic quantization strategy via mlx-node.

|           | Original (BF16)       | This Model                |
|-----------|-----------------------|---------------------------|
| Size      | ~51 GB                | 24 GB                     |
| Format    | SafeTensors (sharded) | SafeTensors (single file) |
| Precision | BF16 uniform          | Mixed 5/6/8-bit + BF16    |


Per-Tensor Bit Assignments (N=5)

| Weight                     | Bits        | Rationale                        |
|----------------------------|-------------|----------------------------------|
| embed_tokens               | 8-bit       | KLD ~0.15 — very low sensitivity |
| lm_head                    | 8-bit       | KLD ~0.05 — safest tensor        |
| self_attn.q/k/v_proj       | 8-bit + AWQ | KLD ~1.5–2.9, AWQ via layernorm  |
| linear_attn.in_proj_qkv/z  | 8-bit + AWQ | KLD ~2.9, AWQ via layernorm      |
| self_attn.o_proj           | BF16        | Not AWQ-correctable              |
| linear_attn.out_proj       | BF16        | KLD ~6.0 — worst tensor          |
| down_proj                  | 6-bit       | "Slightly more sensitive"        |
| gate_proj, up_proj         | 5-bit       | "Generally ok" at low bits       |

Based on Unsloth Dynamic 2.0 per-tensor KLD analysis with imatrix AWQ pre-scaling.

Usage

```ts
import { loadModel } from '@mlx-node/lm';

const model = await loadModel('./Qwen3.5-27B-UD-Q5_K_XL-mlx');
const result = await model.chat(
  [{ role: 'user', content: 'Hello!' }],
  { maxNewTokens: 2048, temperature: 0.6, enableThinking: false },
);
console.log(result.text);
```

How It Was Made

```sh
mlx convert -i Qwen3.5-27B -o Qwen3.5-27B-UD-Q5_K_XL-mlx \
  -q --q-bits 5 --q-recipe unsloth \
  --imatrix-path imatrix_unsloth.gguf
```

License

Apache 2.0 (inherited from base model).
