(Latest llama.cpp commit 7789, with corrected quants.)

GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF

Specialized and enhanced GGUF quants for the new GLM-4.7-Flash, a 30B-A3B MoE (mixture of experts) model.

[ https://huggingface.co/zai-org/GLM-4.7-Flash ]

This model can be run on GPU(s) and/or CPU because only 4 experts are activated per token (approx. 2B parameters active).
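Because only a small fraction of the parameters are active per token, the model is usable even when partially or fully on CPU. A minimal loading sketch using llama-cpp-python (the file name below is hypothetical; substitute whichever quant you downloaded):

```python
from llama_cpp import Llama

# Hypothetical local file name -- use the actual quant you downloaded.
llm = Llama(
    model_path="GLM-4.7-Flash-NEO-CODE-IQ4_NL.gguf",
    n_gpu_layers=-1,  # -1 = offload all layers to GPU; 0 = CPU only
)
```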

Default Settings (Most Tasks)

temperature: 1.0
top-p: 0.95
max new tokens: 131072

repetition penalty: 1.1 (if you get repetition issues) or 1.0 (off)
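A minimal sketch of how these defaults map onto llama-cpp-python sampling arguments (continuing from the loading example above; the prompt is just a placeholder):

```python
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    temperature=1.0,
    top_p=0.95,
    repeat_penalty=1.1,  # set to 1.0 to turn repetition penalty off
    max_tokens=4096,     # raise this if you need very long outputs
)
print(out["choices"][0]["message"]["content"])
```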

You might also try the GLM 4.6 settings (from Unsloth):

temperature = 0.8
top_p = 0.6 (recommended)
top_k = 2 (recommended)
max_generate_tokens = 16,384
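The same call with the GLM 4.6 profile would look roughly like this (again a sketch, not a verified optimum):

```python
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts routing in two paragraphs."}],
    temperature=0.8,
    top_p=0.6,
    top_k=2,
    max_tokens=16384,
)
```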

That being said, I suggest a minimum context of 8K-16K, as final outputs (post-thinking) can be long and detailed; in a number of cases the model has been observed "polishing" the final output one or more times within the output section.

(The model can handle 200K context natively, without RoPE scaling.)
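In llama-cpp-python terms, this context advice maps onto the n_ctx argument (same hypothetical file name as above):

```python
# 16K is a comfortable working minimum for long thinking plus a detailed
# final answer; per the note above, the model supports up to ~200K context
# without RoPE scaling, memory permitting.
llm = Llama(
    model_path="GLM-4.7-Flash-NEO-CODE-IQ4_NL.gguf",
    n_gpu_layers=-1,
    n_ctx=16384,
)
```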

Quants General:

Quants and imatrices were computed using the latest llama.cpp (commit 7789, Jan 21 2026), which contains specific fixes for this model.

Quants made prior to this commit (as well as imatrix generation) performed poorly; re-quantization and re-imatrix generation were required.

Also note there are currently issues with Flash Attention causing low token generation speed (Flash Attention is offloaded to CPU in some cases). Disable Flash Attention until this issue is resolved and the fix makes its way through the llama.cpp / AI pipeline.
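If you run the model through llama-cpp-python, builds that expose the flash_attn flag let you disable it explicitly at load time (a sketch under that assumption):

```python
llm = Llama(
    model_path="GLM-4.7-Flash-NEO-CODE-IQ4_NL.gguf",
    n_gpu_layers=-1,
    n_ctx=16384,
    flash_attn=False,  # work around the current Flash Attention slowdown
)
```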

UNCENSORED QUANTS:

https://huggingface.co/DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF

Specialized Quants

Specialized quants (IQ4_NL, Q5_1, Q4_1, Q8_0) are precision-balanced to address a specific tensor issue, present in all layers, that requires a specific quant type.

Other "normal" quants will also perform very well.

Quant Enhancements:

The imatrix uses the NEO and Code datasets by DavidAU: a dual imatrix (two imatrices generated separately) to improve model performance.

All quants (specialized and "normal") are also enhanced with a 16-bit (full precision) output tensor to further improve model performance.

The output tensor affects 10-20% of the fine detail of the model's output, in both thinking and final (output) generation.

[ more to come ]
