Gemma-4-31B Heretic

Quality: quantized (8-bit, group size 32, 8.802 bpw)

This is an abliterated (uncensored) version of google/gemma-4-31B-it, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method (with row-norm preservation).

Performance

| Metric | This model | Original model (google/gemma-4-31B-it) |
|---|---|---|
| KL divergence | 0.0434 | 0 (by definition) |
| Refusals | 15/100 | 99/100 |
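The KL divergence row above measures how far the abliterated model's next-token distribution drifts from the original model's on the same prompts (0 means identical behavior). A minimal sketch of how such a per-token KL could be computed from two models' raw logits (the function and values here are illustrative, not Heretic's actual evaluation code):

```python
import math

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two next-token distributions given as raw logits."""
    def softmax(logits):
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]

    p = softmax(p_logits)
    q = softmax(q_logits)
    # sum_i p_i * log(p_i / q_i); softmax guarantees q_i > 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give KL = 0, matching the "0 (by definition)" entry
print(kl_divergence([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

In practice this would be averaged over many token positions on harmless prompts, so a small value like 0.0434 indicates the ablation left ordinary behavior largely intact.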

Abliteration parameters

| Parameter | Value |
|---|---|
| start_layer_index | 1 |
| end_layer_index | 59 |
| preserve_good_behavior_weight | 0.8438 |
| steer_bad_behavior_weight | 0.0002 |
| overcorrect_relative_weight | 1.0760 |
| neighbor_count | 15 |
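Abliteration works by editing weight matrices in a range of layers (here layers 1 through 59) to suppress a learned "refusal" direction. Heretic's ARA method and its weighting scheme are more involved than this, but the core idea of directional ablation with row-norm preservation can be sketched as a rank-1 edit (function name, weight handling, and dimensions below are illustrative assumptions):

```python
import numpy as np

def ablate_direction(W, v, weight=1.0):
    """Remove the component of each row of W along direction v,
    then rescale rows back to their original norms (row-norm preservation)."""
    v = v / np.linalg.norm(v)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Project out the refusal direction: W' = W - weight * (W v) v^T
    W_ablated = W - weight * np.outer(W @ v, v)
    # Rescale each row to its original norm so layer magnitudes are preserved
    new_norms = np.linalg.norm(W_ablated, axis=1, keepdims=True)
    return W_ablated * (orig_norms / np.maximum(new_norms, 1e-12))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # toy weight matrix
v = rng.normal(size=8)        # toy "refusal" direction
W2 = ablate_direction(W, v)
# Rows keep their norms while losing their component along v
print(np.allclose(np.linalg.norm(W2, axis=1), np.linalg.norm(W, axis=1)))  # → True
```

Parameters like preserve_good_behavior_weight and overcorrect_relative_weight tune how strongly this edit is applied per layer so that refusals drop (15/100 vs. 99/100) while KL divergence on normal prompts stays low.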

Source

This model was converted to MLX format from coder3101/gemma-4-31B-it-heretic using mlx-vlm version 0.4.4.

Model size: 33B params
Tensor types: BF16, U32
Format: MLX (safetensors)
Model tree for TheCluster/Gemma-4-31B-Heretic-MLX-8bit