MiniMax-M2.7 optimized for MLX. A mixed-precision quant that balances speed, memory, and accuracy.

Usage

# Start server at http://localhost:8080/chat/completions
uvx --from mlx-lm mlx_lm.server \
  --host 127.0.0.1 \
  --port 8080 \
  --model spicyneuron/MiniMax-M2.7-MLX-4.6bit
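
The server speaks the OpenAI chat-completions format, so any OpenAI-compatible client can talk to it. Below is a minimal sketch using Python's requests library against the URL above; the prompt and max_tokens value are arbitrary examples.

import requests

# Minimal chat request against the local mlx_lm.server instance started above
response = requests.post(
    "http://localhost:8080/chat/completions",
    json={
        "model": "spicyneuron/MiniMax-M2.7-MLX-4.6bit",
        "messages": [{"role": "user", "content": "Summarize MLX in one sentence."}],
        "max_tokens": 256,
    },
)
print(response.json()["choices"][0]["message"]["content"])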

Methodology

Quantized with an mlx-lm fork, drawing inspiration from Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX's quantization options differ from llama.cpp's, but the principles are the same (a rough sketch follows the list below):

  • Sensitive layers like MoE routing, attention, and output embeddings get higher precision
  • More tolerant layers like MoE experts get lower precision
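
For reference, stock mlx-lm exposes a quant_predicate hook on its Python convert API that can return per-layer bit widths. Whether the fork used for this model works exactly this way is not documented here, and the layer-name patterns, bit widths, and source path below are illustrative assumptions rather than this model's actual recipe.

from mlx_lm import convert

def mixed_precision(path, module, config):
    # Illustrative split only: keep routing, attention, and embeddings at
    # higher precision; quantize more tolerant layers (e.g. MoE experts) lower.
    if any(k in path for k in ("embed_tokens", "lm_head", "gate", "self_attn")):
        return {"bits": 6, "group_size": 64}
    if hasattr(module, "to_quantized"):
        return {"bits": 4, "group_size": 64}
    return False  # leave layers that cannot be quantized untouched

convert(
    hf_path="path/to/MiniMax-M2.7",   # placeholder for the source weights
    mlx_path="./MiniMax-M2.7-mixed",
    quantize=True,
    quant_predicate=mixed_precision,
)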

Benchmarks

| metric | mlx-community_MiniMax-M2.7-4bit | baa-ai_MiniMax-M2.7-RAM-155GB-MLX | 4.6 bit (this model) |
|---|---|---|---|
| bpw | 4.501 | 5.4278 | 4.5987 |
| peak memory, GB (1024 prompt / 512 gen) | 129.632 | 156.051 | 132.442 |
| prompt tok/s (1024 tokens) | 739.996 ± 1.565 | 708.147 ± 0.818 | 740.409 ± 0.268 |
| generation tok/s (512 tokens) | 48.703 ± 0.116 | 40.253 ± 0.077 | 48.038 ± 0.099 |
| perplexity | 9.120 ± 0.047 | 8.835 ± 0.045 | 4.462 ± 0.019 |
| hellaswag | 0.504 ± 0.011 | 0.509 ± 0.011 | 0.505 ± 0.011 |
| piqa | 0.786 ± 0.010 | 0.787 ± 0.010 | 0.793 ± 0.009 |
| winogrande | 0.636 ± 0.014 | 0.661 ± 0.013 | 0.645 ± 0.013 |

Tested on a Mac Studio M3 Ultra with:

mlx_lm.perplexity --sequence-length 2048 --seed 123
mlx_lm.benchmark --prompt-tokens 1024 --generation-tokens 512 --num-trials 5
mlx_lm.evaluate --tasks hellaswag --seed 123 --num-shots 0 --limit 2000
mlx_lm.evaluate --tasks piqa --seed 123 --num-shots 0 --limit 2000
mlx_lm.evaluate --tasks winogrande --seed 123 --num-shots 0 --limit 2000