Step-3.5-Flash APEX GGUF
APEX (Adaptive Precision for EXpert Models) quantizations of Step-3.5-Flash.
Brought to you by the LocalAI team | APEX Project | Technical Report
Benchmark Results
Benchmarks coming soon. For reference, see the APEX benchmarks for the Qwen3.5-35B-A3B architecture at mudler/Qwen3.5-35B-A3B-APEX-GGUF.
What is APEX?
APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers keep higher precision, while middle layers are compressed more aggressively. The I-variants are calibrated with an importance matrix (imatrix) built from a diverse mix of chat, code, reasoning, tool-calling, agentic traces, and Wikipedia text.
See the APEX project for full details, technical report, and scripts.
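The sketch below illustrates the general idea only; it is not the actual APEX tooling. Tensors are bucketed by role from their GGUF names, and a symmetric edge gradient keeps the first and last layers at higher precision than the middle. The tensor-name patterns, the edge width, and the quant-type table are assumptions made for illustration; the real per-tensor rules live in the APEX repository.

```python
# Illustrative sketch of an APEX-style quantization plan (not the real scripts).
# Quant type names are standard llama.cpp GGUF types; the name patterns and the
# plan table below are assumptions, not the exact rules APEX applies.

def tensor_role(name: str) -> str:
    """Classify a GGUF tensor name into a coarse role bucket."""
    if "_exps" in name:      # routed-expert FFN weights, e.g. blk.7.ffn_up_exps.weight
        return "routed_expert"
    if "_shexp" in name:     # shared-expert FFN weights
        return "shared_expert"
    if "attn" in name:       # attention projections
        return "attention"
    return "other"           # embeddings, norms, output head, ...

def layer_zone(layer: int, n_layers: int, edge: int = 5) -> str:
    """Symmetric edge gradient: first/last `edge` layers are 'edge', rest 'middle'."""
    return "edge" if layer < edge or layer >= n_layers - edge else "middle"

# Hypothetical precision table: edge layers get higher precision,
# middle layers get more aggressive compression on the routed experts.
QUANT_PLAN = {
    ("routed_expert", "edge"):   "Q5_K",
    ("routed_expert", "middle"): "Q4_K",
    ("shared_expert", "edge"):   "Q8_0",
    ("shared_expert", "middle"): "Q6_K",
    ("attention",     "edge"):   "Q8_0",
    ("attention",     "middle"): "Q6_K",
    ("other",         "edge"):   "Q8_0",
    ("other",         "middle"): "Q8_0",
}

def choose_quant(name: str, layer: int, n_layers: int = 45) -> str:
    return QUANT_PLAN[(tensor_role(name), layer_zone(layer, n_layers))]

if __name__ == "__main__":
    for name, layer in [("blk.2.attn_q.weight", 2),
                        ("blk.22.ffn_up_exps.weight", 22),
                        ("blk.44.ffn_down_shexp.weight", 44)]:
        print(f"{name:32s} layer {layer:2d} -> {choose_quant(name, layer)}")
```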
Architecture
- Model: Step-3.5-Flash (Step3p5)
- Layers: 45 (3 dense + 42 MoE)
- Experts: 288 routed + 1 shared (top-8 active per token)
- Total Parameters: ~196B
- Active Parameters: ~11B per token
- Context: 256K tokens
- MTP: 4-token multi-token-prediction head (usable for speculative decoding)
- APEX Config: 5+5 symmetric edge gradient across 45 layers
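A quick arithmetic check of the figures above, purely illustrative: which layer indices the 5+5 edge gradient covers, and how sparse a forward pass is. Assigning the edges to layer indices 0-4 and 40-44 is an assumption about how APEX counts layers, not something stated on this card.

```python
# Sanity-check the numbers above; all values are taken from this card.

N_LAYERS = 45                              # 3 dense + 42 MoE
EDGE = 5                                   # 5+5 symmetric edge gradient
edge_layers = [l for l in range(N_LAYERS) if l < EDGE or l >= N_LAYERS - EDGE]

ROUTED, ACTIVE_ROUTED, SHARED = 288, 8, 1  # 288 routed + 1 shared, top-8 active
TOTAL_B, ACTIVE_B = 196, 11                # approximate parameters, in billions

print("edge layers:       ", edge_layers)                  # 0-4 and 40-44
print("middle layers:     ", N_LAYERS - len(edge_layers))  # 35
print("experts per token: ", ACTIVE_ROUTED + SHARED, "of", ROUTED + SHARED)
print(f"active parameters:  ~{ACTIVE_B}B of ~{TOTAL_B}B "
      f"(~{100 * ACTIVE_B / TOTAL_B:.1f}% of weights used per token)")
```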
Run with LocalAI
```bash
local-ai run mudler/Step-3.5-Flash-APEX-GGUF@Step-3.5-Flash-APEX-I-Balanced.gguf
```
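Once the model is loaded, LocalAI exposes an OpenAI-compatible API (typically at http://localhost:8080/v1). A minimal chat request with the official openai Python client is sketched below; the model name and port are assumptions, so match them to whatever your LocalAI instance reports for the loaded GGUF.

```python
# Minimal chat request against a local LocalAI instance.
# base_url, api_key, and the model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="Step-3.5-Flash-APEX-I-Balanced.gguf",  # use the name LocalAI lists
    messages=[{"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}],
)
print(resp.choices[0].message.content)
```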
Credits
APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.