---
license: other
base_model: stepfun-ai/Step-3.5-Flash
tags:
  - gguf
  - quantized
  - apex
  - moe
  - mixture-of-experts
  - step
---

# Step-3.5-Flash APEX GGUF

**APEX (Adaptive Precision for EXpert Models)** quantizations of [Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash).

**Brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team** | [APEX Project](https://github.com/mudler/apex-quant) | [Technical Report](https://github.com/mudler/apex-quant/blob/main/paper/APEX_Technical_Report.pdf)

## Benchmark Results

Benchmarks are coming soon. For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at [mudler/Qwen3.5-35B-A3B-APEX-GGUF](https://huggingface.co/mudler/Qwen3.5-35B-A3B-APEX-GGUF).

## What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers receive higher precision, while middle layers are compressed more aggressively. I-variants use an importance matrix (imatrix) calibrated on diverse data (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
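The role-plus-gradient idea can be sketched in a few lines. This is a hypothetical illustration, not the apex-quant implementation: the tensor-name patterns follow common GGUF conventions, and the role/precision mapping and quantization types chosen here are assumptions for the sake of the example.

```python
# Hypothetical sketch of the APEX idea: classify each tensor by role, then
# pick a quantization type from a layer-wise gradient. The mapping below is
# illustrative, not the actual apex-quant policy.

def classify(tensor_name: str) -> str:
    """Map a GGUF-style tensor name to an APEX role (assumed name patterns)."""
    if ".ffn_" in tensor_name and "_exps" in tensor_name:
        return "routed_expert"
    if "_shexp" in tensor_name:
        return "shared_expert"
    if ".attn_" in tensor_name:
        return "attention"
    return "other"

def precision_for(role: str, layer: int, n_layers: int, edge: int = 5) -> str:
    """Edge layers keep higher precision; middle layers compress harder."""
    at_edge = layer < edge or layer >= n_layers - edge
    if role == "attention":
        return "Q6_K"                         # attention kept conservative
    if role == "shared_expert":
        return "Q5_K" if at_edge else "Q4_K"  # always-active, so less aggressive
    if role == "routed_expert":
        return "Q4_K" if at_edge else "Q3_K"  # bulk of the parameters
    return "Q5_K"

# A routed-expert tensor in a middle layer gets the aggressive type:
role = classify("blk.22.ffn_down_exps.weight")
print(role, precision_for(role, layer=22, n_layers=45))  # routed_expert Q3_K
```

Because the routed experts hold most of the parameters but only a few fire per token, compressing them hardest yields the bulk of the size savings at modest quality cost.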

See the [APEX project](https://github.com/mudler/apex-quant) for full details, the technical report, and scripts.

## Architecture

- **Model**: Step-3.5-Flash (Step3p5)
- **Layers**: 45 (3 dense + 42 MoE)
- **Experts**: 288 routed + 1 shared (top-8 active per token)
- **Total Parameters**: ~196B
- **Active Parameters**: ~11B per token
- **Context**: 256K tokens
- **MTP**: 4-token speculative decoding head
- **APEX Config**: 5+5 symmetric edge gradient across 45 layers
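The numbers above fit together as follows. A minimal sketch, assuming a "5+5 symmetric edge gradient" means the first 5 and last 5 of the 45 layers form the high-precision edges (the layout is my reading of the config name, not confirmed by the source):

```python
# Illustrative layout of a 5+5 symmetric edge gradient over 45 layers
# (assumed interpretation, not the apex-quant tooling).
N_LAYERS, EDGE = 45, 5

edge_layers = [i for i in range(N_LAYERS) if i < EDGE or i >= N_LAYERS - EDGE]
middle_layers = [i for i in range(N_LAYERS) if EDGE <= i < N_LAYERS - EDGE]
print(len(edge_layers), len(middle_layers))  # 10 35

# Rough sanity check of the total-vs-active gap: with 288 routed experts and
# top-8 routing, only ~8/288 of the routed-expert weights fire per token,
# which is how ~196B total parameters shrink to ~11B active.
print(f"{8 / 288:.1%} of routed-expert weights active per token")  # 2.8%
```

The 10 edge layers are where quantization error compounds fastest (input embedding side and final logits side), which is why they keep the higher-precision types.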

## Run with LocalAI

```bash
local-ai run mudler/Step-3.5-Flash-APEX-GGUF@Step-3.5-Flash-APEX-I-Balanced.gguf
```

## Credits

APEX is brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team. Developed through human-driven, AI-assisted research. Built on [llama.cpp](https://github.com/ggerganov/llama.cpp).