Qwen3-Coder-Next APEX GGUF
APEX (Adaptive Precision for EXpert Models) quantizations of Qwen3-Coder-Next.
Brought to you by the LocalAI team | APEX Project | Technical Report
Benchmark Results
Benchmarks are coming soon. For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at mudler/Qwen3.5-35B-A3B-APEX-GGUF.
What is APEX?
APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers keep higher precision, while middle layers are compressed more aggressively. The I-variants are calibrated with a diverse imatrix dataset (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
See the APEX project for full details, technical report, and scripts.
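As a rough illustration of the idea (a minimal sketch, not the actual APEX scripts), the snippet below classifies GGUF tensors by role and picks a precision per layer; the tensor-name patterns, quant-type choices, and edge/middle split are hypothetical:

```python
# Illustrative sketch only: tensor-name patterns, quant-type labels, and the
# edge/middle split are hypothetical and are not taken from the APEX scripts.

def classify_tensor(name: str) -> str:
    """Classify a GGUF tensor by its role in an MoE transformer."""
    if "ffn" in name and "exps" in name:
        return "routed_expert"
    if "shexp" in name:
        return "shared_expert"
    if any(part in name for part in ("attn_q", "attn_k", "attn_v", "attn_output")):
        return "attention"
    return "other"

def pick_quant(role: str, layer: int, n_layers: int = 48, edge: int = 5) -> str:
    """Symmetric edge gradient: the first and last `edge` layers keep higher
    precision, middle layers are compressed more aggressively."""
    is_edge = layer < edge or layer >= n_layers - edge
    if role == "routed_expert":
        return "Q5_K" if is_edge else "Q3_K"   # routed experts compress hardest
    if role in ("shared_expert", "attention"):
        return "Q6_K" if is_edge else "Q5_K"   # always-active paths stay higher
    return "Q8_0"                              # embeddings, norms, output head

print(pick_quant("routed_expert", layer=0))    # edge layer   -> Q5_K
print(pick_quant("routed_expert", layer=24))   # middle layer -> Q3_K
```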
Architecture
- Model: Qwen3-Coder-Next (Qwen3Next)
- Layers: 48
- Experts: 512 routed + shared (10 active per token)
- Total Parameters: ~80B
- Active Parameters: ~8-10B per token
- Attention: Hybrid (standard every 4th layer + linear attention)
- Context: 262K tokens
- APEX Config: 5+5 symmetric edge gradient across 48 layers (illustrated in the sketch after this list)
- Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
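Assuming "5+5" means the first five and last five of the 48 layers, the edge/middle split would look like this (illustrative only):

```python
# Hypothetical reading of the 5+5 symmetric edge gradient over 48 layers:
# the first 5 and last 5 layers are "edge" layers, the remaining 38 are "middle".
n_layers, edge = 48, 5
edge_layers = list(range(edge)) + list(range(n_layers - edge, n_layers))
middle_layers = [l for l in range(n_layers) if l not in edge_layers]
print(edge_layers)           # [0, 1, 2, 3, 4, 43, 44, 45, 46, 47]
print(len(middle_layers))    # 38
```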
Run with LocalAI
local-ai run mudler/Qwen3-Coder-Next-APEX-GGUF@Qwen3-Coder-Next-APEX-I-Balanced.gguf
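Because these are standard GGUF files, a downloaded quant should also load directly with llama.cpp, for example (assuming the file has already been downloaded locally):
llama-server -m Qwen3-Coder-Next-APEX-I-Balanced.gguf -c 32768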
Credits
APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.