---
license: other
base_model: nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16
tags:
- gguf
- quantized
- apex
- moe
- mixture-of-experts
- nvidia
- nemotron
- mamba
- hybrid
---

# Nemotron-3-Nano-30B-A3B APEX GGUF

**APEX (Adaptive Precision for EXpert Models)** quantizations of [NVIDIA-Nemotron-3-Nano-30B-A3B](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16).

**Brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team** | [APEX Project](https://github.com/mudler/apex-quant) | [Technical Report](https://github.com/mudler/apex-quant/blob/main/paper/APEX_Technical_Report.pdf)

## Benchmark Results

Benchmarks coming soon. For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at [mudler/Qwen3.5-35B-A3B-APEX-GGUF](https://huggingface.co/mudler/Qwen3.5-35B-A3B-APEX-GGUF).

## What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, while middle layers get more aggressive compression. I-variants use diverse imatrix calibration data (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).

See the [APEX project](https://github.com/mudler/apex-quant) for full details, the technical report, and scripts.

## Architecture

- **Model**: NVIDIA-Nemotron-3-Nano-30B-A3B (NemotronH)
- **Layers**: 52 (23 Mamba-2, 23 MoE, 6 GQA attention)
- **Experts**: 128 routed + 1 shared (6 active per token)
- **Total Parameters**: 30B
- **Active Parameters**: ~3.5B per token
- **APEX Config**: 5+5 symmetric edge gradient across 52 layers

## Run with LocalAI

```bash
local-ai run mudler/Nemotron-3-Nano-30B-A3B-APEX-GGUF@Nemotron-3-Nano-30B-A3B-APEX-I-Balanced.gguf
```

## Credits

APEX is brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team. Developed through human-driven, AI-assisted research. Built on [llama.cpp](https://github.com/ggerganov/llama.cpp).
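To make the "5+5 symmetric edge gradient" idea concrete, here is a minimal illustrative sketch of how a layer-wise precision plan could be derived. This is an assumption-laden toy, not the actual APEX scripts: the `precision_for_layer` helper and the specific quant-type names (`Q6_K`, `Q4_K`) are hypothetical examples chosen only to show the edge/middle split.

```python
# Toy sketch of a layer-wise precision gradient (NOT the real APEX code).
# With a 5+5 symmetric edge gradient over 52 layers, the first 5 and last 5
# layers keep higher precision; the middle layers are compressed harder.
# Quant-type names below are hypothetical placeholders.

def precision_for_layer(layer: int, n_layers: int = 52, edge: int = 5) -> str:
    """Return an example quant type for a layer under an edge/middle split."""
    if layer < edge or layer >= n_layers - edge:
        return "Q6_K"  # edge layers: higher precision
    return "Q4_K"      # middle layers: more aggressive compression

# Build the full 52-layer plan and check the symmetric edges.
plan = [precision_for_layer(i) for i in range(52)]
assert plan[:5] == ["Q6_K"] * 5
assert plan[-5:] == ["Q6_K"] * 5
assert all(q == "Q4_K" for q in plan[5:-5])
```

In practice the real pipeline also branches on tensor role (routed expert vs. shared expert vs. attention); see the APEX project scripts for the actual per-tensor logic.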