Official quantizations?
So far I've had very little success with any of the quantizations done by the community. I don't know if it was just a bad choice of which layers to compress, or if something else is wrong.
So I'm asking: are any official quantizations planned? A total model size of 30-35 GB would be fine for me.
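For reference, here is rough weight-size arithmetic for that budget (a sketch; assumes 30B total parameters and ignores per-block quantization scale overhead):

```python
# Approximate on-disk weight size for a 30B-parameter model
# at different precisions (scale/metadata overhead ignored).
PARAMS = 30e9

bytes_per_param = {
    "BF16": 2.0,    # 16 bits per weight
    "FP8": 1.0,     # 8 bits per weight
    "NVFP4": 0.5,   # 4 bits per weight
}

for fmt, b in bytes_per_param.items():
    gb = PARAMS * b / 1e9
    print(f"{fmt}: ~{gb:.0f} GB")  # BF16 ~60 GB, FP8 ~30 GB, NVFP4 ~15 GB
```

So FP8 alone already lands around the 30 GB mark, before accounting for KV/state cache memory.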
Best regards.
@wijjjj , please try out these:
I used the same selective-quantization recipe from the Nemotron 3 Nano Technical Report (Section 4).
Benchmarks
Calculated using NVIDIA-NeMo/Evaluator with the eval config from Nemotron-3-Super-120B. Inference via vLLM with `--mamba_ssm_cache_dtype float32` (see this discussion for more details).
| Benchmark | BF16 (reproduced) | FP8 | NVFP4 |
|---|---|---|---|
| AIME 2025 (avg@8) | 98.8 | 96.7 | 97.9 |
| AIME 2026 (avg@8) | 94.2 | 95.0 | 92.1 |
| HMMT Feb 2025 (avg@8) | 92.9 | 93.8 | 90.1 |
With 8 rollouts per problem, ±2% deviation across runs is expected. FP8 is effectively equivalent to BF16; NVFP4 is consistently 1-2% below BF16.
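As a rough check on that ±2% figure, you can estimate the run-to-run spread of an avg@8 score with a binomial standard error (a sketch; it treats every rollout as an independent pass/fail trial, which understates the variance since rollouts on the same problem are correlated):

```python
import math

def stderr_avg_at_k(p: float, problems: int, k: int) -> float:
    """Std. error of the mean score, treating all problems x k
    rollouts as independent Bernoulli(p) trials (a simplification)."""
    n = problems * k
    return math.sqrt(p * (1 - p) / n)

# AIME has 30 problems; at a ~95% solve rate with 8 rollouts each:
se = stderr_avg_at_k(0.95, problems=30, k=8)
print(f"~{100 * se:.1f}% standard error")  # ~1.4%
```

So 1-2 points of run-to-run noise at these score levels is plausible even before quantization effects.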
If anyone wants to try my quant of this one: I made an NVFP4 version for llama.cpp (https://huggingface.co/michaelw9999/Nemotron-Cascade-2-30B-A3B-NVFP4-GGUF) using my own quantizer. I haven't benchmarked it on anything yet, so feedback is welcome!
Check out this setup from Sudo su:
"i pointed hermes agent at nvidia's nemotron cascade 2 30B-A3B on a single RTX 3090 24GB. IQ4_XS quant by bartowski, 187 tok/s, 625K context. had it discover its own hardware, create an identity file, then build a full GPU marketplace UI from a single prompt."
@chankhavu , I can't get it running. Unfortunately, FP8 GEMM is broken on the Blackwell architecture. :( But thanks anyway for sharing.
@wijjjj I tested it on an RTX Pro 6000 (Blackwell), using vLLM. Here is the full command I used:
```shell
vllm serve chankhavu/Nemotron-Cascade-2-30B-A3B-FP8 \
    --max-model-len 262144 \
    --trust-remote-code \
    --mamba_ssm_cache_dtype float32 \
    --no-enable-prefix-caching \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder
```
The NVFP4 model needs additional flags:
```shell
export VLLM_USE_FLASHINFER_MOE_FP4=1
export VLLM_FLASHINFER_MOE_BACKEND=throughput

vllm serve chankhavu/Nemotron-Cascade-2-30B-A3B-NVFP4 \
    --max-model-len 262144 \
    --trust-remote-code \
    --mamba_ssm_cache_dtype float32 \
    --no-enable-prefix-caching \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder \
    --kv-cache-dtype fp8
```
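Once either server is up, you can sanity-check it through vLLM's OpenAI-compatible endpoint. A minimal chat-completions payload looks like this (a sketch; `localhost:8000` is vLLM's default port, and the prompt is just a placeholder):

```python
import json

# Minimal chat-completions request body for the vLLM
# OpenAI-compatible server started by the command above.
payload = {
    "model": "chankhavu/Nemotron-Cascade-2-30B-A3B-FP8",
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"},
    ],
    "max_tokens": 64,
}

body = json.dumps(payload)
print(body)

# POST it to the server, e.g.:
#   curl http://localhost:8000/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d @- <<< "$BODY"
```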
SGLang doesn't really work for me, and I haven't figured out why.