---
library_name: transformers
license: other
license_name: nvidia-nemotron-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- pytorch
base_model: nvidia/NVIDIA-Nemotron-Nano-9B-v2
datasets:
- nvidia/Nemotron-CC-v2
- nvidia/Nemotron-Post-Training-Dataset-v2
- nvidia/Nemotron-Science-v1
- nvidia/Nemotron-Instruction-Following-Chat-v1
- nvidia/Nemotron-Agentic-v1
- nvidia/Nemotron-Competitive-Programming-v1
- nvidia/Nemotron-Math-Proofs-v1
- nvidia/Nemotron-RL-Agentic-Conversational-Tool-Use-Pivot-v1
- nvidia/Nemotron-RL-instruction_following
- nvidia/Nemotron-RL-agent-calendar_scheduling
- nvidia/Nemotron-RL-instruction_following-structured_outputs
track_downloads: true
---
# NVIDIA-Nemotron-3-Nano-4B-BF16

**Model Developer:** NVIDIA Corporation

**Model Dates:** Dec 2025 - Jan 2026

**Data Freshness:** September 2024

The pretraining data has a cutoff date of September 2024.

## Model Overview

NVIDIA-Nemotron-3-Nano-4B-BF16 is a small language model (SLM) trained from scratch by NVIDIA and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response.

The model's reasoning capabilities can be controlled via a system prompt. If the user prefers the final answer without intermediate reasoning traces, the model can be configured to skip them, albeit with a slight decrease in accuracy on harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions to queries and tasks.

The model has been compressed from NVIDIA-Nemotron-Nano-9B-v2 using the Nemotron [Elastic](https://arxiv.org/pdf/2511.16664) framework. Details of the parent model NVIDIA-Nemotron-Nano-9B-v2 can be found in the [Nemotron-H tech report](https://arxiv.org/abs/2504.03624). The model uses a hybrid architecture consisting primarily of Mamba-2 and MLP layers combined with just four Attention layers.

The supported languages include: English.

Improved using Qwen.

This model is ready for commercial use.

## License/Terms of Use

Governing Terms: Use of this model is governed by the [NVIDIA Nemotron Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/).
### Evaluation Results

We evaluated our model in **Reasoning-Off** mode across these benchmarks:

| Benchmark | NVIDIA-Nemotron-3-Nano-4B-BF16 |
| :---- | :---: |
| BFCL v3 | 61.1 |
| IFBench-Prompt | 43.2 |
| IFBench-Instruction | 44.2 |
| Orak | 22.9 |
| IFEval-Prompt | 82.8 |
| IFEval-Instruction | 88 |
| HaluEval | 62.2 |
| RULER (128k) | 91.1 |
| Tau2-Airline | 28.0 |
| Tau2-Retail | 34.8 |
| Tau2-Telecom | 24.9 |
| EQ-Bench3 | 63.2 |

We also evaluated our model in **Reasoning-On** mode across these benchmarks:

| Benchmark | NVIDIA-Nemotron-3-Nano-4B-BF16 |
| :---- | :---: |
| AIME25 | 78.5 |
| MATH500 | 95.4 |
| GPQA | 53.2 |
| LCB | 51.8 |
| BFCL v3 | 61.1 |
| IFEval-Prompt | 87.9 |
| IFEval-Instruction | 92 |
| Tau2-Airline | 33.3 |
| Tau2-Retail | 39.8 |
| Tau2-Telecom | 33 |

All evaluations were done using [NeMo-Skills](https://github.com/NVIDIA/NeMo-Skills/tree/main/docs) and [Orak](https://github.com/krafton-ai/Orak). For Orak, we evaluated on three games (Super Mario, Darkest Dungeon, and Stardew Valley).

### Deployment Geography

Global

### Use Case

NVIDIA-Nemotron-3-Nano-4B is an edge-ready small language model intended for agentic AI on edge platforms (Jetson Thor, GeForce RTX, DGX Spark). It targets key uses including AI gaming NPCs (teammates/companions), local voice assistants (for devices, apps, and games), and IoT automation. It is to be used in English and coding languages.
### Release Date

3/16/2026 on Hugging Face via [https://huggingface.co/](https://huggingface.co/)

## References

- [NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-2-Technical-Report.pdf)
- [Nemotron Elastic: Towards Efficient Many-in-One Reasoning LLMs](https://arxiv.org/abs/2511.16664)
- [NVIDIA Nemotron 3: Efficient and Open Intelligence](https://arxiv.org/abs/2512.20856)
- [Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning](https://arxiv.org/abs/2512.20848)
- [Nemotron 3 Super: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning](https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-Super-Technical-Report.pdf)

## Model Architecture

- Architecture Type: Mamba2-Transformer Hybrid
- Network Architecture: Nemotron-Hybrid
- This model was compressed from [nvidia/NVIDIA-Nemotron-Nano-9B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2)
- Number of model parameters: 3.97 x 10^9

## Input

- Input Type(s): Text
- Input Format(s): String
- Input Parameters: One-Dimensional (1D): Sequences
- Other Properties Related to Input: Context length up to 262K. Supported languages include English.

## Output

- Output Type(s): Text
- Output Format: String
- Output Parameters: One-Dimensional (1D): Sequences
- Other Properties Related to Output: Sequences up to 262K

Our models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration

- Runtime Engine(s): NeMo 25.07
- Supported Hardware Microarchitecture Compatibility: NVIDIA A10G, NVIDIA H100-80GB, NVIDIA A100, GeForce RTX
- Operating System(s): Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

### **Use it with Transformers**

The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.48.3).

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-4B")
model = AutoModelForCausalLM.from_pretrained(
    "nvidia/NVIDIA-Nemotron-3-Nano-4B",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
```

```py
messages = [
    {"role": "system", "content": ""},  # optional system prompt
    {"role": "user", "content": "Write a haiku about GPUs"},
]

tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    tokenized_chat,
    max_new_tokens=32,
    eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0]))
```

temperature=1.0 and top_p=0.95 are recommended for reasoning tasks, while temperature=0.6 and top_p=0.95 are recommended for tool calling.

If you'd like to turn reasoning off, add enable_thinking=False to apply_chat_template(). By default, enable_thinking is set to True.
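As a convenience, the recommended sampling settings above can be collected in a small helper. This is an illustrative sketch, not part of the Transformers or NVIDIA API; the function name and task labels are hypothetical:

```python
def recommended_sampling(task: str) -> dict:
    """Return generate() kwargs matching the recommendations above.

    Hypothetical helper: the task labels "reasoning" and "tool_calling"
    are illustrative, not part of any library API.
    """
    settings = {
        "reasoning": {"temperature": 1.0, "top_p": 0.95},
        "tool_calling": {"temperature": 0.6, "top_p": 0.95},
    }
    if task not in settings:
        raise ValueError(f"unknown task: {task!r}")
    # do_sample=True is required for temperature/top_p to take effect
    # in transformers' model.generate().
    return {"do_sample": True, **settings[task]}
```

Usage with the snippet above would look like `model.generate(tokenized_chat, max_new_tokens=32, **recommended_sampling("reasoning"))`.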
For example, with reasoning disabled:

```py
messages = [
    {"role": "system", "content": ""},  # optional system prompt
    {"role": "user", "content": "Write a haiku about GPUs"},
]

tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    enable_thinking=False,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    tokenized_chat,
    max_new_tokens=32,
    eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0]))
```

### **Use it with vLLM**

This model requires vllm>=0.15.1. If you are on Jetson Thor or DGX Spark, please use [this vLLM container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm?version=26.02-py3).

```shell
pip install -U "vllm>=0.15.1"
```

Download the custom reasoning parser from the Hugging Face repository.

```shell
wget https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16/resolve/main/nano_v3_reasoning_parser.py
```

Launch a vLLM server using the custom parser.

```shell
vllm serve nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 \
  --served-model-name nemotron3-nano-4B-BF16 \
  --max-num-seqs 8 \
  --tensor-parallel-size 1 \
  --max-model-len 262144 \
  --port 8000 \
  --trust-remote-code \
  --mamba_ssm_cache_dtype float32 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --reasoning-parser-plugin nano_v3_reasoning_parser.py \
  --reasoning-parser nano_v3
```

Access the hosted API using a Python client.

```py
from openai import OpenAI

# NOTE: Streaming is preferred for better performance and resource efficiency.
# It allows you to start processing responses as they arrive, reducing latency.

# Synchronous example (non-streaming)
client = OpenAI(
    api_key="your-nvapikey",
    base_url="base-url"
)

response = client.chat.completions.create(
    model="nemotron3-nano-4B-BF16",
    messages=[
        {
            "role": "user",
            "content": "Hello!"
        }
    ],
    temperature=0.7,
    max_tokens=256,
    top_p=0.7,
    stream=False
)

print(response.choices[0].message.content)
```

### Use it with TRT-LLM

Launch the model using TRT-LLM.

```shell
docker run -v /home/root/.cache/huggingface/:/root/.cache/huggingface/ \
  --rm --ulimit memlock=-1 --ulimit stack=67108864 \
  --gpus=all --ipc=host --network host -d \
  -e MODEL=nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 \
  -e HF_TOKEN=$HF_TOKEN \
  nvcr.io/nvidia/tensorrt-llm/release:1.3.0rc6 \
  bash -c '
cat > /tmp/extra-llm-api-config.yml <
```
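The non-streaming vLLM client above notes that streaming is preferred. Below is a minimal streaming sketch against the vLLM server from the earlier section, assuming it is listening on localhost port 8000; the `collect_deltas` helper and the base URL are illustrative, not part of the served API:

```python
def collect_deltas(pieces):
    # Join streamed content deltas into the full reply, skipping empty chunks.
    return "".join(p for p in pieces if p)

if __name__ == "__main__":
    from openai import OpenAI  # requires `pip install openai`

    # Assumed endpoint: the vLLM server launched earlier on port 8000.
    client = OpenAI(api_key="your-nvapikey", base_url="http://localhost:8000/v1")

    stream = client.chat.completions.create(
        model="nemotron3-nano-4B-BF16",
        messages=[{"role": "user", "content": "Hello!"}],
        temperature=0.7,
        top_p=0.7,
        max_tokens=256,
        stream=True,
    )

    pieces = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)  # render tokens as they arrive
            pieces.append(delta)
    print()
    full_reply = collect_deltas(pieces)
```

Streaming reduces time-to-first-token for interactive use cases such as voice assistants and NPCs, since output can be consumed before the full response is generated.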