SenseNova-SI: Scaling Spatial Intelligence with Multimodal Foundation Models
Overview
Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to building high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks while maintaining strong general multimodal understanding. More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization enabled by diverse training data, analyze the risk of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate potential downstream applications. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction. In the future, SenseNova-SI will be integrated with larger-scale in-house models.
Model Zoo
| Model | Base Architecture | SI Dataset Scale | EASI-8 | Other Remarks |
|---|---|---|---|---|
| SenseNova-SI-1.5-InternVL3-8B | SenseNova-SI-1.4-InternVL3-8B | 1.5M | 64.4 | Enhanced capability in solid geometry |
| SenseNova-SI-1.4-InternVL3-8B | InternVL3 | 29M | 63.7 | Enhanced capability in grounding and depth estimation |
| SenseNova-SI-1.3-InternVL3-8B | InternVL3 | 14M | 65.2 | Best in spatial intelligence, with enhanced capabilities for open-ended short QA |
| SenseNova-SI-1.2-InternVL3-8B | InternVL3 | 10M | 64.5 | - |
| SenseNova-SI-1.1-InternVL3-8B | InternVL3 | 8M | 61.5 | - |
| SenseNova-SI-1.1-InternVL3-2B | InternVL3 | 8M | 49.4 | - |
| SenseNova-SI-1.1-Qwen3-VL-8B | Qwen3-VL | 8M | 58.1 | - |
| SenseNova-SI-1.1-Qwen2.5-VL-7B | Qwen2.5-VL | 8M | 51.0 | - |
| SenseNova-SI-1.1-Qwen2.5-VL-3B | Qwen2.5-VL | 8M | 45.7 | - |
| SenseNova-SI-1.1-BAGEL-7B-MoT | BAGEL | 8M | 48.6 | Unified understanding and generation model |
Release Information
Currently, we build SenseNova-SI upon popular open-source foundation models to maximize compatibility with existing research pipelines. In this release, we present SenseNova-SI-1.5-InternVL3-8B, SenseNova-SI-1.4-InternVL3-8B, SenseNova-SI-1.3-InternVL3-8B, SenseNova-SI-1.2-InternVL3-8B, SenseNova-SI-1.1-Qwen2.5-VL-3B, SenseNova-SI-1.1-Qwen2.5-VL-7B, and SenseNova-SI-1.1-Qwen3-VL-8B. SenseNova-SI-1.5-InternVL3-8B demonstrates strong spatial intelligence across a wide range of benchmarks, with notable improvements in analyzing and solving solid geometry problems, achieving an accuracy of 63.5 on SolidGeo MCQ, 72.7 on SolidMath, and 68.9 on Math3D.
| Model | VSI | MMSI | MindCube-Tiny | ViewSpatial | SITE | BLINK | 3DSRBench | EmbSpatial-Bench |
|---|---|---|---|---|---|---|---|---|
| Open-source Models (~2B) | ||||||||
| InternVL3-2B | 32.9 | 26.5 | 37.5 | 32.5 | 30.0 | 50.8 | 47.7 | 60.1 |
| Qwen3-VL-2B-Instruct | 50.3 | 28.9 | 34.5 | 36.9 | 35.6 | 53.2 | 47.5 | 70.1 |
| MindCube-3B-RawQA-SFT | 17.2 | 1.7 | 51.7 | 24.1 | 6.3 | 35.1 | 2.8 | 37.0 |
| SpatialLadder-3B | 44.8 | 27.4 | 43.4 | 39.8 | 27.9 | 43.0 | 42.8 | 58.2 |
| SpatialMLLM-4B | 46.3 | 26.1 | 33.4 | 34.6 | 18.0 | 40.5 | 36.2 | 50.0 |
| VST-3B-SFT | 57.9 | 30.2 | 35.9 | 52.8 | 35.8 | 58.8 | 54.1 | 69.0 |
| Cambrian-S-3B | 57.3 | 25.2 | 32.5 | 39.0 | 28.3 | 37.7 | 50.9 | 63.5 |
| Open-source Models (~8B) | ||||||||
| InternVL3-8B | 42.1 | 28.0 | 41.5 | 38.6 | 41.1 | 53.5 | 44.3 | 76.4 |
| Qwen3-VL-8B-Instruct | 57.9 | 31.1 | 29.4 | 42.2 | 45.8 | 66.7 | 53.9 | 77.7 |
| BAGEL-7B-MoT | 31.4 | 31.0 | 34.7 | 41.3 | 37.0 | 63.7 | 50.2 | 73.1 |
| SpaceR-7B | 41.5 | 27.4 | 37.9 | 35.8 | 34.2 | 49.6 | 40.5 | 66.9 |
| ViLaSR-7B | 44.6 | 30.2 | 35.1 | 35.7 | 38.7 | 51.4 | 46.6 | 67.3 |
| VST-7B-SFT | 60.6 | 32.0 | 39.7 | 50.5 | 39.6 | 61.9 | 54.6 | 73.7 |
| Cambrian-S-7B | 67.5 | 25.8 | 39.6 | 40.9 | 33.0 | 37.9 | 54.8 | 72.8 |
| SenseNova-SI-1.5-InternVL3-8B | 67.3 | 38.3 | 92.1 | 59.0 | 47.5 | 69.5 | 61.3 | 80.3 |
| Proprietary Models | ||||||||
| Gemini-2.5-pro-2025-06 | 53.5 | 38.0 | 57.6 | 46.0 | 57.0 | 73.5 | 59.3 | 78.9 |
| Grok-4-2025-07-09 | 47.9 | 37.8 | 63.5 | 43.2 | 47.0 | 56.4 | 54.9 | 75.7 |
| GPT-5-2025-08-07 | 55.0 | 41.8 | 56.3 | 45.5 | 61.8 | 68.0 | 60.3 | 81.6 |
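The EASI-8 score in the Model Zoo table appears to be the unweighted mean of the eight benchmarks above; a quick sanity check, using the scores copied from the SenseNova-SI-1.5-InternVL3-8B row, supports this reading:

```python
# Scores for SenseNova-SI-1.5-InternVL3-8B, copied from the table above.
scores = {
    "VSI": 67.3, "MMSI": 38.3, "MindCube-Tiny": 92.1, "ViewSpatial": 59.0,
    "SITE": 47.5, "BLINK": 69.5, "3DSRBench": 61.3, "EmbSpatial-Bench": 80.3,
}

# Unweighted mean across the eight spatial intelligence benchmarks.
easi8 = round(sum(scores.values()) / len(scores), 1)
print(easi8)  # 64.4, matching the EASI-8 column in the Model Zoo table
```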
For solid geometry benchmarks, we report the following results:
| Model | SolidGeo MCQ | SpatialViz-Bench | SolidMath | Math3D |
|---|---|---|---|---|
| InternVL3-8B | 36.4 | 32.0 | 42.5 | 43.7 |
| SenseNova-SI-1.3-InternVL3-8B | 36.5 | 29.6 | 39.6 | 40.3 |
| SenseNova-SI-1.5-InternVL3-8B | 63.5 | 33.0 | 72.7 | 68.9 |
SolidMath and Math3D are internal benchmarks constructed from K-12 question banks, containing Chinese multiple-choice problems on solid geometry. SolidMath is built from in-domain data, while Math3D is derived from out-of-domain data.
🛠️ QuickStart
Installation
We recommend using uv to manage the environment.
uv installation guide: https://docs.astral.sh/uv/getting-started/installation/#installing-uv
```shell
git clone git@github.com:OpenSenseNova/SenseNova-SI.git
cd SenseNova-SI/
uv sync --extra cu124  # or one of [cu118|cu121|cu124|cu126|cu128|cu129], depending on your CUDA version
uv sync
source .venv/bin/activate
```
Hello World
A simple image-free test to verify the environment setup and download the model weights.
```shell
python example.py \
  --question "Hello" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Examples
Example 1
This example is from SITE-Bench:
```shell
python example.py \
  --image_paths examples/Q1_1.png \
  --question "Consider the real-world 3D locations of the objects. Which is closer to the sink, the toilet paper or the towel?\nOptions: \nA. toilet paper\nB. towel\nGive me the answer letter directly. The best answer is:" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 1
Q:Consider the real-world 3D locations of the objects. Which is closer to the sink, the toilet paper or the towel?\nOptions: \nA. toilet paper\nB. towel\nGive me the answer letter directly. The best answer is:
GT: A
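The question strings in these examples follow a common multiple-choice template: the question, an `Options:` list, and a direct-answer instruction. A small helper for assembling such prompts, shown here as a hypothetical sketch that is not part of `example.py`, could look like:

```python
def build_mcq_prompt(question: str, options: list[str]) -> str:
    """Build a multiple-choice prompt in the template used by Example 1.

    Hypothetical helper for illustration; not part of example.py.
    """
    letters = "ABCDEFGH"
    option_lines = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return (
        f"{question}\nOptions: \n{option_lines}\n"
        "Give me the answer letter directly. The best answer is:"
    )


prompt = build_mcq_prompt(
    "Consider the real-world 3D locations of the objects. "
    "Which is closer to the sink, the toilet paper or the towel?",
    ["toilet paper", "towel"],
)
print(prompt)
```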
Example 2
This example is from MMSI-Bench:
```shell
python example.py \
  --image_paths examples/Q2_1.png examples/Q2_2.png \
  --question "If the landscape painting is on the east side of the bedroom, where is the window located in the bedroom?\nOptions: A. North side, B. South side, C. West side, D. East side\nAnswer with the option's letter from the given choices directly. Enclose the option's letter within ``." \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 2
Q:If the landscape painting is on the east side of the bedroom, where is the window located in the bedroom?\nOptions: A. North side, B. South side, C. West side, D. East side\nAnswer with the option's letter from the given choices directly. Enclose the option's letter within ``.
GT: C
Example 3
This example demonstrates the model's capability in solid geometry (three views):
```shell
python example.py \
  --image_paths examples/Q3_1.png \
  --question "Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 3
Q: Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>
GT: D
Example 4
This example demonstrates the model's capability in solid geometry (nets of 3D shapes):
```shell
python example.py \
  --image_paths examples/Q4_1.png \
  --question "请将你的思考过程放在<think></think>标签内,并将你的最终答案放在<answer></answer>标签内。" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 4
Q: Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>
GT: D
Example 5
This example demonstrates the model's capability in solid geometry (three views):
```shell
python example.py \
  --image_paths examples/Q5_1.png \
  --question "请将你的思考过程放在<think></think>标签内,并将你的最终答案放在<answer></answer>标签内。" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 5
Q: Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>
GT: B
Example 6
This example demonstrates the model's capability in solid geometry (three views):
```shell
python example.py \
  --image_paths examples/Q6_1.png \
  --question "请将你的思考过程放在<think></think>标签内,并将你的最终答案放在<answer></answer>标签内。" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 6
Q: Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>
GT: C
Example 7
This example demonstrates the model's capability in solid geometry (3D graphic reasoning):
```shell
python example.py \
  --image_paths examples/Q7_1.png \
  --question "请将你的思考过程放在<think></think>标签内,并将你的最终答案放在<answer></answer>标签内。" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 7
Q: Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>
GT: C
Example 8
This example demonstrates the model's capability in solid geometry (three views):
```shell
python example.py \
  --image_paths examples/Q8_1.png \
  --question "请将你的思考过程放在<think></think>标签内,并将你的最终答案放在<answer></answer>标签内。" \
  --model_path sensenova/SenseNova-SI-1.5-InternVL3-8B
```
Details of Example 8
Q: Enclose your thinking process in <think> </think> tags and your final answer in <answer> </answer>
GT: A
Evaluation
To reproduce the benchmark results above, please refer to EASI to evaluate SenseNova-SI on mainstream spatial intelligence benchmarks.
🖊️ Citation
```bibtex
@article{sensenova-si,
  title   = {Scaling Spatial Intelligence with Multimodal Foundation Models},
  author  = {Cai, Zhongang and Wang, Ruisi and Gu, Chenyang and Pu, Fanyi and Xu, Junxiang and Wang, Yubo and Yin, Wanqi and Yang, Zhitao and Wei, Chen and Sun, Qingping and Zhou, Tongxi and Li, Jiaqi and Pang, Hui En and Qian, Oscar and Wei, Yukun and Lin, Zhiqian and Shi, Xuanke and Deng, Kewang and Han, Xiaoyang and Chen, Zukai and Fan, Xiangyu and Deng, Hanming and Lu, Lewei and Pan, Liang and Li, Bo and Liu, Ziwei and Wang, Quan and Lin, Dahua and Yang, Lei},
  journal = {arXiv preprint arXiv:2511.13719},
  year    = {2025}
}
```