nielsr (HF Staff) committed ee2b407 · verified · 1 parent: a233326

Add model card for TerraScope


Hi! I'm Niels from the Hugging Face community science team. I've opened this PR to add a model card for TerraScope.

This includes:
- Metadata for better discoverability (pipeline tag, library name, license).
- Links to the paper ([TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation](https://huggingface.co/papers/2603.19039)), the project page, and the code repository.
- A description of the model's key capabilities in geospatial reasoning.

Files changed (1):
1. README.md (+39 −0, new file)
---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- earth-observation
- geospatial-reasoning
- multimodal
- remote-sensing
---
# TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation

[**TerraScope**](https://shuyansy.github.io/terrascope/) is a unified Vision-Language Model (VLM) specifically designed for Earth Observation (EO). It addresses tasks that require grounding complex spatial reasoning in precise pixel-level visual representations.

- **Paper:** [TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation](https://huggingface.co/papers/2603.19039)
- **Project Page:** [https://shuyansy.github.io/terrascope/](https://shuyansy.github.io/terrascope/)
- **Repository:** [https://github.com/shuyansy/Earth-Observation-VLMs](https://github.com/shuyansy/Earth-Observation-VLMs)
## Model Description

TerraScope delivers pixel-grounded geospatial reasoning with two key capabilities:
1. **Modality-flexible reasoning:** It handles single-modality inputs (optical or SAR) and adaptively fuses the two modalities into the reasoning process when both are available.
2. **Multi-temporal reasoning:** It integrates temporal sequences for change analysis across multiple time points.

The model was trained on **Terra-CoT**, a large-scale dataset containing 1 million samples with pixel-level masks embedded in reasoning chains. It achieves strong results on **TerraScope-Bench**, a benchmark that evaluates both answer accuracy and mask quality for geospatial tasks.
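## Usage

The snippet below is a minimal sketch of loading the model through the generic `transformers` image-text-to-text pipeline, matching the `pipeline_tag` in the metadata above. The repository ID is a placeholder, and the exact prompt format and how pixel-level masks are returned may differ from what the released code expects; treat this as an assumption-laden starting point rather than the official interface.

```python
from transformers import pipeline

# Placeholder repo ID (assumption) -- replace with the actual TerraScope checkpoint once published.
pipe = pipeline("image-text-to-text", model="<org>/<terrascope-checkpoint>")

# A chat-style request with one remote-sensing image and a grounded question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "path/or/url/to/optical_scene.png"},
            {"type": "text", "text": "Segment the damaged buildings and explain your reasoning."},
        ],
    }
]

# Generate a reasoning chain; return only the newly generated text.
outputs = pipe(text=messages, max_new_tokens=256, return_full_text=False)
print(outputs[0]["generated_text"])
```

For multi-modality (optical + SAR) or multi-temporal inputs, please refer to the [code repository](https://github.com/shuyansy/Earth-Observation-VLMs) for the exact input conventions.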
## Citation

If you find this work useful, please consider citing:

```bibtex
@article{shu2026terrascope,
  title={TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation},
  author={Shu, Yan and Ren, Bin and Xiong, Zhitong and Zhu, Xiao Xiang and Demir, Beg{\"u}m and Sebe, Nicu and Rota, Paolo},
  journal={arXiv preprint arXiv:2603.19039},
  year={2026}
}
```