---
license: cc-by-4.0
task_categories:
- image-classification
- zero-shot-classification
tags:
- biology
- ecology
- wildlife
- camera-traps
- vision-transformers
- clustering
- zero-shot-learning
- biodiversity
- reproducibility
- benchmarking
- embeddings
- dinov3
- dinov2
- bioclip
- clip
- siglip
language:
- en
pretty_name: HUGO-Bench Paper Reproducibility Data
size_categories:
- 100K<n<1M
source_datasets:
- AI-EcoNet/HUGO-Bench
configs:
- config_name: primary_benchmarking
  data_files:
  - split: train
    path: "01_primary_benchmarking/*.csv"
  default: true
- config_name: model_comparison
  data_files:
  - split: train
    path: "02_model_comparison/*.json"
- config_name: dimensionality_reduction
  data_files:
  - split: train
    path: "03_dimensionality_reduction/*.json"
- config_name: clustering_supervised
  data_files:
  - split: train
    path: "04_clustering_supervised/*.json"
- config_name: clustering_unsupervised
  data_files:
  - split: train
    path: "05_clustering_unsupervised/*.json"
- config_name: cluster_count_prediction
  data_files:
  - split: train
    path: "06_cluster_count_prediction/*.json"
- config_name: scaling_tests
  data_files:
  - split: train
    path: "09_scaling_tests/**/*.json"
---

# HUGO-Bench Paper Reproducibility

**Supplementary data and reproducibility materials for the paper:**

> **Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study**
>
> Hugo Markoff, Stefan Hein Bengtson, Michael Ørsted
>
> Aalborg University, Denmark

## Dataset Description

This repository contains the complete experimental results, pre-computed embeddings, and execution logs from our benchmarking study evaluating Vision Transformer models for zero-shot species-level clustering of camera trap images.

### Relationship to HUGO-Bench

This dataset is derived from [HUGO-Bench](https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench), which provides the source images and species annotations. While HUGO-Bench contains the **validated image crops** (139,111 images across 60 species), this repository provides:

- **Clustering results** from all 27,600 experimental configurations
- **Pre-computed embeddings** enabling reproduction without image access
- **Execution logs** for full experimental traceability

| Dataset | Content | Purpose |
|---------|---------|---------|
| [HUGO-Bench](https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench) | 139,111 validated camera trap images | Source images for experiments |
| **This repository** | Results, embeddings, logs | Paper reproducibility |

## Repository Structure

```
├── 01_primary_benchmarking/          # Full 27,600 configuration results
│   ├── clustering_analysis_complete.csv
│   ├── clustering_analysis_with_ami.csv
│   ├── comprehensive_vmeasure_by_class.json
│   └── images_run_*.json             # Subsample definitions (10 runs)

├── 02_model_comparison/              # 5 ViT model comparison
│   ├── dinov3_all_combinations_results.json
│   ├── dinov3_bioclip_siglip_all_methods_results.json
│   └── dinov3_comparison_results.json

├── 03_dimensionality_reduction/      # t-SNE, UMAP, PCA, Isomap, KPCA
│   └── dimensionality_comparison.json

├── 04_clustering_supervised/         # K-variation experiments (K=15,30,45,90,180)
│   ├── k30_metrics_by_class.json
│   └── k_variation_by_dimred_class.json

├── 05_clustering_unsupervised/       # HDBSCAN vs DBSCAN
│   └── unsupervised_metrics_by_class.json

├── 06_cluster_count_prediction/      # Progressive species testing (1,200 runs)
│   ├── progressive_species_testing_results.json
│   └── progressive_species_testing_results_expanded.json

├── 07_intra_species_variation/       # Age, sex, pelage detection
│   ├── wolf_dbscan_clusters/
│   └── intra_cluster/

├── 08_uneven_distribution/           # Long-tailed distribution tests
│   ├── extreme_20_max_test/
│   ├── original_config_extreme_uneven_test/
│   └── even_distribution_results.json

├── 09_scaling_tests/                 # 5-60 species scaling behavior
│   ├── scaling_test_results/
│   └── different_n_test/

├── 10_embeddings/                    # Pre-computed embeddings
│   ├── embeddings/                   # Standard benchmarking embeddings
│   ├── extreme_uneven_embeddings/
│   └── extreme_uneven_image_lists/

└── execution_logs/                   # Complete execution logs
    ├── clustering_dimred_log.txt
    ├── clustering_complete_log.txt
    └── ...
```

## Key Results Summary

Our benchmarking evaluated **27,600 configurations** across:
- **5 ViT Models**: DINOv3, DINOv2, BioCLIP 2, CLIP, SigLIP
- **5 Dimensionality Reduction Methods**: t-SNE, UMAP, PCA, Isomap, Kernel PCA
- **4 Clustering Algorithms**: Hierarchical, GMM, HDBSCAN, DBSCAN
- **60 Species**: 30 mammals + 30 birds from camera trap imagery
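
The factor grid above can be enumerated directly. The sketch below (plain standard library, names copied from this summary) counts only the model × reduction × clustering combinations; the per-class splits and repeated subsample runs that bring the full total to 27,600 are not reproduced here:

```python
from itertools import product

models = ["DINOv3", "DINOv2", "BioCLIP 2", "CLIP", "SigLIP"]
reductions = ["t-SNE", "UMAP", "PCA", "Isomap", "Kernel PCA"]
clusterers = ["Hierarchical", "GMM", "HDBSCAN", "DBSCAN"]

# 5 x 5 x 4 = 100 model/reduction/clustering combinations;
# per-class and per-run splits multiply this further.
grid = list(product(models, reductions, clusterers))
print(len(grid))  # 100
```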

### Top Performing Configuration

| Component | Best Choice | V-Measure |
|-----------|-------------|-----------|
| Model | DINOv3 | 0.958 |
| Dim. Reduction | t-SNE | +26-38 pp vs. others |
| Clustering (supervised) | Hierarchical, K=30 | 0.958 |
| Clustering (unsupervised) | HDBSCAN | 0.943 |

## Usage

### Loading Results with Python

```python
import pandas as pd
import json

# Load primary benchmarking results
results = pd.read_csv("01_primary_benchmarking/clustering_analysis_complete.csv")

# Filter for the best model
dinov3_results = results[results['model'] == 'dinov3']

# Load JSON metrics
with open("05_clustering_unsupervised/unsupervised_metrics_by_class.json") as f:
    unsupervised = json.load(f)
```
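
From there, ordinary pandas operations summarize the benchmark. A minimal sketch, run on a small hand-made frame so it executes without the CSV; the column names (`model`, `dim_reduction`, `clustering`, `v_measure`) follow the `primary_benchmarking` schema, but the toy values are illustrative only:

```python
import pandas as pd

# Toy rows standing in for clustering_analysis_complete.csv
df = pd.DataFrame({
    "model":         ["dinov3", "dinov3", "clip", "clip"],
    "dim_reduction": ["tsne", "pca", "tsne", "pca"],
    "clustering":    ["hierarchical"] * 4,
    "v_measure":     [0.958, 0.70, 0.80, 0.55],
})

# Mean V-measure per model, best first
ranking = df.groupby("model")["v_measure"].mean().sort_values(ascending=False)
print(ranking)
```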

### Using Pre-computed Embeddings

The `10_embeddings/` folder contains pre-computed embeddings that allow running clustering experiments **without needing the original images**:

```python
import numpy as np
import json

# Load embeddings
embeddings = np.load("10_embeddings/embeddings/dinov3_embeddings.npy")

# Load the corresponding image list
with open("01_primary_benchmarking/images_run_1.json") as f:
    image_list = json.load(f)
```
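
As a minimal sketch of the best supervised pipeline reported above (t-SNE followed by hierarchical clustering at K=30), the example below uses random stand-in vectors so it runs without the `.npy` files; substitute the loaded `embeddings` array for real results:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import AgglomerativeClustering

# Random stand-in for the dinov3 embeddings array
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 64))

# t-SNE to 2D, then hierarchical clustering with K=30
reduced = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
labels = AgglomerativeClustering(n_clusters=30).fit_predict(reduced)

print(labels.shape, len(set(labels)))  # one label per embedding, 30 clusters
```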

### Reproducing Paper Tables

Each folder corresponds to specific paper sections:

| Paper Section | Data Folder |
|--------------|-------------|
| Table 3 (V-measure by model) | `01_primary_benchmarking/` |
| Table 4 (Dim. reduction comparison) | `03_dimensionality_reduction/` |
| Table 5 (Supervised K variation) | `04_clustering_supervised/` |
| Table 6 (Unsupervised comparison) | `05_clustering_unsupervised/` |
| Figure 5 (Cluster count prediction) | `06_cluster_count_prediction/` |
| Table 7 (Intra-species traits) | `07_intra_species_variation/` |
| Table 8 (Uneven distribution) | `08_uneven_distribution/` |
| Figure 8 (Scaling behavior) | `09_scaling_tests/` |

## File Formats

| Extension | Description | How to Load |
|-----------|-------------|-------------|
| `.csv` | Tabular results | `pandas.read_csv()` |
| `.json` | Structured metrics | `json.load()` |
| `.npy` | NumPy embeddings | `numpy.load()` |
| `.txt`/`.log` | Execution logs | Plain text |
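
A small dispatch helper can route any of these files to the right loader per the table above. `load_result` is a hypothetical convenience function, not part of the repository's code:

```python
import json
from pathlib import Path

import numpy as np
import pandas as pd

def load_result(path):
    """Load a results file by extension (hypothetical helper)."""
    path = Path(path)
    if path.suffix == ".csv":
        return pd.read_csv(path)
    if path.suffix == ".json":
        return json.loads(path.read_text())
    if path.suffix == ".npy":
        return np.load(path)
    return path.read_text()  # .txt / .log execution logs

# Quick demo on a throwaway JSON file
demo = Path("demo_metrics.json")
demo.write_text('{"v_measure": 0.958}')
print(load_result(demo))  # {'v_measure': 0.958}
demo.unlink()
```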

## Citation

If you use this data, please cite both the paper and HUGO-Bench:

```bibtex
@article{markoff2025vit_clustering,
  title={Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study},
  author={Markoff, Hugo and Bengtson, Stefan Hein and {\O}rsted, Michael},
  journal={TBD},
  year={2025}
}

@dataset{hugo_bench,
  title={HUGO-Bench: A Benchmark Dataset for Camera Trap Species Clustering},
  author={AI-EcoNet},
  year={2025},
  url={https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench}
}
```

## License

This dataset is released under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## Contact

- **Hugo Markoff** - khbm@bio.aau.dk
- Department of Chemistry and Bioscience, Aalborg University

## Related Resources

- 📊 [HUGO-Bench Dataset](https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench) - Source images (139,111 validated crops)
- 💻 [GitHub Repository](https://github.com/HugoMarkoff/animal_visual_transformer) - Code and scripts
- 🌐 [Interactive Visualization](https://hugomarkoff.github.io/animal_visual_transformer/) - Explore clustering results