Commit 557c268 by HugoMarkoff (verified) · Parent: 750380d

Update README.md
Files changed (1): README.md (+233 -233)
---
license: cc-by-4.0
task_categories:
- image-classification
- zero-shot-classification
tags:
- biology
- ecology
- wildlife
- camera-traps
- vision-transformers
- clustering
- zero-shot-learning
- biodiversity
- reproducibility
- benchmarking
- embeddings
- dinov3
- dinov2
- bioclip
- clip
- siglip
language:
- en
pretty_name: HUGO-Bench Paper Reproducibility Data
size_categories:
- 100K<n<1M
source_datasets:
- AI-EcoNet/HUGO-Bench
configs:
- config_name: primary_benchmarking
  data_files: primary_benchmarking/train-*.parquet
  default: true
- config_name: model_comparison
  data_files: model_comparison/train-*.parquet
- config_name: dimensionality_reduction
  data_files: dimensionality_reduction/train-*.parquet
- config_name: clustering_supervised
  data_files: clustering_supervised/train-*.parquet
- config_name: clustering_unsupervised
  data_files: clustering_unsupervised/train-*.parquet
- config_name: cluster_count_prediction
  data_files: cluster_count_prediction/train-*.parquet
- config_name: intra_species_variation
  data_files: intra_species_variation/train-*.parquet
- config_name: scaling_tests
  data_files: scaling_tests/train-*.parquet
- config_name: uneven_distribution
  data_files: uneven_distribution/train-*.parquet
- config_name: subsample_definitions
  data_files: subsample_definitions/train-*.parquet
- config_name: embeddings_dinov3_vith16plus
  data_files: embeddings_dinov3_vith16plus/train-*.parquet
- config_name: embeddings_dinov2_vitg14
  data_files: embeddings_dinov2_vitg14/train-*.parquet
- config_name: embeddings_bioclip2_vitl14
  data_files: embeddings_bioclip2_vitl14/train-*.parquet
- config_name: embeddings_clip_vitl14
  data_files: embeddings_clip_vitl14/train-*.parquet
- config_name: embeddings_siglip_vitb16
  data_files: embeddings_siglip_vitb16/train-*.parquet
---

# HUGO-Bench Paper Reproducibility

**Supplementary data and reproducibility materials for the paper:**

> **Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study** ([arXiv:2602.03894](https://arxiv.org/abs/2602.03894))
>
> Hugo Markoff, Stefan Hein Bengtson, Michael Ørsted
>
> Aalborg University, Denmark

## Dataset Description

This repository contains complete experimental results, pre-computed embeddings, and execution logs from our benchmarking study evaluating Vision Transformer models for zero-shot clustering of wildlife camera trap images.

### Related Resources

- **Source Images**: [AI-EcoNet/HUGO-Bench](https://huggingface.co/datasets/AI-EcoNet/HUGO-Bench) - 139,111 wildlife images
- **Code Repository**: Coming soon

## Repository Structure

```
├── primary_benchmarking/              # Main benchmark results (27,600 configurations)
├── model_comparison/                  # Cross-model comparisons
├── dimensionality_reduction/          # UMAP/t-SNE/PCA analysis
├── clustering_supervised/             # Supervised clustering metrics
├── clustering_unsupervised/           # Unsupervised clustering results
├── cluster_count_prediction/          # Optimal cluster count analysis
├── intra_species_variation/           # Within-species cluster analysis
│   ├── train-*.parquet                    # Analysis results
│   └── cluster_image_mappings.json        # Image-to-cluster assignments
├── scaling_tests/                     # Sample size scaling experiments
├── uneven_distribution/               # Class imbalance experiments
├── subsample_definitions/             # Reproducible subsample definitions
├── embeddings_*/                      # Pre-computed embeddings (5 models)
│   ├── embeddings_dinov3_vith16plus/      # 120K embeddings, 1280-dim
│   ├── embeddings_dinov2_vitg14/          # 120K embeddings, 1536-dim
│   ├── embeddings_bioclip2_vitl14/        # 120K embeddings, 768-dim
│   ├── embeddings_clip_vitl14/            # 120K embeddings, 768-dim
│   └── embeddings_siglip_vitb16/          # 120K embeddings, 768-dim
├── extreme_uneven_embeddings/         # Full-dataset embeddings (PKL)
│   ├── aves_full_dinov3_embeddings.pkl        # 74,396 embeddings
│   └── mammalia_full_dinov3_embeddings.pkl    # 65,484 embeddings
└── execution_logs/                    # Experiment execution logs
```

## Quick Start

### Load Primary Benchmark Results

```python
from datasets import load_dataset

# Load the main benchmark results (27,600 configurations)
ds = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "primary_benchmarking")
print(f"Configurations: {len(ds['train'])}")
```

### Load Pre-computed Embeddings

```python
from datasets import load_dataset

# Load DINOv3 embeddings (120,000 images)
embeddings = load_dataset(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "embeddings_dinov3_vith16plus"
)
print(f"Embeddings shape: {len(embeddings['train'])} x {len(embeddings['train'][0]['embedding'])}")
```
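Once loaded, the per-row `embedding` lists are easiest to work with as a single NumPy matrix. A minimal sketch, using a few synthetic rows in place of the downloaded split (the real split has 120,000 rows of 1280-dim DINOv3 vectors):

```python
import numpy as np

# Synthetic stand-ins for dataset rows; each carries an "embedding" list,
# matching the field accessed in the loading example above (real dim: 1280).
rows = [{"embedding": list(np.random.default_rng(i).normal(size=8))} for i in range(4)]

# Stack into an (n_images, dim) float32 matrix
X = np.asarray([r["embedding"] for r in rows], dtype=np.float32)

# L2-normalize rows so that dot products become cosine similarities
X /= np.linalg.norm(X, axis=1, keepdims=True)
sim = X @ X.T  # (n, n) cosine-similarity matrix

print(sim.shape)  # (4, 4); diagonal entries are the 1.0 self-similarities
```

The same stack-then-normalize step is the usual entry point for nearest-neighbour lookups or any of the clustering methods below.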

### Load Specific Analysis Results

```python
from datasets import load_dataset

# Model comparison results
model_comp = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "model_comparison")

# Scaling test results
scaling = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "scaling_tests")

# Intra-species variation analysis
intra = load_dataset("AI-EcoNet/HUGO-Bench-Paper-Reproducibility", "intra_species_variation")
```

### Load Cluster Image Mappings

The intra-species analysis includes a mapping file showing which images belong to which clusters:

```python
from huggingface_hub import hf_hub_download
import json

# Download the mapping file
mapping_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "intra_species_variation/cluster_image_mappings.json",
    repo_type="dataset"
)

with open(mapping_file) as f:
    mappings = json.load(f)

# Structure: {species: {run: {cluster: [image_names]}}}
print(f"Species analyzed: {list(mappings.keys())}")
```

### Load Full Dataset Embeddings

For the extreme uneven distribution experiments, we provide full-dataset embeddings:

```python
from huggingface_hub import hf_hub_download
import pickle

# Download the Aves embeddings (74,396 images)
pkl_file = hf_hub_download(
    "AI-EcoNet/HUGO-Bench-Paper-Reproducibility",
    "extreme_uneven_embeddings/aves_full_dinov3_embeddings.pkl",
    repo_type="dataset"
)

with open(pkl_file, 'rb') as f:
    data = pickle.load(f)

print(f"Embeddings: {data['embeddings'].shape}")  # (74396, 1280)
print(f"Labels: {len(data['labels'])}")
print(f"Paths: {len(data['paths'])}")
```

## Experimental Setup

### Models Evaluated

| Model     | Architecture | Embedding Dim | Pre-training    |
|-----------|--------------|---------------|-----------------|
| DINOv3    | ViT-H/16+    | 1280          | Self-supervised |
| DINOv2    | ViT-G/14     | 1536          | Self-supervised |
| BioCLIP 2 | ViT-L/14     | 768           | Biology domain  |
| CLIP      | ViT-L/14     | 768           | Contrastive     |
| SigLIP    | ViT-B/16     | 768           | Sigmoid loss    |

### Clustering Methods

- K-Means, DBSCAN, HDBSCAN, Agglomerative, Spectral
- GMM (Gaussian Mixture Models)
- With and without dimensionality reduction (UMAP, t-SNE, PCA)
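The pipeline all of these methods share is: embed images, optionally reduce dimensionality, then cluster the vectors. As a dependency-light illustration of that last step, a tiny Lloyd's-iteration K-Means on two synthetic embedding blobs (this stands in for a library implementation such as scikit-learn's, which is what you would use in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated synthetic "embedding" blobs, 10 points each
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(10, 2)),
    rng.normal(loc=10.0, scale=0.5, size=(10, 2)),
])

# Lloyd's iterations: assign each point to its nearest centroid,
# then recompute centroids as cluster means
centroids = X[[0, 10]].copy()  # seed one centroid from each blob
for _ in range(10):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    centroids = np.array([X[assign == k].mean(axis=0) for k in range(2)])

print(assign)  # first 10 points land in one cluster, last 10 in the other
```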

### Evaluation Metrics

- **Supervised**: Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Accuracy, F1
- **Unsupervised**: Silhouette Score, Calinski-Harabasz Index, Davies-Bouldin Index
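ARI, the main supervised metric, corrects the Rand Index for chance agreement: random labelings score near 0 while identical partitions score 1, regardless of how cluster IDs are permuted. A from-scratch version on toy labels (in practice `sklearn.metrics.adjusted_rand_score` is the usual tool; this just makes the pair-counting formula concrete):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """ARI via the contingency-table pair-counting formula."""
    n = len(labels_true)
    nij = Counter(zip(labels_true, labels_pred))  # contingency cells
    a = Counter(labels_true)                      # row sums
    b = Counter(labels_pred)                      # column sums
    index = sum(comb(c, 2) for c in nij.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

# Permuted-but-identical partitions score a perfect 1.0
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
# A maximally crossed partition scores below zero (approximately -0.5)
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))
```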

## Citation

If you use this dataset, please cite:

```bibtex
@article{markoff2026vision,
  title={Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study},
  author={Markoff, Hugo and Bengtson, Stefan Hein and Ørsted, Michael},
  journal={[Journal/Conference]},
  year={2026}
}
```

## License

This dataset is released under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).

## Contact

For questions or issues, please open an issue in this repository or contact the authors.