---
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
language:
- ko
size_categories:
- 1K<n<10K
license: cc-by-nc-sa-4.0
---
# K-MetBench: A Multi-Dimensional Benchmark for Fine-Grained Evaluation of Expert Reasoning, Locality, and Multimodality in Meteorology

K-MetBench is a multi-dimensional benchmark for evaluating meteorology models across accuracy, reasoning quality, geo-cultural alignment, and fine-grained domain coverage.

The public eval protocol uses only the explicit advanced benchmark and the explicit reasoning benchmark, followed by LLM-as-a-judge evaluation. The implicit split may be distributed with the dataset, but it is not part of the public eval kit.
## Dataset Summary

- Total questions: 1774
- Total image references: 151 (59 question images, 92 choice images)
- Modality split: text-only 1692, multimodal 82
- Reasoning subset: 141
- Geo-cultural subset: 73
- Parts: Part 1: 373, Part 2: 332, Part 3: 359, Part 4: 376, Part 5: 334
- Format: JSON file with relative image paths under `data/images/`
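As a quick sanity check, the counts above are internally consistent. This small sketch verifies the arithmetic:

```python
# Verify that the summary statistics above add up.
parts = {1: 373, 2: 332, 3: 359, 4: 376, 5: 334}
total_questions = 1774

assert sum(parts.values()) == total_questions   # parts sum to the total
assert 1692 + 82 == total_questions             # text-only + multimodal = total
assert 59 + 92 == 151                           # question images + choice images
print("summary counts are consistent")
```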
## Data Format

Each sample contains:

| Field | Type | Description |
|---|---|---|
| `id` | int | Stable item identifier |
| `question.text` | string | Question text |
| `question.image` | string | Relative path to a question image, if present |
| `choices[].text` | string | Choice text |
| `choices[].image` | string | Relative path to a choice image, if present |
| `answer` | int | Zero-based correct choice index |
| `source` | string | Exam session source tag |
| `source_id` | int | Original source-local item id |
| `rationale` | string | Expert-verified reasoning text when available |
| `korean` | bool | Geo-cultural subset flag |
| `multimodal` | bool | Multimodal subset flag |
| `part` | int | Official part number (1-5) |
| `category` | object | Subject/topic metadata |
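The schema above can be checked programmatically. A minimal sketch, where `validate_record` is a hypothetical helper (not part of the eval kit) and the example record uses placeholder values, not real dataset content:

```python
def validate_record(rec: dict) -> bool:
    """Check a record against the fields described in the Data Format table."""
    if not isinstance(rec.get("id"), int):
        return False
    question = rec.get("question", {})
    if not isinstance(question.get("text"), str):
        return False
    choices = rec.get("choices", [])
    if not choices or not all(isinstance(c.get("text"), str) for c in choices):
        return False
    answer = rec.get("answer")
    # `answer` is a zero-based index into `choices`
    if not isinstance(answer, int) or not (0 <= answer < len(choices)):
        return False
    if not isinstance(rec.get("korean"), bool) or not isinstance(rec.get("multimodal"), bool):
        return False
    return rec.get("part") in {1, 2, 3, 4, 5}

# Placeholder record for illustration only (not an actual dataset item).
example = {
    "id": 1,
    "question": {"text": "...", "image": None},
    "choices": [{"text": "...", "image": None}, {"text": "...", "image": None}],
    "answer": 0,
    "source": "example-session",
    "source_id": 1,
    "rationale": None,
    "korean": False,
    "multimodal": False,
    "part": 1,
    "category": {},
}
print(validate_record(example))  # True
```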
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/soyeonbot/K-MetBench/resolve/main/data/kmetbench.json",
    split="train",  # a single data_files entry is exposed as the "train" split
)

sample = dataset[0]
print(sample["question"]["text"])
print(sample["answer"])
```
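A loaded sample can be rendered into a multiple-choice prompt directly from these fields. A minimal sketch; the lettering scheme and the demo record below are illustrative assumptions, not the official eval kit's prompt format:

```python
def format_prompt(sample: dict) -> str:
    """Render the question text followed by lettered choices (A, B, C, ...)."""
    lines = [sample["question"]["text"]]
    for i, choice in enumerate(sample["choices"]):
        lines.append(f"{chr(ord('A') + i)}. {choice['text']}")
    return "\n".join(lines)

# Placeholder sample for illustration only.
demo = {
    "question": {"text": "Example question?"},
    "choices": [{"text": "first choice"}, {"text": "second choice"}],
    "answer": 1,  # zero-based index, i.e. choice "B"
}
print(format_prompt(demo))
```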
### Viewing Referenced Images

```python
import requests
from io import BytesIO
from PIL import Image

IMAGE_BASE = "https://huggingface.co/datasets/soyeonbot/K-MetBench/resolve/main/data/images/"

# `question.image` is only set for multimodal samples, so check before fetching.
image_rel_path = sample["question"]["image"]
if image_rel_path:
    image_url = IMAGE_BASE + image_rel_path
    image = Image.open(BytesIO(requests.get(image_url, timeout=30).content))
    image.show()
```
### Running the Public Eval Kit

```shell
pip install -r requirements-eval.txt

# Explicit advanced benchmark
python scripts/eval/eval_openai_compatible.py --model <model> --prompt-type advanced \
    --explicit-data-file data/kmetbench.json --image-root data/images

# Explicit reasoning benchmark
python scripts/eval/eval_openai_compatible.py --model <model> --prompt-type reasoning \
    --explicit-data-file data/kmetbench.json --image-root data/images

# LLM-as-a-judge evaluation of the reasoning predictions
python scripts/eval/eval_reasoning_judge.py --model <model> --predictions <explicit_reasoning_json>
```
## License

This dataset is released under CC BY-NC-SA 4.0.

## Contact

For questions about the dataset, contact Soyeon Kim (soyeon.k@kaist.ac.kr).
## Citation

```bibtex
@inproceedings{kim2026kmetbench,
  title     = {K-MetBench: A Multi-Dimensional Benchmark for Fine-Grained Evaluation of Expert Reasoning, Locality, and Multimodality in Meteorology},
  author    = {Kim, Soyeon and Kang, Cheongwoong and Lee, Myeongjin and Chang, Eun-Chul and Lee, Jaedeok and Choi, Jaesik},
  booktitle = {The 64th Annual Meeting of the Association for Computational Linguistics},
  year      = {2026},
  url       = {https://openreview.net/forum?id=1Gn5pKek8k}
}
```