# NNIRP Dataset: How You Split Is What You Get
A dataset and evaluation protocol for predicting the inference runtime of neural network models from their ONNX computational graphs. It contains ~107k profiling samples from 190 source configurations spanning 6 architecture families, organized into 125 clusters across 28 sub-families.
## Dataset Summary
Each sample includes three data layers:
| Layer | Format | Size | Description |
|---|---|---|---|
| Profiling | `.json` | ~130 MB | Runtime, VRAM, and RAM statistics measured on an NVIDIA T4 GPU |
| PyG Features | `.pt.zst` | ~500 MB | PyTorch Geometric graph encodings with node, edge, and graph-level features |
| ONNX Graphs | `.onnx` | ~148 GB | Lightweight ONNX computational graphs (topology only, no trained weights) |
## Architecture Families
| Family | Sub-families | Clusters | Source Configs |
|---|---|---|---|
| attention_decoder | 5 | – | – |
| attention_encoder | 11 | – | – |
| attention_encoder_decoder | 4 | – | – |
| convolutional | 2 | – | – |
| detection | 4 | – | – |
| recurrent | 2 | – | – |
| Total | 28 | 125 | 190 |
## Dataset Structure
Data is organized as one tar.gz archive per source configuration per data layer:
```
NNIRP-dataset/
├── manifests/
│   ├── splits.json                  # Canonical train/val/test split
│   ├── clusters.json                # Cluster taxonomy (ID → cluster → sub-family → family)
│   └── hf_model_type_case_ids.json  # HuggingFace model_type → source config ID
├── profiling/
│   ├── 792.tar.gz                   # Profiling JSONs for source config 792
│   ├── 793.tar.gz
│   └── ...                          # 190 archives
├── pyg-features/
│   ├── 792.tar.gz                   # PyG .pt.zst files for source config 792
│   └── ...                          # 190 archives
└── onnx-graphs/
    ├── 792.tar.gz                   # ONNX .onnx files for source config 792
    └── ...                          # 190 archives
```
Each archive is named by its source configuration ID (integer), the leaf level of the four-level hierarchy (family → sub-family → cluster → source configuration). Extracting an archive yields the sample files directly (flat, no nested directories). The manifests/clusters.json file provides the complete mapping from source configuration IDs to clusters, sub-families, and families.
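Because archives are flat, a single sample can be read straight from the tar stream without extracting everything. A minimal sketch, using a tiny in-memory archive as a stand-in for a real one such as `profiling/792.tar.gz` (the member name and payload here are hypothetical):

```python
import io
import json
import tarfile

# Build a toy tar.gz in memory that mimics the flat archive layout.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    payload = json.dumps({"Runtime (ms)": {"mean": 7.25}}).encode()
    info = tarfile.TarInfo(name="toy-model_b4_fp32.json")  # flat: no directories
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Read one member directly from the stream, no extractall needed.
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()
    prof = json.loads(tar.extractfile(names[0]).read())
print(names, prof["Runtime (ms)"]["mean"])
```

The same pattern applies to a real archive: open it with `tarfile.open(path, "r:gz")` and call `extractfile` on the member you need.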
## Data Splits
The canonical cluster-atomic split ensures no cluster straddles two splits. All validation and test clusters satisfy a bigram coverage threshold (≥0.80) against the training pool.
| Split | Source Configs | Clusters |
|---|---|---|
| Train | 117 | 87 |
| Val | 38 | 16 |
| Test | 35 | 22 |
Split assignments are defined in manifests/splits.json.
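The cluster-atomic property can be checked mechanically once the manifests are loaded. A minimal sketch with hypothetical toy data standing in for `splits.json` and a config-ID → cluster mapping derived from `clusters.json`:

```python
# Toy stand-ins for the real manifests (IDs and cluster names are made up).
splits = {
    "train": [792, 793],
    "val": [900],
    "test": [901],
}
config_to_cluster = {792: "c_a", 793: "c_a", 900: "c_b", 901: "c_c"}

def is_cluster_atomic(splits, config_to_cluster):
    """Return True iff no cluster contributes configs to more than one split."""
    seen = {}  # cluster -> split it was first seen in
    for split_name, config_ids in splits.items():
        for cid in config_ids:
            cluster = config_to_cluster[cid]
            if seen.setdefault(cluster, split_name) != split_name:
                return False  # same cluster appears in two splits
    return True

print(is_cluster_atomic(splits, config_to_cluster))  # True for the toy data
```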
## PyG Feature Schema
Each .pt.zst file is a zstandard-compressed PyTorch Geometric Data object:
| Field | Shape | Description |
|---|---|---|
| `x` | `[N, 14]` | Node features: FLOPs, input/output/weight bytes, rank, dims, counts (log2-transformed) |
| `op_type_id` | `[N]` | Operator type vocabulary index (88 ONNX operators + `<UNK>` at index 0) |
| `edge_index` | `[2, E]` | Directed dataflow edges (COO format) |
| `edge_attr` | `[E, 18]` | Edge features: port indices, tensor shape, rank, bytes, dtype one-hot |
| `u` | `[1, 5]` | Graph-level features: log2(nodes, edges, total FLOPs, total bytes, batch size) |
| `y` | `[1, 1]` | Prediction target: log2(runtime_ms) |
## Profiling JSON Schema
Each .json file contains summary statistics (count, mean, median, variance, min, max) for:
- `Runtime (ms)`: inference latency
- `Peak VRAM (MB)`: GPU memory usage
- `Peak RAM (MB)`: system memory usage
- `Peak Disk Usage (MB)`, `Disk Read (MB)`, `Disk Write (MB)`
All measurements are from an NVIDIA T4 GPU with CUDA, using PyTorch eager-mode inference.
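Concretely, a profiling JSON might look like the record below. The keys follow the schema above; the numeric values are invented for illustration:

```python
import json

# Hypothetical profiling record; key names match the schema, values are made up.
sample = json.loads("""
{
  "Runtime (ms)": {"count": 50, "mean": 7.25, "median": 7.21,
                   "variance": 0.04, "min": 7.01, "max": 7.80},
  "Peak VRAM (MB)": {"count": 50, "mean": 1212.0, "median": 1212.0,
                     "variance": 0.0, "min": 1212.0, "max": 1212.0}
}
""")

# A derived quantity such as the coefficient of variation can flag noisy runs.
stats = sample["Runtime (ms)"]
cv = (stats["variance"] ** 0.5) / stats["mean"]
print(f"runtime CV: {cv:.3f}")
```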
## Loading Examples
### Extract and load PyG features for one source configuration

```python
import io
import tarfile

import torch
import zstandard

# Extract a single source config's PyG features
with tarfile.open("pyg-features/900.tar.gz", "r:gz") as tar:
    tar.extractall("pyg-features/900/")

# Load one sample
def load_pyg_sample(path: str):
    dctx = zstandard.ZstdDecompressor()
    with open(path, "rb") as f:
        raw = dctx.decompress(f.read())
    return torch.load(io.BytesIO(raw), weights_only=False)

data = load_pyg_sample("pyg-features/900/apple--aimv2-large-patch14-224-lit_im224_b4_fp32.pt.zst")
print(data.x.shape)           # [N, 14] node features
print(data.edge_index.shape)  # [2, E] edges
print(data.y)                 # log2(runtime_ms)
```
### Extract and load profiling data

```python
import json
import tarfile

with tarfile.open("profiling/900.tar.gz", "r:gz") as tar:
    tar.extractall("profiling/900/")

with open("profiling/900/apple--aimv2-large-patch14-224-lit_im224_b4_fp32.json") as f:
    prof = json.load(f)

print(f"Runtime: {prof['Runtime (ms)']['mean']:.2f} ms")
print(f"VRAM: {prof['Peak VRAM (MB)']['mean']:.0f} MB")
```
### Load split and taxonomy

```python
import json

with open("manifests/splits.json") as f:
    splits = json.load(f)
train_ids = splits["train"]  # list of source config IDs

with open("manifests/clusters.json") as f:
    clusters = json.load(f)

# Map source config ID → family
for cluster_name, info in clusters["clusters"].items():
    family = clusters["subfamily_to_family"][info["sub_family"]]
    for config_id in info["cases"]:
        print(f"Config {config_id}: {cluster_name} ({family})")
```
### Download a single source configuration via the HF Hub

```python
from huggingface_hub import hf_hub_download

# Download one archive
path = hf_hub_download(
    repo_id="nnirp/NNIRP-dataset",
    filename="pyg-features/900.tar.gz",
    repo_type="dataset",
)
```
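To fetch several archives at once without pulling the full ~148 GB repository, `huggingface_hub.snapshot_download` accepts `allow_patterns`. The pattern-building helper below is a sketch; the config IDs are hypothetical:

```python
def archive_patterns(layer: str, config_ids):
    """Build allow_patterns for one data layer and a list of source config IDs."""
    return [f"{layer}/{cid}.tar.gz" for cid in config_ids]

patterns = archive_patterns("pyg-features", [792, 793, 900])
print(patterns)

# Download only those archives (requires huggingface_hub and network access):
# from huggingface_hub import snapshot_download
# snapshot_download(
#     repo_id="nnirp/NNIRP-dataset",
#     repo_type="dataset",
#     allow_patterns=patterns,
# )
```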
## Data Collection
Data was collected through a three-stage automated pipeline:
1. **ONNX export**: neural network models are exported using a lightweight procedure that captures computational graph topology without trained weights
2. **GPU profiling**: inference runtime, peak VRAM, and peak RAM are measured on an NVIDIA T4 GPU across multiple repetitions
3. **Feature encoding**: ONNX graphs are converted to PyTorch Geometric `Data` objects with structured node, edge, and graph-level features
Parametric source configurations sweep hyperparameters (layer count, hidden dimension, batch size, precision) to generate dense scaling curves. HuggingFace source configurations profile real model checkpoints grouped by transformers model type.
## Limitations
- All profiling was performed on a single GPU type (NVIDIA T4); predictions may not generalize to other hardware without re-profiling
- ONNX export coverage is incomplete for some operators and dynamic control flow patterns
- Runtime measurements reflect PyTorch eager-mode inference; optimized inference engines may show different characteristics
- Parametric source configurations account for ~22% of source configurations but ~89% of samples
## License
CC-BY-NC-SA 4.0
## Citation

```bibtex
@inproceedings{nnirp2026,
  title={How You Split Is What You Get: A Dataset and Evaluation Protocol for Neural Network Inference Runtime Prediction},
  author={Anonymous},
  booktitle={NeurIPS 2026 Evaluations and Datasets Track},
  year={2026}
}
```