---
license: mit
language:
  - en
tags:
  - gpu
  - cuda
  - kernels
  - benchmarks
  - code-generation
  - agents
size_categories:
  - n<1K
pretty_name: KernelBench-Hard Problems
---

# KernelBench-Hard — Problem Definitions

The 7 problem definitions for KernelBench-Hard, a benchmark for autonomous LLM coding agents writing GPU kernels on a single Blackwell GPU (RTX PRO 6000, sm_120, CUDA 13.2).

Companion datasets:

## Problems

| id | task | shapes | regime |
|----|------|-------:|--------|
| `01_fp8_gemm` | FP8 (E4M3) GEMM with bf16 accumulation | 8 | compute-bound |
| `02_kda_cutlass` | Kimi Delta Attention forward (CUTLASS) | 4 | compute-bound |
| `03_paged_attention` | Paged-attention decode (vLLM-style) | 6 | memory-bound |
| `04_kahan_softmax` | Numerically stable softmax with Kahan compensation | 5 | memory-bound |
| `05_topk_bitonic` | Top-k via bitonic select | 6 | memory-bound |
| `06_sonic_moe_swiglu` | Sonic-MoE forward with SwiGLU | 4 | compute-bound |
| `07_w4a16_gemm` | W4A16 weight-only quantized GEMM | 8 | compute-bound |
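As a rough illustration of the compute-/memory-bound split, arithmetic intensity (FLOPs per byte moved) predicts the regime. This sketch uses made-up shapes and a made-up ridge-point threshold, not the benchmark's actual shapes or the sm_120 roofline:

```python
# Rough regime classification by arithmetic intensity (FLOPs per byte).
# Shapes and the ridge-point threshold below are illustrative assumptions,
# not the actual KernelBench-Hard shapes.

def gemm_intensity(m, n, k, bytes_per_elem):
    flops = 2 * m * n * k                                  # one multiply-add per (i, j, l)
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)  # A, B, and C traffic
    return flops / bytes_moved

def regime(intensity, ridge=100.0):
    # The ridge point is roughly peak_flops / peak_bandwidth;
    # 100 FLOPs/byte is a round placeholder, not the sm_120 value.
    return "compute-bound" if intensity > ridge else "memory-bound"

large = gemm_intensity(4096, 4096, 4096, bytes_per_elem=1)  # fp8 square GEMM
tall = gemm_intensity(4096, 1, 4096, bytes_per_elem=2)      # bf16 GEMV (decode-like)
print(regime(large), regime(tall))  # compute-bound memory-bound
```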

## File layout per problem

Each `0X_<name>/` directory contains:

| file | purpose |
|------|---------|
| `reference.py` | The PyTorch reference implementation. The agent must match its output. |
| `check.py` | Correctness harness: compares reference vs. submission with `torch.allclose` tolerances. |
| `benchmark.py` | Timing harness: L2-flush + 30-trial median; prints `shape= variant= tflops= gbps= ms=` lines. |
| `problem.yaml` | Metadata: regime (compute- or memory-bound), tolerance, dtype, shapes. |
| `shapes.py` | Iterable of input shapes the benchmark runs. |
| `sota.py` | Hand-written SOTA reference (when available) for the leaderboard's upper-bound row. |
| `PROMPT.txt` | The exact prompt fed to the agent harness. |
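For downstream tooling, the `shape= variant= tflops= gbps= ms=` lines are easy to parse. This sketch assumes simple space-separated `key=value` tokens; the sample line and its values are made up, and only the key names come from the file layout above:

```python
# Sketch: parse one key=value line of benchmark.py output.
# Assumes space-separated tokens; numeric values become floats,
# everything else stays a string.

def parse_bench_line(line):
    out = {}
    for tok in line.split():
        if "=" not in tok:
            continue
        key, val = tok.split("=", 1)
        try:
            out[key] = float(val)
        except ValueError:
            out[key] = val  # e.g. shape=4096x4096x4096, variant=...
    return out

# Hypothetical example line (not real benchmark output):
line = "shape=4096x4096x4096 variant=cutlass tflops=812.5 gbps=0.0 ms=0.169"
rec = parse_bench_line(line)
print(rec["variant"], rec["tflops"])
```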

## Scoring (`peak_fraction`)

For each (model, problem) cell, we compute `peak_fraction` ∈ [0, 1] as:

```
peak_fraction = geomean over shapes of (achieved_throughput / hardware_peak)
```

where `achieved_throughput` is TFLOPS for compute-bound problems or GB/s for memory-bound ones, and `hardware_peak` is the corresponding sm_120 spec (peak fp8 TFLOPS or peak memory bandwidth). This rewards approaching the hardware ceiling rather than the easier-to-game "speedup over PyTorch."

A solution must first pass `check.py` (correctness) before it receives a `peak_fraction`.
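A minimal sketch of the scoring computation above. The throughput numbers are hypothetical, and the clamp to 1.0 reflects the stated [0, 1] range (an assumption about how over-peak measurements would be handled):

```python
import math

def peak_fraction(achieved, hardware_peak):
    # Geometric mean over shapes of achieved/peak throughput.
    # `achieved` holds TFLOPS per shape (compute-bound) or GB/s (memory-bound).
    ratios = [min(a / hardware_peak, 1.0) for a in achieved]  # clamp into [0, 1]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical memory-bound cell: GB/s per shape vs. a ~1800 GB/s peak.
print(round(peak_fraction([1500.0, 1200.0, 1650.0], 1800.0), 3))  # → 0.799
```

The geometric mean (rather than an arithmetic one) keeps a single easy shape from masking a collapse on the others.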

## Hardware

- GPU: NVIDIA RTX PRO 6000 Blackwell Workstation
- SM: sm_120a (Blackwell)
- VRAM: 96 GB GDDR7
- Peak memory bandwidth: ~1800 GB/s
- CUDA: 13.2 / NVCC 12.8 / Driver 595.58.03
- Host: Ryzen 9 9950X3D, 92 GB DDR5

## Rubric leaks (known issues)

Two of the seven problems leak the rubric: the easiest path to a high score involves something other than writing a fast, correct kernel. We publish anyway, with the issues documented inline in `benchmarks/hard/SPEC.md` and per-cell in the runs dataset's annotation files:

- `01_fp8_gemm`: agents that downcast to bf16 and use tensor cores reach ~80–90% of peak without doing the actual fp8 quantization. The judge model catches some of these but not all; see annotations with `verdict: rubric_leak`.
- `04_kahan_softmax`: agents that skip the Kahan compensation still pass `check.py`'s atol within float32 limits and get a free ~2× speedup. The numerical instability only shows up at extreme magnitudes that the test inputs don't probe.

These are documented honestly because (a) we want the community to fix them, and (b) the rubric leaks themselves are interesting reward-hacking examples.
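For context, the compensation step that leaky `04_kahan_softmax` solutions skip looks like this in scalar form. This is a generic Kahan-summation sketch, not the benchmark's kernel; the demo inputs are chosen so the effect is visible even in float64:

```python
def kahan_sum(xs):
    # Compensated (Kahan) summation: `c` carries the low-order bits
    # that a plain running sum would round away.
    total, c = 0.0, 0.0
    for x in xs:
        y = x - c             # apply the correction from the previous step
        t = total + y
        c = (t - total) - y   # recover what this addition just lost
        total = t
    return total

# Ten tiny terms that a naive float64 sum drops entirely:
vals = [1.0] + [1e-16] * 10
print(sum(vals))        # 1.0 -- every 1e-16 is rounded away
print(kahan_sum(vals))  # ~1.000000000000001 -- the small terms survive
```

As the leak description notes, on benign inputs both paths agree to within tolerance; the difference only appears when small contributions must survive next to a large running total.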

## How to use

```python
from datasets import load_dataset

ds = load_dataset("Infatoshi/kernelbench-hard-problems")

# Or just clone the repo:
# git clone https://huggingface.co/datasets/Infatoshi/kernelbench-hard-problems
```

To run a problem locally with your own kernel:

```bash
cd 01_fp8_gemm
# Drop your solution at solution.py
uv run python check.py        # verifies correctness
uv run python benchmark.py    # measures throughput
```

## License

MIT. Cite as:

```bibtex
@misc{kernelbench-hard-2026,
  author = {Arledge, Elliot},
  title = {KernelBench-Hard: A GPU Kernel Engineering Benchmark for Autonomous Coding Agents},
  year = {2026},
  url = {https://kernelbench.com/hard},
  note = {Built on top of KernelBench (Ouyang et al., 2025).}
}
```

Original KernelBench: Ouyang et al., https://github.com/ScalingIntelligence/KernelBench