ClawBench: Can AI Agents Complete Everyday Online Tasks? Paper • 2604.08523 • Published 7 days ago • 253
Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents Paper • 2604.06132 • Published 9 days ago • 114
FORGE: Fine-grained Multimodal Evaluation for Manufacturing Scenarios Paper • 2604.07413 • Published 8 days ago • 89
GBQA: A Game Benchmark for Evaluating LLMs as Quality Assurance Engineers Paper • 2604.02648 • Published 13 days ago • 45
KnowU-Bench: Towards Interactive, Proactive, and Personalized Mobile Agent Evaluation Paper • 2604.08455 • Published 7 days ago • 43
ClawArena: Benchmarking AI Agents in Evolving Information Environments Paper • 2604.04202 • Published 11 days ago • 36
ClawsBench: Evaluating Capability and Safety of LLM Productivity Agents in Simulated Workspaces Paper • 2604.05172 • Published 10 days ago • 24
RubricBench: Aligning Model-Generated Rubrics with Human Standards Paper • 2603.01562 • Published Mar 2 • 63