iRULER: Intelligible Rubric-Based User-Defined LLM Evaluation for Revision
Large Language Models (LLMs) have become indispensable for evaluating writing. However, the text feedback they provide is often unintelligible, generic, and not specific to user criteria. Inspired by structured rubrics in education and by intelligible AI explanations, we propose iRULER, which follows identified design guidelines to scaffold the review process around specific criteria, provide justifications for score selections, and offer actionable revisions targeting different quality levels. To improve the quality of user-defined criteria, we applied iRULER recursively, using a rubric-of-rubrics to iteratively refine rubrics. In controlled experiments on writing revision and rubric creation, iRULER yielded the largest improvements in validated LLM-judged review scores and was perceived as the most helpful and best aligned, compared with a read-only rubric and text-based LLM feedback. Qualitative findings further illustrate how iRULER satisfies the design guidelines for user-defined feedback. This work contributes interactive rubric tools for intelligible LLM-based review and revision of writing, as well as for user-defined rubric creation.
