arXiv:2602.06161

Stop the Flip-Flop: Context-Preserving Verification for Fast Revocable Diffusion Decoding

Published on Feb 5 · Submitted by Yanzheng Xiang on Feb 11
Abstract

AI-generated summary

COVER enables efficient parallel decoding for diffusion language models via cache-override verification, which reduces unnecessary revisions and maintains output quality through stable drafting and attention-view construction.

Parallel diffusion decoding can accelerate diffusion language model inference by unmasking multiple tokens per step, but aggressive parallelism often harms quality. Revocable decoding mitigates this by rechecking earlier tokens, yet we observe that existing verification schemes frequently trigger flip-flop oscillations, where tokens are remasked and later restored unchanged. This behaviour slows inference in two ways: remasking verified positions weakens the conditioning context for parallel drafting, and repeated remask cycles consume the revision budget with little net progress. We propose COVER (Cache Override Verification for Efficient Revision), which performs leave-one-out verification and stable drafting within a single forward pass. COVER constructs two attention views via KV cache override: selected seeds are masked for verification, while their cached key/value states are injected for all other queries to preserve contextual information, with a closed-form diagonal correction preventing self-leakage at the seed positions. COVER further prioritises seeds using a stability-aware score that balances uncertainty, downstream influence, and cache drift, and it adapts the number of verified seeds per step. Across benchmarks, COVER markedly reduces unnecessary revisions and yields faster decoding while preserving output quality.
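A minimal single-head sketch of the cache-override idea described above. The function name, tensor layout, and the hard diagonal masking are illustrative assumptions, not the paper's implementation (the paper uses a closed-form diagonal correction rather than masking):

```python
import torch
import torch.nn.functional as F

def cover_attention_sketch(q, k_live, v_live, k_cache, v_cache, seed_mask):
    """Hypothetical single-head, bidirectional attention with KV-cache override.

    q, k_live, v_live : (T, d) states from the current forward pass, in which
                        the selected seed positions were fed as [MASK].
    k_cache, v_cache  : (T, d) key/value states cached when the seeds were
                        originally unmasked.
    seed_mask         : (T,) bool, True at the seed positions being verified.
    """
    d = q.size(-1)

    # Override: all *other* queries should still see the seeds' original
    # content, so inject the cached K/V at seed positions.
    k_mix = torch.where(seed_mask.unsqueeze(-1), k_cache, k_live)
    v_mix = torch.where(seed_mask.unsqueeze(-1), v_cache, v_live)

    scores = q @ k_mix.transpose(-1, -2) / d ** 0.5  # (T, T)

    # Self-leakage guard: a seed's own query must not attend to its injected
    # cached state, or the leave-one-out check would see the very token it is
    # verifying. Masking the diagonal here is a crude stand-in for the paper's
    # closed-form diagonal correction.
    diag = torch.eye(seed_mask.numel(), dtype=torch.bool,
                     device=seed_mask.device) & seed_mask
    scores = scores.masked_fill(diag, float("-inf"))

    return F.softmax(scores, dim=-1) @ v_mix
```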

Community

Paper author · Paper submitter

We found a silly failure mode in Parallel Revocable Diffusion Decoding: flip-flop. A token gets ReMask’ed… then comes back unchanged. In the existing approach, fewer than 1% of ReMasks actually change the token (≈99% are wasted).
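A minimal way to quantify this wasted work (a hypothetical helper, not code from the paper): log each ReMask event and check whether the re-decoded token differs from the one that was erased.

```python
def flip_flop_rate(remask_events):
    """remask_events: list of (token_before_remask, token_after_redecode)
    pairs logged during decoding. A flip-flop is a ReMask whose token comes
    back unchanged, i.e. a wasted revision."""
    if not remask_events:
        return 0.0
    wasted = sum(before == after for before, after in remask_events)
    return wasted / len(remask_events)
```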


We propose COVER, which verifies without nuking context: mask the seeds for a leave-one-out check, but inject their cached K,V for everyone else. A simple diagonal correction removes self-leakage. Result: fewer useless revisions + faster parallel drafting.
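The abstract also mentions prioritising which seeds to verify with a stability-aware score over uncertainty, downstream influence, and cache drift. Below is a hypothetical sketch of such a score using common proxies for each term; the exact quantities and weights in the paper may differ:

```python
import torch

def seed_priority_sketch(probs, attn_received, k_cache, k_live,
                         w_unc=1.0, w_inf=1.0, w_drift=1.0):
    """Hypothetical per-position priority for choosing verification seeds.

    probs          : (T, V) current predictive distribution per position.
    attn_received  : (T,)   attention mass other queries place on each
                            position (a proxy for downstream influence).
    k_cache, k_live: (T, d) cached vs. freshly recomputed key states
                            (their gap is a proxy for cache drift).
    Returns a (T,) score; higher means verify sooner.
    """
    uncertainty = -(probs * probs.clamp_min(1e-9).log()).sum(-1)  # entropy
    drift = (k_cache - k_live).norm(dim=-1)
    return w_unc * uncertainty + w_inf * attn_received + w_drift * drift
```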

