SocialOmni: Benchmarking Audio-Visual Social Interactivity in Omni Models
Paper | GitHub | Project Page
SocialOmni is a comprehensive benchmark designed to evaluate the audio-visual social interactivity of Omni-modal Large Language Models (OLMs). Instead of focusing solely on static accuracy, SocialOmni measures whether a model can behave appropriately in real dialogues by evaluating three tightly coupled dimensions:
- Who is speaking: speaker separation and identification.
- When to enter: interruption timing control.
- How to respond: natural interruption generation.
The benchmark features 2,000 perception samples and a quality-controlled diagnostic set of 209 interaction-generation instances with strict temporal and contextual constraints, complemented by controlled audio-visual inconsistency scenarios to test model robustness.
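A minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id, configuration names (`perception`, `interaction`), and split names are placeholders, not the official ones; check the repository files for the actual layout.

```python
from datasets import load_dataset

# Placeholder repo id and config names -- adapt to the actual dataset repository.
REPO_ID = "ORG/SocialOmni"

perception = load_dataset(REPO_ID, name="perception", split="test")
interaction = load_dataset(REPO_ID, name="interaction", split="test")

print(len(perception))   # expected on the order of 2,000 perception samples
print(len(interaction))  # expected 209 interaction-generation instances
print(perception[0])     # inspect the fields of one record
```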
Tasks
Task I: Perception (who)
Given a video clip and a timestamp t, the model identifies the active speaker from a set of candidates (e.g., A, B, C, or D).
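The sketch below illustrates one way to turn a perception sample into a multiple-choice prompt and score the answer by exact match. The field names (`timestamp`, `candidates`, `answer`) are assumptions for illustration; the released schema may differ.

```python
def build_perception_prompt(sample: dict) -> str:
    # Format a Task I query: identify the active speaker at timestamp t.
    options = ", ".join(sample["candidates"])  # e.g. "A, B, C, D"
    return (
        f"Watch the clip and focus on t = {sample['timestamp']:.1f}s.\n"
        f"Which candidate is the active speaker at that moment? Options: {options}.\n"
        "Answer with a single letter."
    )

def score_perception(prediction: str, sample: dict) -> bool:
    # Exact-match accuracy on the chosen candidate letter.
    return prediction.strip().upper().startswith(sample["answer"].upper())
```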
Task II: Interaction Generation (when + how)
Given a video prefix and a candidate speaker, the model performs two sub-tasks:
- Q1 (when): Decide whether the speaker should interrupt immediately.
- Q2 (how): If an interruption is appropriate, generate natural and contextually coherent content for that interruption.
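A rough sketch of the two-step protocol follows. The `model.generate(prompt, video=...)` call stands in for whatever interface your omni-modal model exposes, and the field names (`candidate_speaker`, `video_prefix`) are illustrative assumptions rather than the official schema.

```python
def run_interaction_turn(model, sample: dict) -> dict:
    # Q1 (when): ask whether the candidate speaker should interrupt right now.
    q1_prompt = (
        f"You are speaker {sample['candidate_speaker']}. Given the conversation so far, "
        "should you interrupt right now? Answer yes or no."
    )
    decision = model.generate(q1_prompt, video=sample["video_prefix"]).strip().lower()

    result = {"interrupt": decision.startswith("yes"), "utterance": None}
    if result["interrupt"]:
        # Q2 (how): only generate interruption content when Q1 says to interrupt.
        q2_prompt = (
            "You decided to interrupt. Produce the exact utterance you would say, "
            "keeping it natural and coherent with the ongoing dialogue."
        )
        result["utterance"] = model.generate(q2_prompt, video=sample["video_prefix"])
    return result
```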
Citation
@article{xie2026socialomni,
  title={SocialOmni: Benchmarking Audio-Visual Social Interactivity in Omni Models},
  author={Tianyu Xie and Jinfa Huang and Yuexiao Ma and Rongfang Luo and Yan Yang and Wang Chen and Yuhui Zeng and Ruize Fang and Yixuan Zou and Xiawu Zheng and Jiebo Luo and Rongrong Ji},
  journal={arXiv preprint arXiv:2603.16859},
  year={2026}
}