Improve dataset card

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +25 -2
README.md CHANGED
@@ -1,5 +1,28 @@
- Reproduce for CVRP 2026 paper Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos
-
  ---
  license: mit
+ task_categories:
+ - video-text-to-text
  ---
+
+ # Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos
+
+ [**Project Page**](https://ego2web.github.io/) | [**Paper**](https://huggingface.co/papers/2603.22529) | [**Code**](https://github.com/Yui010206/Ego2Web)
+
+ Ego2Web is the first benchmark designed to bridge egocentric video perception and web agent execution. It pairs real-world first-person video recordings with web tasks whose completion requires visual understanding, web task planning, and interaction in a live online environment.
+
+ The benchmark connects real-world human activities with web-based tasks, enabling research at the intersection of embodied perception and web interaction across diverse domains such as e-commerce, media retrieval, and knowledge lookup.
+
+ ## Citation
+
+ If you find this work useful, please cite:
+
+ ```bibtex
+ @inproceedings{yu2026ego2web,
+   title={Ego2Web: Benchmarking Web Agents with Egocentric Video Grounding},
+   author={Yu, Shoubin and Shu, Lei and Yang, Antoine and Fu, Yao and
+           Sunkara, Srinivas and Wang, Maria and Chen, Jindong and
+           Bansal, Mohit and Gong, Boqing},
+   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+   year={2026}
+ }
+ ```
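The metadata half of this change adds a `task_categories` field to the card's YAML front matter, the block between the two leading `---` fences that the Hub indexes for search and filtering. A minimal stdlib-only sketch of extracting that block, using the "+" side of the diff as input (the parsing helper here is illustrative, not part of any Hub library):

```python
# Extract the YAML front-matter block from a dataset card.
# `card` mirrors the new README content from this PR.
card = """---
license: mit
task_categories:
- video-text-to-text
---

# Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos
"""

def front_matter(text: str) -> list[str]:
    """Return the raw lines between the leading '---' fences."""
    lines = text.splitlines()
    if not lines or lines[0] != "---":
        raise ValueError("card must start with a front-matter fence")
    end = lines.index("---", 1)  # closing fence
    return lines[1:end]

print(front_matter(card))
# → ['license: mit', 'task_categories:', '- video-text-to-text']
```

A full card parser would hand this block to a YAML loader; the point here is only that the fenced metadata is plain text at the top of README.md, which is why the PR can add `task_categories` as two ordinary diff lines.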