Cataract-LMM: Large-Scale Multi-Source Multi-Task Benchmark for Surgical AI
Welcome to the official repository for the Cataract-LMM dataset. Hosted on Hugging Face, this dataset represents a comprehensive, clinically representative benchmark designed to accelerate deep learning research in surgical video analysis. By bridging the gap between isolated, single-task datasets and the complex reality of surgical environments, Cataract-LMM provides the robust data necessary to train generalizable, multi-task Computer-Assisted Surgery (CAS) systems.
Table of Contents
- Publication Details
- Dataset Overview
- The Five Data Subsets
- Global Directory Structure
- Naming Nomenclature & Traceability
- Versioning & Updates
- Citation & Academic Request
- Contact & Connect
Publication Details
This dataset is the foundation of the following research paper. If you find this repository useful, please consider reading and citing our work:
Cataract-LMM: Large-Scale Multi-Source Multi-Task Benchmark for Deep Learning in Surgical Video Analysis
Authors:
Mohammad Javad Ahmadi¹, Iman Gandomi¹, Parisa Abdi², Seyed-Farzad Mohammadi², Amirhossein Taslimi¹, Mehdi Khodaparast², Hassan Hashemi³, Mahdi Tavakoli⁴, Hamid D. Taghirad¹
Affiliations:
¹ Applied Robotics and AI Solutions (ARAS), Faculties of Electrical and Computer Engineering, K.N. Toosi University of Technology, Tehran, Iran
² Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
³ Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
⁴ Departments of Electrical and Computer Engineering & Biomedical Engineering, University of Alberta, Edmonton, AB, Canada
Dataset Overview
Cataract-LMM provides an unprecedented scale and depth of annotation for phacoemulsification cataract surgery, enabling researchers to tackle real-world clinical variations.
- Massive Scale: 3,000 complete surgical procedures encompassing 1,134.2 hours of continuous footage.
- Multi-Source Heterogeneity (Domain Shift): Prospectively collected from two distinct clinical centers, ensuring rigorous hardware and procedural diversity:
- Center S1 (Farabi Eye Hospital): 2,930 procedures acquired via a Haag-Streit HS Hi-R NEO 900 microscope (720×480 resolution @ 30 fps).
- Center S2 (Noor Eye Hospital): 70 procedures acquired via a ZEISS ARTEVO 800 digital microscope (1920×1080 resolution @ 60 fps).
- Procedural Diversity: Captures stochastically varied workflows, unscripted intra-operative events, and a broad spectrum of surgical proficiency ranging from novice residents to expert attendings.
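Because the two centers record at different frame rates (30 fps at S1, 60 fps at S2), cross-center pipelines often resample one stream to match the other before training. A minimal sketch of the index selection involved, in plain Python with no dataset-specific API assumed:

```python
def resample_indices(n_frames: int, src_fps: float, dst_fps: float) -> list[int]:
    """Indices of source frames to keep when resampling src_fps -> dst_fps."""
    if dst_fps >= src_fps:
        return list(range(n_frames))  # never upsample; keep everything
    step = src_fps / dst_fps
    indices, i = [], 0.0
    while round(i) < n_frames:
        indices.append(round(i))
        i += step
    return indices

# Mapping a 60 fps S2 clip onto the 30 fps S1 rate keeps every 2nd frame.
print(resample_indices(10, 60, 30))  # [0, 2, 4, 6, 8]
```

The same helper works for non-integer ratios (e.g., 60 → 25 fps), at the cost of slightly uneven temporal spacing.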
The Five Data Subsets
To facilitate modular access and targeted research, the dataset is stratified into five primary sub-repositories. Each directory is enriched with distinct, complementary layers of annotation.
Usage Tip: Each subdirectory contains its own dedicated `README.md` file detailing exact data formats and extraction guidelines.
1️⃣ Phase Recognition
- Scope: 150 full procedures.
- Annotations: Frame-wise temporal boundaries for 13 distinct surgical phases (e.g., Incision, Phacoemulsification, Idle).
- Use Cases: Automated surgical workflow analysis, real-time causal inference, and procedural summarization.
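Frame-wise phase labels of this kind are commonly collapsed into (phase, start, end) segments for workflow analysis. A minimal, format-agnostic sketch; the label strings below are illustrative, not the dataset's exact annotation schema:

```python
from itertools import groupby

def labels_to_segments(frame_labels):
    """Collapse a per-frame phase sequence into (phase, start, end) runs (end exclusive)."""
    segments, start = [], 0
    for phase, group in groupby(frame_labels):
        length = sum(1 for _ in group)  # number of consecutive frames with this label
        segments.append((phase, start, start + length))
        start += length
    return segments

print(labels_to_segments(["Incision"] * 3 + ["Phacoemulsification"] * 2))
# [('Incision', 0, 3), ('Phacoemulsification', 3, 5)]
```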
2️⃣ Instance Segmentation
- Scope: 6,094 precisely annotated frames sampled across all phases.
- Annotations: Pixel-level polygon masks for 12 classes (2 anatomical structures, 10 specialized surgical instruments) provided in both COCO and YOLO formats.
- Use Cases: Detailed scene parsing, multi-class instrument recognition, and cross-center domain adaptation benchmarking.
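Since the masks ship in the standard COCO format, they can be indexed with nothing beyond the JSON module. The sketch below uses a tiny in-memory payload with hypothetical file and class names; only the field names follow the standard COCO schema:

```python
import json
from collections import defaultdict

# Stand-in for one of the dataset's COCO annotation files (names are illustrative).
coco = json.loads("""
{
  "images": [{"id": 1, "file_name": "SE_0001_S1_frame.png"}],
  "categories": [{"id": 1, "name": "cornea"}, {"id": 2, "name": "phaco_handpiece"}],
  "annotations": [{"id": 10, "image_id": 1, "category_id": 2,
                   "segmentation": [[0, 0, 10, 0, 10, 10]]}]
}
""")

# Index category names by id, then group annotation classes per image.
cat_names = {c["id"]: c["name"] for c in coco["categories"]}
by_image = defaultdict(list)
for ann in coco["annotations"]:
    by_image[ann["image_id"]].append(cat_names[ann["category_id"]])

print(by_image[1])  # ['phaco_handpiece']
```

For real workloads, `pycocotools.coco.COCO` offers the same indexing plus mask decoding.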
3️⃣ Object Tracking
- Scope: 170 continuous video clips of the Capsulorhexis phase (469,118 densely annotated frames).
- Annotations: Spatiotemporal tracking featuring instance masks, persistent tracking IDs, bounding boxes, and functional keypoints (instrument tips, centroids).
- Use Cases: Surgical instrument tracking (SOT/MOTS), kinematic analysis, and derivation of objective motion economy metrics.
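Motion economy metrics of the kind mentioned above can be derived directly from the per-frame keypoints. A minimal sketch, assuming instrument-tip or centroid positions are available as (x, y) tuples per frame:

```python
import math

def path_length(points):
    """Total distance travelled by an instrument keypoint across consecutive frames."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def motion_economy(points):
    """Ratio of straight-line displacement to total path length (1.0 = no wasted motion)."""
    total = path_length(points)
    return math.dist(points[0], points[-1]) / total if total else 1.0

track = [(0, 0), (3, 4), (6, 8)]       # a perfectly straight three-frame track
print(path_length(track))              # 10.0
print(motion_economy(track))           # 1.0
```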
4️⃣ Skill Assessment
- Scope: The exact same 170 capsulorhexis video clips utilized in the Object Tracking subset.
- Annotations: Objective surgical skill scores evaluated on a 5-point continuous scale across 6 performance indicators (adapted from GRASIS/ICO-OSCAR), adjudicated by expert surgeons.
- Use Cases: Automated surgical evaluation, continuous skill regression, and linking geometric motion tracking to competency ratings.
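For continuous skill regression, the 5-point rubric scores are typically rescaled to a unit interval before being used as training targets. A minimal sketch; the six scores below are made up for illustration and do not correspond to any real clip:

```python
def normalize_scores(scores, lo=1.0, hi=5.0):
    """Map 5-point rubric scores onto [0, 1] regression targets."""
    return [(s - lo) / (hi - lo) for s in scores]

# Hypothetical six-indicator rating for one capsulorhexis clip.
print(normalize_scores([3, 4, 5, 2, 4, 3]))  # [0.5, 0.75, 1.0, 0.25, 0.75, 0.5]
```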
5️⃣ Raw Videos
- Scope: The complete corpus of 3,000 entirely de-identified, unannotated surgical recordings.
- Use Cases: Self-Supervised Learning (SSL), Vision-Language Pre-training (VLP) via retrieval-augmented frameworks, and training controllable Generative AI models.
Global Directory Structure
The repository is organized to maximize download stability and logical separation of tasks. Below is the high-level architecture:
Cataract-LMM (Root)
├── README.md → This global documentation file
├── 1_Phase_Recognition/ → Workflow and temporal phase annotations & clips
├── 2_Instance_Segmentation/ → COCO/YOLO masks and extracted static frames
├── 3_Object_Tracking/ → Continuous multi-layered tracking geometries
├── 4_Skill_Assessment/ → Expert-adjudicated clinical proficiency rubrics
└── 5_Raw_Videos/ → The massive 3,000-procedure unannotated corpus
Naming Nomenclature & Traceability
To ensure flawless traceability across the multi-task subsets and the raw data pool, all files adhere to a strict, standardized naming convention (e.g., TR_0001_S1_P03 or RV_2253_S1):
- Task Prefix: Indicates the subset (`PH` = Phase, `SE` = Segmentation, `TR` = Tracking, `SK` = Skill, `RV` = Raw Video).
- Global ID / Subset Index: A unique identifier mapping the annotated subset directly back to the original raw video in the 3,000-procedure corpus.
- Clinical Source (`S1` / `S2`): Indicates the origin of the acquisition, providing crucial metadata for domain adaptation and generalization benchmarking.
- Procedural Phase (`P03`): Where applicable (e.g., tracking/skill clips), denotes the specific isolated surgical phase (Capsulorhexis).
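The convention above can be parsed mechanically. A small sketch using a regex inferred from the two examples (`TR_0001_S1_P03`, `RV_2253_S1`); the four-digit ID width and the optional phase suffix are assumptions drawn from those examples, not a guaranteed spec:

```python
import re

# Pattern inferred from the naming convention; the phase suffix (e.g. _P03) is optional.
PATTERN = re.compile(
    r"^(?P<task>PH|SE|TR|SK|RV)_(?P<id>\d{4})_(?P<source>S[12])(?:_(?P<phase>P\d{2}))?$"
)

def parse_name(name: str) -> dict:
    """Split a Cataract-LMM identifier into its traceability fields."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"Unrecognized Cataract-LMM name: {name!r}")
    return m.groupdict()

print(parse_name("TR_0001_S1_P03"))
# {'task': 'TR', 'id': '0001', 'source': 'S1', 'phase': 'P03'}
print(parse_name("RV_2253_S1"))
# {'task': 'RV', 'id': '2253', 'source': 'S1', 'phase': None}
```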
Versioning & Updates
The Cataract-LMM dataset is a dynamically maintained benchmark. As new annotations, baseline models, or structural improvements are integrated, we will release updated versions.
To ensure you are working with the most current and robust data:
- Check the Version History: Navigate to the History / Commits Tab of this repository to view the latest updates and version tags.
- Stay Connected: Feel free to reach out to the author (contact info below) to inquire about upcoming updates, report potential dataset anomalies, or discuss integrating new annotation layers.
Citation & Academic Request
The Cataract-LMM dataset is open-access and released under the CC-BY 4.0 license.
Our manuscript detailing the comprehensive methodology, algorithmic baselines, and technical validations of this dataset has been submitted to Nature Scientific Data. While the preprint is available for immediate reference on arXiv (arXiv:2510.16371), we kindly request that any publications, derivative works, or systems utilizing this dataset direct their citations to the final peer-reviewed journal version once it is officially published.
Contact & Connect
Mohammad Javad Ahmadi
I welcome collaborations, technical inquiries regarding the dataset, and discussions on advancing AI in medical applications. Feel free to connect with me through any of the channels below:
- Academic Email: mjahmadi@email.kntu.ac.ir
- Personal Email: mjahmadee@gmail.com