- mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
  Viewer • Updated • 21.9M • 73.1k • 63
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-85M
  Viewer • Updated • 91.5M • 325k • 55
- lmms-lab/LLaVA-OneVision-1.5-8B-Instruct
  Image-Text-to-Text • 9B • Updated • 42.4k • 61
- lmms-lab/LLaVA-OneVision-1.5-4B-Instruct
  Image-Text-to-Text • 5B • Updated • 2.18k • 16
AI & ML interests
multi-modal foundation models
Datasets (6)
- mvp-lab/LLaVA-OneVision-1.5-RL-Data
  Viewer • Updated • 69.2k • 342 • 5
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-85M
  Viewer • Updated • 91.5M • 325k • 55
- mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
  Viewer • Updated • 21.9M • 73.1k • 63
- mvp-lab/LLaVA-558K-Webdataset
  Updated • 482 • 3
- mvp-lab/LLaVA-NeXT-780k-webdataset
  Updated • 1.73k
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-Webdataset-Quick-Start-3M
  Updated • 4.04k • 1