NEW LTX-2.3

Model Downloads:

I have also separated the LTX-2.3 workflows into another location: https://huggingface.co/RuneXX/LTX-2.3-Workflows (I will keep updating both places for now; having two locations may not be necessary, since the changes are not that dramatic.)


22nd February 2026: Model loading change

See more here: https://huggingface.co/Kijai/LTXV2_comfy

Update: ComfyUI made a temporary "fix" so that the old workflows still work for now. I am in the process of updating my workflows to reflect the coming changes.


The workflows are based on the extracted models from https://huggingface.co/Kijai/LTXV2_comfy. The extracted models (separate files) may run more easily on your machine and add GGUF support, among other things. You can also easily swap the model loader for the default ComfyUI model loader if you want to load an "all in one" checkpoint with the VAE built in.

Needed nodes:

(Video made with LTX-2; credit to https://www.reddit.com/user/fantazart/) https://www.reddit.com/r/StableDiffusion/comments/1qeovkh/ltx2_cinematic_love_letter_to_opensource_community/


A general guide: https://docs.ltx.video/open-source-model/integration-tools/comfy-ui

More workflows:

ComfyUI official workflows: https://docs.comfy.org/tutorials/video/ltx/ltx-2

LTX-Video official workflows: https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows

Some really nice clean workflows here: https://comfyui.nomadoor.net/en/basic-workflows/ltx-2/

RunComfy (can download workflow to use locally):

LTX-2 ControlNet (pose, depth, etc.): https://www.runcomfy.com/comfyui-workflows/ltx-2-controlnet-in-comfyui-depth-controlled-video-workflow

LTX-2 First/Last Frame: https://www.runcomfy.com/comfyui-workflows/ltx-2-first-last-frame-in-comfyui-audio-visual-motion-control
