LTX 2.3 I2V-T2V Basic ID-Lora Workflow with reference audio By RuneXX
Running Windows. I think it has something to do with that. Anyone running Windows who has this working?
I'm on Windows too ;-) will check if anything has changed or something broke
Are you running portable or desktop?
I got it working. I had to replace all the files, not just nodes_lt.py
Ah, you're on the desktop version of ComfyUI then, I guess.
On the desktop version updates roll out a bit slower, but the ID-Lora update will come eventually ;-)
Hey RuneXX, do you know, is this ID-LoRA for audio only or is it both audio and visual ID?
Great workflows RuneXX, thanks a lot. Which of your workflows do you actually find better for this purpose? LTX-2.3_-I2V_T2V_Basic_ID-Lora_reference_audio.json, LTX-2.3-I2V_T2V_Talking_Avatar(voice_clone_with_Fish-Audio-Pro).json or LTX-2.3_-I2V_T2V_Talking_Avatar(voice_clone_with_Qwen-TTS).json?
I have tried the ID-Lora and Fish Audio. I find that ID-Lora creates better cohesion with better facial motor skills. Fish Audio has slightly better voice quality and is less robotic, but the motor skills are significantly worse.
By the way, do you know a way to improve mouth and teeth quality?
Hey RuneXX, do you know, is this ID-LoRA for audio only or is it both audio and visual ID?
It's an audio-only LoRA, to make a consistent voice across multiple videos
By the way, do you know a way to improve mouth and teeth quality?
You could try more steps, or a higher resolution.
And yes, the ID-LoRA is trained on expressions (face and voice), so it might be more expressive ... true
But you could probably make up for that with Fish Audio and the prompt. With prompts like "she talks with an expressive face and gesticulates as she talks with an emotional tone..." or something else that suits your input ;-) LTX loves a good prompt for how to act, and the sequence of acting.
