Switch to Wan 2.2 A14B + obsxrver LightX2V distill LoRAs (high/low per transformer) 0828ed4 daKhosa committed 11 days ago
Init pipeline inside run_inference if pipe is None - ZeroGPU workers don't share globals 1b01b22 daKhosa committed 12 days ago
Fix AttributeError in WanIPAttnProcessor - use getattr for norm_q and inner_dim eb5abef daKhosa committed 12 days ago
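The getattr fix follows the usual pattern for processors that must handle attention modules with and without optional attributes; a minimal sketch with toy classes (`OldAttn`/`NewAttn` and the fallback value are illustrative, not from the repo):

```python
# Some attention modules expose norm_q / inner_dim, others don't; reading
# them with getattr and a default avoids the AttributeError either way.
class OldAttn:                      # toy module lacking both attributes
    pass

class NewAttn:                      # toy module exposing both
    inner_dim = 256
    @staticmethod
    def norm_q(q):
        return q * 2

def apply_norm_q(attn, q):
    norm = getattr(attn, "norm_q", None)
    return norm(q) if norm is not None else q   # identity when absent

def get_inner_dim(attn, fallback=128):
    return getattr(attn, "inner_dim", fallback)
```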
Remove separate face reference input - input image doubles as face reference 8fa2834 daKhosa committed 12 days ago
Clean up stale /data/hf_home on startup to recover from 50G storage eviction 13929cc daKhosa committed 12 days ago
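The startup cleanup reduces to deleting a leftover directory if it exists; a sketch (the helper name is hypothetical, the `/data/hf_home` path comes from the commit message):

```python
import os
import shutil

def clean_stale_cache(path="/data/hf_home"):
    """Delete a leftover cache directory so the Space stays under its
    persistent-storage quota; a no-op when the path is absent."""
    if os.path.isdir(path):
        shutil.rmtree(path, ignore_errors=True)
    return not os.path.exists(path)
```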
Revert HF_HOME redirect - was causing 50G storage eviction (Wan model is in base image, not /data/) fd8d035 daKhosa committed 12 days ago
Redirect HF cache to persistent /data/hf_home to avoid re-downloading 30GB+ model on every cold start 647debc daKhosa committed 12 days ago
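The redirect this commit introduced (reverted in the entry above) amounts to pointing the Hugging Face cache env var at persistent storage; a sketch:

```python
import os
# Must run before importing huggingface_hub / diffusers, which read HF_HOME
# at import time; /data is the Space's persistent disk.
os.environ["HF_HOME"] = "/data/hf_home"
```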
Add identity-preserving BLINK LoRAs (NSFWcode, lopi999, Outfit), fix -H-/-L- pairing 7065e43 verified Daankular committed 15 days ago
Add identity-preserving BLINK LoRAs (NSFWcode, lopi999, Outfit), fix -H-/-L- pairing 8b38980 verified Daankular committed 15 days ago
Lower LoRA scale default 0.8->0.6, LightX2V low-noise fuse 1.0->0.7 for better identity preservation 3b4d0fe verified Daankular committed 15 days ago
Re-enable int8 T5 quant (frees VRAM), remove VAE slicing for faster decode 16d3728 verified Daankular committed 15 days ago
Enable VAE tiling+slicing to fix OOM during decode (T5 in bfloat16 uses more VRAM) 0a402b9 verified Daankular committed 15 days ago
Switch to plain diffusers (load_into_transformer_2 merged upstream), drop fork + conversion patch 3d3c33d verified Daankular committed 15 days ago
Switch to plain diffusers (load_into_transformer_2 merged upstream), drop fork + conversion patch d5ce530 verified Daankular committed 15 days ago
Patch _convert_non_diffusers_wan_lora_to_diffusers: make head keys optional bc79507 verified Daankular committed 15 days ago
Add SageAttn + LightX2V distill LoRA + remove text enc quant + flow_shift=6 11fda45 verified Daankular committed 15 days ago
Add SageAttn + LightX2V distill LoRA + remove text enc quant + flow_shift=6 4c993de verified Daankular committed 15 days ago
Add SageAttn + LightX2V distill LoRA + remove text enc quant + flow_shift=6 32edb4b verified Daankular committed 15 days ago
Increase default steps 6->20 (prompt adherence), max steps 30->50, GPU cap 300->600s da75905 verified Daankular committed 15 days ago
Revert guidance_scale to 1.0 and flow_shift to 3.0 (AOTI built for single-pass mode) 3715018 verified Daankular committed 15 days ago
Fix guidance scale defaults (1->5), set_adapters per-transformer, flow_shift 3->6 0aad7cd verified Daankular committed 15 days ago
Add extra LoRA repos from UnifiedHorusRA and obsxrver c45a518 verified Daankular committed 15 days ago
Fix AOTI+LoRA: disable compiled blocks for LoRA path, adaptive duration 8ed22d5 verified Daankular committed 15 days ago
Patch TorchaoLoraLinear for dynamic LoRA on fp8 model 1c0301e verified Daankular committed 15 days ago
Fix gr.Video buttons kwarg - not supported in gradio 5.29 d535953 verified Daankular committed 15 days ago
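The fix removes a kwarg the installed gradio version doesn't accept; a general feature-detect sketch (the `Video` class here is a toy stand-in for gr.Video, and `build_component` is a hypothetical helper):

```python
import inspect

class Video:                      # toy stand-in for gr.Video in gradio 5.29,
    def __init__(self, label=None, height=None):   # which lacks `buttons`
        self.label = label
        self.height = height

def build_component(cls, **kwargs):
    # keep only the kwargs the constructor actually declares
    allowed = set(inspect.signature(cls.__init__).parameters) - {"self"}
    return cls(**{k: v for k, v in kwargs.items() if k in allowed})

v = build_component(Video, label="output", buttons=["download"])  # buttons dropped
```

Simply deleting the unsupported kwarg at the call site, as the commit did, is the simpler fix when only one gradio version must be supported.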
Update requirements.txt - r3gm base with LoRA gallery a8dd8eb verified Daankular committed 15 days ago