
obsxrver posted an update 3 days ago
Announcement: FANVUE

Hey everyone! Patreon took me down, so I'm now on Fanvue. A $10 subscription gets you access to exclusive LoRAs like I2Pee5XL, generation access on the Hugging Face space, and more to come!
https://fanvue.com/obsxrver
obsxrver posted an update 5 months ago
https://github.com/obsxrver/wan22-lora-training
If you’ve been wanting to train your own Wan 2.2 video LoRAs but are put off by the hardware requirements, the parameter-tweaking insanity, or the installation nightmare, I built a solution that handles it all for you.

This is currently the easiest, fastest, and cheapest way to get a high-quality training run done.

Why this method?

* Zero Setup: No installing Python, CUDA, or hunting for dependencies. You launch a pre-built Vast.ai template, and it's ready in minutes.
* Full WebUI: Drag-and-drop your videos/images, edit captions, and click "Start." No terminal commands required.
* Extremely Cheap: You can rent a dual RTX 5090 node, train a full LoRA in 2-3 hours, and auto-shutdown. Total cost is usually $3 or less.
* Auto-Save: It automatically uploads your finished LoRA to your Cloud Storage (Google Drive/S3/Dropbox) and kills the instance so you don't pay for a second longer than necessary.
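
The auto-save behavior boils down to one ordering rule: only tear the instance down after the artifact is safely off-box. A minimal Python sketch of that logic, where `save_and_teardown` and its injected `upload`/`shutdown` callables are hypothetical stand-ins for the template's actual cloud-sync and Vast.ai teardown calls:

```python
def save_and_teardown(lora_path, upload, shutdown):
    """Upload the finished LoRA, then kill the instance.

    `upload` and `shutdown` are injected callables (e.g. an rclone copy
    to Drive/S3/Dropbox, then a Vast.ai instance-destroy call). The
    billing-sensitive step runs only after the upload succeeds, so a
    failed sync never costs you the artifact.
    """
    if not upload(lora_path):
        # Keep the instance alive so the LoRA can still be recovered.
        raise RuntimeError(f"upload failed for {lora_path}; not shutting down")
    shutdown()
    return True
```

The point of injecting the two callables is that the shutdown step is irreversible, so the guard around it is the part worth getting right.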

How it works:

1. Click the Vast.AI template link (in the repo).
2. Open the WebUI in your browser.
3. Upload your dataset and press Train.
4. Come back in an hour to find your LoRA in your Google Drive.

It supports both Text-to-Video and Image-to-Video, and optimizes for dual-GPU setups (training High/Low noise simultaneously) to cut training time in half.
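
The dual-GPU speedup comes from the fact that Wan 2.2's high-noise and low-noise experts are independent models, so their trainings can run side by side, one per GPU. A hedged sketch of that launch pattern (`launch_dual` is a hypothetical helper; the real template's launcher and trainer commands will differ):

```python
import os
import subprocess

def launch_dual(cmd_high, cmd_low):
    """Run the high-noise and low-noise expert trainings in parallel,
    pinning each process to its own GPU via CUDA_VISIBLE_DEVICES.

    Assumes two GPUs, visible as devices 0 and 1. Returns both exit codes.
    """
    procs = []
    for gpu, cmd in ((0, cmd_high), (1, cmd_low)):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(cmd, env=env))
    # Wait for both runs; wall-clock time is max(high, low) rather than
    # their sum, which is where the roughly 2x saving comes from.
    return [p.wait() for p in procs]
```

Since each expert is a full training run of similar length, running them concurrently cuts wall-clock time roughly in half compared to training them back to back on one GPU.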
Repo + Template Link:
https://github.com/obsxrver/wan22-lora-training
Let me know if you have questions!