Instructions to use coder119/Vectorartz_Diffusion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use coder119/Vectorartz_Diffusion with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "coder119/Vectorartz_Diffusion",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Do you mind sharing your training process for this?
#5
by samching
Would love to understand how you achieved this - was it Dreambooth or something else?
If Dreambooth, could you share your process? Thanks!
I used TheLastBen's fast-DreamBooth repository.
High-quality samples are key in this type of training.
The settings for training this model were 40 images and 3000 steps, with 25% text-encoder training.
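As a rough sketch of how those numbers relate (assuming, as in TheLastBen's fast-DreamBooth notebook, that the 25% figure means the text encoder is trained for that fraction of the total steps — an interpretation, not something stated above):

```python
# Illustration of the reported DreamBooth settings; the 25% semantics
# are an assumption based on TheLastBen's fast-DreamBooth notebook.
num_images = 40
total_steps = 3000
text_encoder_fraction = 0.25  # "25% text encoder training"

# Steps during which the text encoder is also updated
# (assumed to be a fraction of the total training steps).
text_encoder_steps = int(total_steps * text_encoder_fraction)
print(text_encoder_steps)  # 750

# Roughly how many times each training image is seen overall.
repeats_per_image = total_steps // num_images
print(repeats_per_image)  # 75
```

With only 40 images over 3000 steps, each image is revisited many times, which is why the reply stresses that high-quality samples matter so much.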