Instructions for using models123/GLM-Image with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use models123/GLM-Image with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "models123/GLM-Image",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Image processor configuration for the model:

```json
{
  "min_pixels": 262144,
  "max_pixels": 4194304,
  "do_rescale": true,
  "do_normalize": true,
  "do_resize": true,
  "patch_size": 16,
  "temporal_patch_size": 1,
  "merge_size": 1,
  "image_mean": [0.5, 0.5, 0.5],
  "image_std": [0.5, 0.5, 0.5],
  "image_processor_type": "GlmImageImageProcessor",
  "processor_class": "GlmImageProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098
}
```
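The `do_rescale` and `do_normalize` flags in the config above imply a standard two-step pixel transform: multiply uint8 values by `rescale_factor` (1/255) to land in [0, 1], then normalize with `image_mean` and `image_std` of 0.5 to land in roughly [-1, 1]. A minimal sketch of that math, using NumPy rather than the actual `GlmImageImageProcessor` implementation (the `preprocess` helper below is hypothetical):

```python
import numpy as np

# Values copied from the config above
RESCALE_FACTOR = 0.00392156862745098  # 1 / 255
IMAGE_MEAN = np.array([0.5, 0.5, 0.5])
IMAGE_STD = np.array([0.5, 0.5, 0.5])

def preprocess(pixels: np.ndarray) -> np.ndarray:
    """Rescale uint8 pixels to [0, 1] (do_rescale), then
    normalize with mean/std 0.5 (do_normalize) to roughly [-1, 1]."""
    scaled = pixels.astype(np.float32) * RESCALE_FACTOR
    return (scaled - IMAGE_MEAN) / IMAGE_STD

black = np.zeros((2, 2, 3), dtype=np.uint8)        # all-zero pixels
white = np.full((2, 2, 3), 255, dtype=np.uint8)    # all-255 pixels
print(preprocess(black).min())  # -1.0
print(preprocess(white).max())  # 1.0
```

With mean and std both at 0.5 on every channel, the transform is equivalent to `pixels / 127.5 - 1`, the [-1, 1] range that diffusion pipelines conventionally expect.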