For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11
Feel free to request other models for compression as well (for the diffusers library, ComfyUI, or any other framework), although models with architectures that are unfamiliar to me may be more difficult.
## How to Use

### diffusers
```python
import torch
from diffusers import ErnieImagePipeline, ErnieImageTransformer2DModel
from dfloat11 import DFloat11Model
# from transformers.modeling_utils import no_init_weights  # for transformers version < 5.0.0
from transformers.initialization import no_init_weights  # for transformers version >= 5.0.0

with no_init_weights():
    transformer = ErnieImageTransformer2DModel.from_config(
        ErnieImageTransformer2DModel.load_config(
            "baidu/ERNIE-Image", subfolder="transformer"
        ),
        torch_dtype=torch.bfloat16,
    ).to(torch.bfloat16)

DFloat11Model.from_pretrained(
    "mingyi456/ERNIE-Image-DF11",
    device="cpu",
    bfloat16_model=transformer,
)

pipe = ErnieImagePipeline.from_pretrained(
    "baidu/ERNIE-Image",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

prompt = "This is a photograph depicting an urban street scene. Shot at eye level, it shows a covered pedestrian or commercial street. Slightly below the center of the frame, a cyclist rides away from the camera toward the background, appearing as a dark silhouette against backlighting with indistinct details. The ground is paved with regular square tiles, bisected by a prominent tactile paving strip running through the scene, whose raised textures are clearly visible under the light. Light streams in diagonally from the right side of the frame, creating a strong backlight effect with a distinct Tyndall effect—visible light beams illuminating dust or vapor in the air and casting long shadows across the street. Several pedestrians appear on the left side and in the distance, some with their backs to the camera and others walking sideways, all rendered as silhouettes or semi-silhouettes. The overall color palette is warm, dominated by golden yellows and dark browns, evoking the atmosphere of dusk or early morning."

# `torch.inference_mode()` is important for `ErnieImagePipeline`, as by default it invokes
# `_gradient_checkpointing_func()`, which makes the DF11 result differ from BF16
with torch.inference_mode():
    image = pipe(
        prompt=prompt,
        height=1264,
        width=848,
        num_inference_steps=50,
        guidance_scale=4.0,
        use_pe=True,  # use prompt enhancer
    ).images[0]

image.save('image ernie-image.png')
```
### ComfyUI
Refer to this model instead.
## Compression details

This is the `pattern_dict` used for compression:
```python
pattern_dict = {
    r"time_embedding": (
        "linear_1",
        "linear_2",
    ),
    r"adaLN_modulation.1": [],
    r"layers\.\d+": (
        "self_attention.to_q",
        "self_attention.to_k",
        "self_attention.to_v",
        "self_attention.to_out.0",
        "mlp.gate_proj",
        "mlp.up_proj",
        "mlp.linear_fc2",
    ),
    r"final_norm.linear": [],
}