# BigLove Klein 2 – All Variants

Quantized versions of the BigLove Klein 2 finetune of FLUX.2-klein-base-9B by Black Forest Labs.
## Available Files
| File | Format | Size | Use Case |
|---|---|---|---|
| bigLove_klein2_Bf16.safetensors | BF16 | ~18 GB | Full precision, best quality |
| bigLove_klein2_bf16_pruned.safetensors | BF16 (pruned) | ~18 GB | Pruned weights, slightly faster |
| bigLove_klein2_fp8_pruned.safetensors | FP8 (pruned) | ~9 GB | Good balance of quality & VRAM |
| bigLove_klein2_nf4.safetensors | NF4 | ~5 GB | Low VRAM, fast inference |
| bigLove_klein2.gguf | GGUF | varies | For GGUF-compatible loaders |
## Usage

### ComfyUI

Place the desired model file in your `ComfyUI/models/diffusion_models/` (or `unet/`) folder and select it in the appropriate loader node.
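On a fresh install the target folder may not exist yet. A minimal sketch (the path assumes a default ComfyUI checkout; the download location and filename are illustrative):

```shell
# Create the loader folder if it is missing (default ComfyUI layout assumed).
mkdir -p ComfyUI/models/diffusion_models
# Then move the downloaded variant into place, e.g.:
# mv ~/Downloads/bigLove_klein2_fp8_pruned.safetensors ComfyUI/models/diffusion_models/
```

After a restart (or a refresh of the node's file list), the file appears in the loader node's dropdown.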
### Diffusers
```python
from diffusers import FluxPipeline
import torch

# Load the pipeline in bfloat16; requires a CUDA GPU with sufficient VRAM.
pipe = FluxPipeline.from_pretrained(
    "Granddyser/biglove-klein2-fp8",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Generate and save an image (low step count and zero guidance per the model's defaults).
image = pipe(
    prompt="your prompt here",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("output.png")
```
## Acknowledgments
Special thanks to SubtleShader for the motivation.
## License

FLUX.2-klein-base-9B is licensed by Black Forest Labs Inc. under the FLUX.2-klein-base-9B Non-Commercial License. Copyright Black Forest Labs Inc.