# Gemma-4-26B-A4B-it — 22GB (MLX)

A mixed-precision quantized version of google/gemma-4-26B-A4B-it, optimised by baa.ai using a proprietary Black Sheep AI method.
## Metrics
| Metric | Value |
|--------|-------|
| Size | 19.8 GB |
| MMLU | 80.41% |
| MMLU vs BF16 | 99.5% |
| MMLU vs Uniform 4-bit | +2.53pp |
## All RAM Variants

Every RAM variant beats the uniform 4-bit baseline on MMLU:
| Model | Size | MMLU | MMLU % of BF16 |
|-------|------|------|----------------|
| BF16 | 51.6 GB | 80.80% | 100% |
| Uniform 4-bit | ~14 GB | 77.88% | 96.4% |
| RAM 14GB | ~14 GB | 78.27% | 96.9% |
| RAM 18GB | ~18 GB | 80.02% | 99.0% |
| RAM 20GB ★ | ~20 GB | 80.70% | 99.9% |
| RAM 22GB | ~22 GB | 80.41% | 99.5% |
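The "MMLU % of BF16" column is simply each variant's MMLU score divided by the BF16 score, and the "+2.53pp" figure above is the gap to the uniform 4-bit baseline in percentage points. A quick sketch, using the scores from the table:

```python
# MMLU scores (percent) taken from the table above
BF16 = 80.80
variants = {
    "Uniform 4-bit": 77.88,
    "RAM 14GB": 78.27,
    "RAM 18GB": 80.02,
    "RAM 20GB": 80.70,
    "RAM 22GB": 80.41,
}

for name, mmlu in variants.items():
    retained = 100 * mmlu / BF16                  # "% of BF16" column
    delta_pp = mmlu - variants["Uniform 4-bit"]   # gap vs uniform 4-bit, in points
    print(f"{name}: {retained:.1f}% of BF16, {delta_pp:+.2f}pp vs uniform 4-bit")
```

Running this reproduces the table's retention figures (e.g. 80.41 / 80.80 ≈ 99.5% for the 22GB variant).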
## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("baa-ai/Gemma-4-26B-A4B-it-RAM-22GB-MLX")
response = generate(model, tokenizer, prompt="Hello!", max_tokens=256)
print(response)
```
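For an instruction-tuned model, the prompt should normally be wrapped in the chat template (in practice, via `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)` before calling `generate`). As a minimal sketch of what that template produces, assuming this repo uses Gemma's documented `<start_of_turn>`/`<end_of_turn>` markers:

```python
def gemma_chat_prompt(user_msg: str) -> str:
    """Format a single-turn prompt using Gemma's documented chat markers.

    This hand-rolled helper is for illustration only; prefer
    tokenizer.apply_chat_template, which reads the template shipped
    with the model.
    """
    return (
        f"<start_of_turn>user\n{user_msg}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma_chat_prompt("Hello!")
print(prompt)
```

The resulting string can be passed as `prompt=` to `generate` in place of the raw text.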
Quantized by baa.ai.

## Black Sheep AI Products
**Shepherd** — Private AI deployment platform that shrinks frontier models by 50-60% through RAM compression, enabling enterprises to run sophisticated AI on single GPU instances or Apple Silicon hardware. Deploy in your VPC with zero data leaving your infrastructure. Includes CI/CD pipeline integration, fleet deployment across Apple Silicon clusters, air-gapped and sovereign deployment support, and multi-format export (MLX, GGUF). Annual cloud costs from ~$2,700 — or run on a Mac Studio for electricity only.

**Watchman** — Capability audit and governance platform for compressed AI models. Know exactly what your quantized model can do before it goes live: Watchman predicts which capabilities survive compression in minutes, replacing weeks of benchmarking. Includes compliance-ready reporting for regulated industries, quality-valley warnings for counterproductive memory allocations, instant regression diagnosis tracing issues to specific tensors, and 22 adversarial security probes scanning for injection, leakage, hallucination, and code vulnerabilities.
Learn more at baa.ai — Sovereign AI.
## Model tree

Base model: google/gemma-4-26B-A4B-it