# Celeste-Gemma-4-26B-MoE-Platinum-GGUF (Platinum Series)
Welcome to the Platinum Series release of Gemma-4-26B-MoE. To combat the inherent fragility of 128-expert architectures during compression, every variant in this collection was forged with a custom Importance Matrix (i-matrix). By shielding the critical gating weights, we've ensured that expert routing remains precise and intelligence stays intact, even at lower bit-rates.
High-fidelity GGUF weights for Gemma 4 (26B MoE / A4B).
## 🛠️ Technical Specifications
| Feature | Specification |
|---|---|
| Total Parameters | 25.23 Billion |
| Active Parameters | ~3.8 Billion (per token) |
| Expert Count | 128 Experts (8 active per token) |
| Quantization | i-Matrix Protected (Q3 - Q8 Variants) |
| Context Window | 128,000 Tokens |
| Position Embeddings | p-RoPE (Proportional Rotary) |
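As a quick sanity check on the table above, the expert-routed share of the parameters can be estimated from the expert counts alone. This is only a rough sketch: real MoE models also carry shared attention and embedding parameters that are always active, which is why the spec table's ~3.8B active figure is higher than the expert-only estimate below.

```python
# Rough sketch: estimate the expert-routed active parameters of the 26B MoE.
# Shared (attention/embedding) parameters are ignored here, so this is an
# illustration of the ratio, not the model's exact arithmetic.

TOTAL_PARAMS = 25.23e9   # from the spec table
TOTAL_EXPERTS = 128
ACTIVE_EXPERTS = 8

# If all parameters lived in experts, activating 8 of 128 would touch:
naive_active = TOTAL_PARAMS * ACTIVE_EXPERTS / TOTAL_EXPERTS
print(f"expert-only estimate: {naive_active / 1e9:.2f}B active parameters")
# The gap to the listed ~3.8B active is the always-on shared layers.
```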
## 🚀 Key Features
- Architecture: Gemma 4 Mixture-of-Experts (A4B).
- Expert Precision: Custom Importance Matrix (i-matrix) calibrated on wikitext-2 with 94-99% expert activation coverage to ensure zero routing-gate collapse.
- Context Stability: Native support for Proportional RoPE scaling for ultra-long context window retention.
- Workstation Optimized: Manually forged on a dual-GPU NVIDIA RTX 3090 + A4000 setup to ensure production-grade reliability and 24GB VRAM compatibility.
This repository contains the Platinum Series universal GGUF release of Gemma-4-26B-MoE. MoE models with high expert counts are fragile; these quants use an i-matrix to shield the critical routing pathways. This ensures that even at lower bit-rates, the model maintains the reasoning depth of the 26B engine while operating with the speed of a 4B parameter model.
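The routing pathways the i-matrix shields are, conceptually, a softmax gate that scores all 128 experts and keeps the top 8 per token. A minimal stdlib-only sketch of that top-k gating step (the logits here are random stand-ins, not real gate outputs from this model):

```python
import math
import random

def top_k_routing(gate_logits, k=8):
    """Softmax over expert logits, then keep the k highest-weight experts."""
    m = max(gate_logits)
    exps = [math.exp(x - m) for x in gate_logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Indices of the k largest routing weights
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalise the selected weights so they sum to 1
    sel = sum(probs[i] for i in top)
    return {i: probs[i] / sel for i in top}

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(128)]  # stand-in gate scores
routing = top_k_routing(logits, k=8)
print(len(routing))  # 8 experts selected per token
```

Quantization error in the gate weights perturbs exactly this ranking, which is why the i-matrix keeps those tensors at higher effective precision.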
## 📦 Available Files & Quantization Details
| File | Method | Description |
|---|---|---|
| IQ3_M | i-matrix | Efficiency King. Smallest viable size for high-speed mobile/NPU deployment. |
| Q4_K_M | k-quant | Balanced Standard. Recommended for most general-purpose logic tasks. |
| IQ4_XS | i-matrix | The MoE Gold Standard. (~15.8 GB) Optimized for the RTX 3090 with expert protection. |
| Q5_K_M | k-quant | Platinum Tier. High-fidelity reasoning with minimal perplexity loss. |
| Q6_K | k-quant | Near-lossless expert routing for complex document analysis. |
| Q8_0 | block-quant | Reference Grade. Maximum fidelity to the BF16 master weights. |
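Approximate file sizes for the variants above follow from parameters × bits-per-weight. The bpw figures below are rough llama.cpp conventions (an assumption, not measured from these files); expect the real GGUFs to run larger, since gate and shared tensors are kept at higher precision under i-matrix protection.

```python
# Back-of-the-envelope GGUF size estimate: parameters * bits-per-weight / 8.
# The bpw values are approximate (assumption); real files differ because
# routing gates and embeddings are often stored at higher precision.

TOTAL_PARAMS = 25.23e9

approx_bpw = {
    "IQ3_M": 3.66,
    "IQ4_XS": 4.25,
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q6_K": 6.59,
    "Q8_0": 8.50,
}

for name, bpw in approx_bpw.items():
    gib = TOTAL_PARAMS * bpw / 8 / 1024**3
    print(f"{name:>7}: ~{gib:.1f} GiB (estimate)")
```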
## 🛠️ Usage (llama-cli)
To utilize the 128-expert routing and p-RoPE scaling, use the latest build of llama.cpp:

```bash
./llama-cli -m Gemma-4-26B-MoE-IQ4_XS.gguf -n 512 --flash-attn --ctx-size 8192 -ngl 99
```
## 🐍 Python Inference (llama-cpp-python)
To run these engines with llama-cpp-python:

```python
from llama_cpp import Llama

# Optimized for 24GB VRAM (RTX 3090)
llm = Llama(
    model_path="./Gemma-4-26B-MoE-IQ4_XS.gguf",
    n_gpu_layers=-1,   # Offload 128 experts to VRAM
    n_ctx=16384,       # High-speed context window
    use_mlock=True     # Pin memory for expert routing stability
)

output = llm(
    "<|turn|>user\nAnalyze the structural efficiency of 128-experts in this MoE model.<|turn|>model\n",
    max_tokens=1024,
    stop=["<|turn|>", "<|file_separator|>"]
)
print(output['choices'][0]['text'])
```
## 💻 For C# / .NET Users (LLamaSharp)
Fully compatible with .NET applications via the LLamaSharp library:

```csharp
using LLama.Common;
using LLama;

var parameters = new ModelParams("Gemma-4-26B-MoE-IQ4_XS.gguf")
{
    ContextSize = 16384,
    GpuLayerCount = -1, // Distribute experts across RTX 3090 + A4000
    TensorSplits = new TensorSplitsCollection(new[] { 1.5f, 1.0f }) // Balanced split for dual-GPU Noida Forge setup
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

var chatHistory = new ChatHistory();
chatHistory.AddMessage(AuthorRole.System, "You are a helpful assistant.");
var session = new ChatSession(executor, chatHistory);

await foreach (var text in session.ChatAsync(
    new ChatHistory.Message(AuthorRole.User, "Validate the architectural integrity of this project."),
    new InferenceParams { MaxTokens = 2048 }))
{
    Console.Write(text);
}
```
## 🖥️ Hardware Requirements
- RTX 3090 / 4090: Recommended for full offloading of Q4_K_M through Q6_K variants.
- System RAM: 32GB+ for model loading and initial calibration.
- Storage: ~55GB required for the full GGUF collection.
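The 24GB recommendation above can be sanity-checked with simple arithmetic: the model file plus KV cache and compute buffers must fit in VRAM. The 4 GiB overhead used here is an illustrative assumption (it varies with context size and flash-attention settings), not a measured value:

```python
# Will a quant fit on a 24 GB card? Rough budget check using the IQ4_XS size
# from the files table; the 4 GiB overhead (KV cache at 8-16k context plus
# compute buffers) is an assumption for illustration, not a measurement.

VRAM_GIB = 24.0
OVERHEAD_GIB = 4.0  # assumed KV cache + scratch buffers

quant_sizes_gib = {"IQ4_XS": 15.8}  # from the files table above

for name, size in quant_sizes_gib.items():
    needed = size + OVERHEAD_GIB
    verdict = "fits" if needed <= VRAM_GIB else "needs CPU offload"
    print(f"{name}: ~{needed:.1f} GiB needed -> {verdict}")
```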
## ❤️ Support the Forge
Maintaining the production line for high-fidelity 128-expert models requires significant hardware resources. If these tools power your research, please consider supporting the development:
| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |
Scan to support via UPI (India Only):
Connect with the architect: Abhishek Jaiswal on LinkedIn