# Gemma-4 E4B Gemini 3.1 Pro Reasoning Distill - GGUF
GGUF quantized versions of Ayodele01/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill.
## Model Description
This is Google's Gemma-4 4B (E4B) instruction-tuned model, fine-tuned on Gemini 3.1 Pro reasoning datasets to improve chain-of-thought reasoning capabilities.
## Training Data
- Roman1111111/gemini-3.1-pro-hard-high-reasoning
- Roman1111111/gemini-3-pro-10000x-hard-high-reasoning
## Training Configuration
- Base Model: google/gemma-4b-it (via unsloth/gemma-4-E4B-it)
- Method: LoRA fine-tuning
- LoRA Config: r=8, alpha=8, dropout=0.1
- Learning Rate: 5e-5
- Epochs: 0.5
- Framework: Unsloth + TRL
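To give a sense of what the LoRA configuration above trains, the sketch below works through the parameter arithmetic for a single r=8 adapter. The hidden size of 4096 is an illustrative assumption, not a documented value for this model.

```python
# LoRA replaces a full update of a frozen d_out x d_in weight W with two
# small trainable matrices B (d_out x r) and A (r x d_in), applied as
# W' = W + (alpha / r) * B @ A.  With r=8, alpha=8 the scale is 1.0.

d_in, d_out, r, alpha = 4096, 4096, 8, 8  # hidden size is assumed

full_params = d_in * d_out         # parameters in the frozen weight
lora_params = r * (d_in + d_out)   # trainable parameters added by LoRA
scale = alpha / r                  # scaling applied to the adapter output

print(f"frozen: {full_params:,}, trainable: {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}% of the full matrix)")
```

For a 4096-wide layer this trains well under 1% of the matrix's parameters, which is why LoRA fine-tuning fits on modest hardware.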
## Available Quantizations
| Filename | Quant Type | Size | BPW | Description |
|---|---|---|---|---|
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-F16.gguf | F16 | 15GB | 16.0 | Full precision, best quality |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q8_0.gguf | Q8_0 | 8GB | 8.5 | High quality, good for most use cases |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q5_K_M.gguf | Q5_K_M | 5.8GB | 6.1 | Balanced quality/size, recommended |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q4_K_M.gguf | Q4_K_M | 5.3GB | 5.0 | Good quality, smaller size |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q3_K_M.gguf | Q3_K_M | 4.6GB | 5.1 | Smallest, for constrained devices |
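As a rule of thumb, a GGUF file's size follows from the bits-per-weight (BPW) column: bytes ≈ parameters × BPW / 8. The sketch below assumes roughly 7.5B stored parameters (implied by the 15GB F16 file); real files also carry metadata and keep some tensors at higher precision, so the quoted sizes above deviate a little from this estimate.

```python
# Estimate GGUF file size from parameter count and bits per weight (BPW).
def est_size_gb(n_params: float, bpw: float) -> float:
    return n_params * bpw / 8 / 1e9

n_params = 7.5e9  # assumption, inferred from the 15GB F16 file
for name, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q5_K_M", 6.1), ("Q4_K_M", 5.0)]:
    print(f"{name}: ~{est_size_gb(n_params, bpw):.1f} GB")
```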
## Usage with llama.cpp
```bash
# Download a quantized model
wget https://huggingface.co/Ayodele01/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-GGUF/resolve/main/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q8_0.gguf

# Run with llama.cpp
./llama-cli -m gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q8_0.gguf \
  -p "What is the sum of all prime numbers between 1 and 50?" \
  -n 512
```
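If you want a ground truth to check the model's chain of thought against, the example prompt has a small closed-form answer that a few lines of Python can verify:

```python
# Compute the sum of all primes between 1 and 50, the answer to the
# example prompt above, so the model's reasoning can be spot-checked.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

prime_sum = sum(n for n in range(1, 51) if is_prime(n))
print(prime_sum)  # 328
```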
## Usage with Ollama
Create a `Modelfile`:

```
FROM ./gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q8_0.gguf
TEMPLATE """<bos><start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
"""
PARAMETER stop "<end_of_turn>"
PARAMETER temperature 0.7
```
Then:

```bash
ollama create gemma4-reasoning -f Modelfile
ollama run gemma4-reasoning
```
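The `TEMPLATE` in the Modelfile wraps each prompt in Gemma's turn format before it reaches the model. A minimal sketch of the string it produces:

```python
# Reproduce how the Modelfile TEMPLATE above renders a user prompt
# into Gemma's turn markers; generation stops at "<end_of_turn>".
def render(prompt: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(render("Why is the sky blue?"))
```

The `stop "<end_of_turn>"` parameter is what prevents the model from continuing past its own turn.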
## Recommended Settings
- Temperature: 0.7
- Top-P: 0.9
- Context Length: 8192 (model supports up to 128K)
- Repeat Penalty: 1.1
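In an Ollama Modelfile, these settings map onto the corresponding `PARAMETER` directives (names follow Ollama's Modelfile format):

```
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 8192
PARAMETER repeat_penalty 1.1
```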
## License
This model is released under the Gemma License.