---
base_model:
- google/gemma-4-E2B-it
---

# gemma-4-E2B-it-GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/gemma-4-E2B-it-GGUF
```

Then, access http://localhost:8080
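Beyond the web UI, `llama-server` exposes an OpenAI-compatible chat endpoint at `/v1/chat/completions`. The sketch below builds a request for it using only the Python standard library; the prompt text is a placeholder, and the actual network call is left commented out since it assumes the server is already running locally on port 8080.

```python
import json
import urllib.request

# Build a chat request for llama-server's OpenAI-compatible endpoint.
# The message content is an illustrative placeholder.
payload = {
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once llama-server is running locally:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```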