---
base_model:
- google/gemma-4-E2B-it
---

# gemma-4-E2B-it-GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/gemma-4-E2B-it-GGUF
```

Then, access http://localhost:8080
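Besides the built-in web UI, `llama-server` also exposes an OpenAI-compatible HTTP API. A minimal sketch of a chat request against a server started as above (assuming the default host and port; the prompt text is illustrative):

```sh
# Send a chat completion request to the running llama-server instance.
# Requires the server from the command above to be running first.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a haiku about llamas."}
    ]
  }'
```

The response is a JSON object in the OpenAI chat-completions format, so existing OpenAI client libraries can be pointed at `http://localhost:8080/v1` to use this model.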