Is a QAT fine-tune planned?
Gemma3-QAT was a really nice upgrade for users of quantized models. Was it a one-off experiment, or can we expect the same treatment for Gemma4 models?
Hey,
Thanks for the question, and to everyone who reacted to it. We are really glad to hear that the Gemma 3 QAT models have been useful in your workflows. That effort was a great validation of the value of providing officially supported quantized checkpoints.
Our primary focus for this launch has been on stabilizing and rolling out the core base and instruction-tuned models across ecosystems.
That said, community feedback plays a significant role in shaping our roadmap, and threads like this are a strong signal of interest.
For any updates, please keep an eye on our official developer channels, where we will share announcements as they become available. In the meantime, please keep the feedback coming; it's genuinely helpful.