I'm very curious to see you Pepe'fy Gemma 4 next.
Both 31B and 26BA4B are incredibly strong assistants; the 31B actually gives SIGNIFICANTLY more useful answers for me compared to GLM 5.1, with the exception of coding. This is just wild. They're honestly so strong, and for once they just answer and solve your requests directly, that I doubt they even need any further finetuning at this point to do better, EXCEPT your Pepe tunes come to mind.
That's why I'm curious. Pepe significantly improved the intelligence of both the 8B and 70B Llama, so... just what will happen if you apply this to the already smart af Gemma??? PEPE AGI CONFIRMED???
Looking forward to it! Assistant_Pepe_70B showed very good results, imagine that on top of Gemma 4!
This right here. Full vision and thinking support would be incredible.
A Gemma4 tune will be problematic in the near future: FA2 currently doesn't work, which means a massive VRAM cost for tuning. Training the vision part will be a headache greater by several orders of magnitude; I know this first hand after training X-Ray_Alpha.
However, I am very much interested in tuning Gemma4 eventually. It will just take some time.
Stay tuned :)
Can you do Apertus-70B when you're at it?
> A Gemma4 tune will be problematic in the near future: FA2 currently doesn't work, which means a massive VRAM cost for tuning. Training the vision part will be a headache greater by several orders of magnitude; I know this first hand after training X-Ray_Alpha.
I think it will become easier as support matures. Qwen3.5 wasn't much different at first either.