Can you train a Gemma 4 26B A4B model? I like your models, David - I like how you expand on existing models, experiment, and make improvements. (I was a huge fan of your Dark Champion Llama series, where you had the creative idea of training multiple different Llamas and combining them into an MoE model - that one was a favorite of mine for a long time.)

This Gemma 4 release is really awesome. The 26B A4B model runs so smoothly on my setup, and I was wondering if you could work your magic on this MoE model and see if you could improve it. I know you're focused on the big one, the 31B, but this MoE model is so awesome it's hard to ignore. It's fast, creative, and accurate, and the quality of its answers is nearly on par with 70B to 100B+ parameter models - it's that good already.

Would you consider doing some training on this specific model, or combining some of the really good uncensored fine-tunes? (The Bartowski one is pretty good; it's what I'm currently using.) Is there a way you could frankenmerge some of the best-performing versions of this 26B-a4b-it?

I also see a few ongoing attempts to train this Gemma 4 on Claude Opus 4.6 datasets - could you do that? The attempts I tried were broken, but your releases are solid at least 80% of the time (I know it's all a work in progress and experimentation). Think you could do something with that model?

Thanks, David
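P.S. In case it helps show what I mean by the frankenmerge idea, here's a rough sketch of the kind of mergekit recipe I was imagining. Every repo name below is a placeholder I made up (I don't know which fine-tunes would actually merge cleanly), and I'm assuming mergekit even supports this MoE architecture, which it might not yet:

```yaml
# Hypothetical mergekit config - all repo names are placeholders,
# and dare_ties support for this MoE architecture is an assumption.
merge_method: dare_ties
base_model: google/gemma-4-26b-a4b-it          # placeholder base model
models:
  - model: someuser/gemma-4-26b-a4b-uncensored # placeholder fine-tune
    parameters:
      density: 0.5   # fraction of delta weights kept per tensor
      weight: 0.6    # this model's contribution to the merge
  - model: otheruser/gemma-4-26b-a4b-creative  # placeholder fine-tune
    parameters:
      density: 0.5
      weight: 0.4
dtype: bfloat16
```

Then something like `mergekit-yaml config.yml ./merged-model` would produce the merged checkpoint, if the architecture is supported.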