> I asked Gemini Pro about this earlier and Gemini Pro recommended qwen 3.5 models specifically for coding, and backed that up with interesting material on training.
The Gemma models were literally released yesterday. You can’t ask LLMs for advice on these topics and get accurate information.
Please don’t repeat LLM-sourced answers as canonical information
It's not just LLM sourced though, folks have literally tried this after the release with the 26A4B model and it wasn't very good. Maybe the dense ~31B model is worthwhile though.
I agree with your criticism. I should have simply said that I had good results with Gemma 4 tool use, and that agentic coding with Gemma 4 didn't yet work well for me.
I spent two hours doing my own research before asking for Gemini's analysis, which reinforced my own opinion that the Gemma models historically have not been trained or targeted for agentic coding use.
Have you tried using the new Gemma 4 models with agentic coding tools? If you do, you might end up agreeing with me.
I wasn't very clear, sorry. By my 'own research' I meant spending 90 minutes experimenting with Gemma 4 models for tool use (good results!) and half an hour using them with pi and OpenCode (I didn't get good results, yet).