
It's not just LLM-sourced, though; people have actually tried this after the release with the 26A4B model and it wasn't very good. Maybe the dense ~31B model is worthwhile, though.



Many Gemma implementations are or were broken on launch day. The first attempts to fix llama.cpp’s tokenizer were merged hours ago.

Everyone hated Qwen3.5 at launch too because so many implementations were broken and couldn’t do tool calling.

You need to ignore social media “I tried this and it sucks” echo chambers for new model releases.


I agree with your criticism. I should have simply said that I had good results with Gemma 4 tool use, but that agentic coding with Gemma 4 didn't yet work well for me.


