In r/LocalLLaMA, builders compare Gemma 4 against Qwen for agentic coding and tool calling, with Gemma praised for speed and potential but criticized for inference bugs and looping, pushing some to stick with Qwen until settings and runtimes mature.
Comparing Qwen3.5 vs Gemma4 for Local Agentic Coding
I've been told Qwen 3 Coder Next was the king, and while it's good, the 4-bit variant always put my system near the edge.
I'd read that Qwen3.5-27b was still better at coding than Gemma-4, so this is great news!
Can't wait for all the issues to be fixed and some good agentic coding settings to be released, because I think Gemma 4 31B will be really good when it's properly set up. Until then I'll stick with Qwen 3 Coder Next.
I'm still seeing inference bugs (random typos, not closing the think tag, getting stuck generating 15K tokens in an agentic task) in the latest LM Studio beta with the latest (2.11.0) runtime (llama.cpp commit 277ff5f).
For the last few days I've been trying different models and quants on my RTX 3090 in LM Studio, but every single one glitches on tool calling: an infinite loop that doesn't stop.
I had great success with tool calling in the Qwen3.5 MoE model.
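The infinite tool-call loops described above can be mitigated on the client side regardless of which model or runtime is at fault. A minimal sketch (function and parameter names are hypothetical, not from any of these tools) of a loop guard that aborts an agentic session when the model keeps requesting the same tool call:

```python
def should_abort(call_history, new_call, max_repeats=3, window=6):
    """Loop guard for an agentic tool-calling session.

    call_history: list of (tool_name, args_json) tuples already issued.
    new_call:     the (tool_name, args_json) the model just requested.
    Returns True when the identical call would appear max_repeats times
    within the last `window` calls, i.e. the model is likely looping.
    """
    recent = call_history[-window:]
    repeats = sum(1 for call in recent if call == new_call)
    return repeats + 1 >= max_repeats


# Example: the model asks to read the same file for the third time in a row.
history = [("read_file", '{"path": "a.py"}'),
           ("read_file", '{"path": "a.py"}')]
print(should_abort(history, ("read_file", '{"path": "a.py"}')))  # True
print(should_abort([], ("read_file", '{"path": "a.py"}')))       # False
```

A harness using a guard like this can break out of the loop and either retry with different sampling settings or fall back to another model, rather than burning 15K tokens as described above.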