mradermacher/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-i1-GGUF (Image-Text-to-Text, 35B)
We made a guide on how to run open LLMs in Claude Code, Codex and OpenClaw.

- Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24GB RAM.
- Run with self-healing tool calls, code execution and web search via the Unsloth API endpoint and llama.cpp.

Guide: https://unsloth.ai/docs/basics/api
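The local llama.cpp half of the setup above can be sketched with `llama-server`, which exposes an OpenAI-compatible HTTP endpoint that coding agents can point at. This is a minimal illustration, not the guide's exact invocation: the model filename, quantization suffix, and flag values are assumptions.

```shell
# Hypothetical sketch: serve a local GGUF with llama.cpp's llama-server.
# The file name/quant (Q4_K_M) and flag values are illustrative assumptions.
llama-server \
  -m Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-i1.Q4_K_M.gguf \
  -c 32768 \
  --port 8080 \
  --jinja
# -c sets the context size; --jinja applies the model's own chat template,
# which matters for tool-calling. The server then answers on
# http://localhost:8080/v1/chat/completions (OpenAI-compatible).
```

An agent frontend would then be configured to use `http://localhost:8080/v1` as its API base URL; consult the linked guide for the exact per-tool settings.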