Transformers
GGUF
English
256k context
Qwen3
Mixture of Experts
MOE
MOE Dense
2 experts
4Bx12
All use cases
bfloat16
Merge
thinking
reasoning
GPT-5.1-High-Reasoning-Distill
Gemini-3-Pro-Preview-High-Reasoning-Distill
Claude-4.5-Opus-High-Reasoning-Distill
Claude-Sonnet-4-Reasoning-Distill
Kimi-K2-Thinking-Distill
Gemini-2.5-Flash-Distill
Gemini-2.5-Flash-Lite-Preview-Distill
gpt-oss-120b-Distill
GLM-Flash-4.6-Distill
Open-R1-Distill
Command-A-Reasoning-Distill
conversational
uploaded from rich1
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen3-48B-A4B-Savant-Commander-GATED-12x-Closed-Open-Source-Distill.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-48B-A4B-Savant-Commander-GATED-12x-Closed-Open-Source-Distill.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:361914845a5d8b2553e3b7f758c39167b4b76b17f4ba922f511ad08c3f388b45
+size 12383293856
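The file added above is a Git LFS pointer (version, oid, size fields), not the ~12 GB GGUF weights themselves. A minimal sketch, assuming the key-space-value line layout from the git-lfs v1 spec, of parsing such a pointer in Python (the `parse_lfs_pointer` helper is illustrative, not part of any library):

```python
def parse_lfs_pointer(text):
    # Each non-empty line is "key value"; split on the first space.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents taken verbatim from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:361914845a5d8b2553e3b7f758c39167b4b76b17f4ba922f511ad08c3f388b45
size 12383293856
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)  # e.g. ("sha256", "3619...")
size_bytes = int(info["size"])            # actual blob size once fetched
```

Running `git lfs pull` (or cloning with LFS installed) replaces this pointer with the real blob, whose SHA-256 must match the `oid` digest.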