Upload README.md with huggingface_hub

README.md (changed)

@@ -8,20 +8,43 @@ base_model: Qwen/Qwen3-Coder-Next
# Qwen3-Coder-Next-GGUF

-This
-
-##
-
-
-- Updated: 2026-02-08 22:57:54
-- Progress: 0/12
-- Completed quants: None yet
-- Remaining quants: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0

## Ollama Support
Full Ollama support is provided by merging any sharded GGUF output into a single file after quantization.
---
-*This README is temporary and will be replaced when conversion completes.*
*Converted automatically by [GGUF Forge](https://gguforge.com) v5.8*
# Qwen3-Coder-Next-GGUF

+This model was converted to GGUF format from [`Qwen/Qwen3-Coder-Next`](https://huggingface.co/Qwen/Qwen3-Coder-Next) using GGUF Forge.
+
+## Quants
+
+The following quants are available:
+
+Q3_K_S, Q2_K, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0

## Ollama Support
Full Ollama support is provided by merging any sharded GGUF output into a single file after quantization.

+## Conversion Stats
+
+| Metric | Value |
+|--------|-------|
+| Job ID | `110acb9f-02d0-4c49-9f12-73ac6e47e8f1` |
+| GGUF Forge Version | v5.8 |
+| Total Time | 4.4h |
+| Avg Time per Quant | 27.2min |
+
+### Step Breakdown
+
+- Download: 21.2min
+- FP16 Conversion: 20.6min
+- Quantization: 3.7h
+
+## Convert Your Own Models
+
+**Want to convert more models to GGUF?**
+
+**[gguforge.com](https://gguforge.com)** – Free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!
+
+## Links
+
+- **Free Hosted Service**: [gguforge.com](https://gguforge.com)
+- Self-host GGUF Forge: [GitHub](https://github.com/Akicuo/automaticConversion)
+- llama.cpp (quantization engine): [GitHub](https://github.com/ggerganov/llama.cpp)
+- Community & Support: [Discord](https://discord.gg/4vafUgVX3a)

---
*Converted automatically by [GGUF Forge](https://gguforge.com) v5.8*
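The Ollama note above says sharded GGUF output is merged into a single file after quantization. A minimal sketch of doing that merge by hand with llama.cpp's `llama-gguf-split` tool (the shard filename below is hypothetical, and the tool must be built or installed from llama.cpp):

```shell
# First shard of a sharded quant (hypothetical filename).
first_shard="Qwen3-Coder-Next-Q4_K_M-00001-of-00003.gguf"

# Derive the single-file target name by stripping the shard suffix.
merged="${first_shard%-[0-9]*-of-[0-9]*.gguf}.gguf"
echo "$merged"

# llama.cpp ships a gguf-split tool; --merge joins shards into one file.
if command -v llama-gguf-split >/dev/null 2>&1; then
  llama-gguf-split --merge "$first_shard" "$merged"
else
  echo "llama-gguf-split not on PATH; build it from llama.cpp first"
fi
```

Ollama can then point at the merged file directly, e.g. from a Modelfile `FROM` line.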