momonga PRO
mmnga
AI & ML interests: None yet
Recent Activity
- published a model 2 days ago: mmnga-o/Holy-fox-Qwen3.5-0.8B-JP-gguf
- updated a model 2 days ago: mmnga-o/Holy-fox-Qwen3.5-0.8B-JP-gguf
- published a model 5 days ago: mmnga-o/CAT-Translate-7b-gguf
remove files
#1 opened 18 days ago by mmnga
eos_token becomes <|im_end|> after GGUF conversion with llama.cpp, and generation never terminates
#1 opened 10 months ago by mmnga
Update README.md
#1 opened 11 months ago by kamone373
It seems completely broken
#1 opened about 1 year ago by aguspiza
Can we hire you for quantization and fine-tuning?
#1 opened about 1 year ago by rafa9
Fix
#1 opened about 1 year ago by STATIKwitak
Would it be possible to have an 8-bit GGUF?
#1 opened over 1 year ago by PurityWolf
Please use split GGUFs instead of splitting files manually
#1 opened over 1 year ago by lmg-anon
The usage example in the model card seems to be in ChatML format.
#1 opened over 1 year ago by yamikumods
Error in LM Studio
#1 opened almost 2 years ago by alfredplpl
An idea
#1 opened almost 2 years ago by Cran-May
Please tell me how you converted this FAST model into a GGUF file.
#1 opened almost 2 years ago by wattai
Differences in output from the original model
#1 opened about 2 years ago by nitky
Librarian Bot: Add moe tag to model
#3 opened about 2 years ago by librarian-bot
Librarian Bot: Add moe tag to model
#1 opened about 2 years ago by librarian-bot
Librarian Bot: Add moe tag to model
#1 opened about 2 years ago by librarian-bot
Maybe a SLERP or some other merge method would preserve the component experts better?
#2 opened about 2 years ago by BlueNipples
Responses somewhat related to the prompt but still gibberish
#1 opened about 2 years ago by JeroenAdam
Migration to Colab A100 due to Triton support being dropped
#2 opened over 2 years ago by alfredplpl
Quantization with float16 instead of bfloat16
#1 opened over 2 years ago by alfredplpl