MindOfJay
AI & ML interests
programming, developer tools, local models, edge computing, multiple small models in a trenchcoat
Organizations
None yet
Tools
- sphiratrioth666/SillyTavern-Presets-Sphiratrioth • Updated • 261
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B • Text Generation • 15B • Updated • 779k • 613
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B • Text Generation • Updated • 775k • 848
- LLM Model VRAM Calculator 📈 (Space, Running) • 501 • Calculate VRAM needed to run LLMs on your GPU
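The VRAM calculator above estimates how much GPU memory a model needs from its parameter count and precision. A minimal sketch of that arithmetic (the 1.2 overhead multiplier for activations and KV cache is an assumption for illustration, not the calculator's actual formula):

```python
def estimate_vram_gb(n_params_b: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for loading model weights.

    n_params_b:      parameter count in billions (e.g. 14 for a 14B model).
    bytes_per_param: 2 for fp16/bf16, 1 for 8-bit, 0.5 for 4-bit quantization.
    overhead:        multiplier covering activations and KV cache (assumed).
    """
    return n_params_b * bytes_per_param * overhead

# A 14B model quantized to 4 bits per weight:
print(f"{estimate_vram_gb(14, bytes_per_param=0.5):.1f} GB")
```

This is why the 14B distills in the list above fit on a 12 GB consumer card when quantized, while the 32B coder model generally needs 24 GB or more.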
todo
- Qwen/Qwen2.5-Coder-32B-Instruct • Text Generation • 33B • Updated • 752k • 2k
- mistralai/Mistral-Small-24B-Instruct-2501 • Updated • 334k • 954
- AlSamCur123/Mistral-Small3-24B-Instruct • 24B • Updated • 59 • 1
- Qwen/Qwen2.5-Coder-14B-Instruct • Text Generation • 15B • Updated • 502k • 144
Models: 0 (none public yet)
Datasets: 0 (none public yet)