This model has serious quality issues. It is not broken and can still chat, but it significantly underperforms the original model on complicated tasks.
This is LFM2.5-1.2B-Thinking quantized to NVFP4 with llm-compressor. The model is compatible with vLLM (tested with v0.14.0) on an L4 GPU (Google Colab). A minimal inference sketch is shown below.
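The following is a minimal inference sketch with vLLM; the sampling parameters and prompt are illustrative choices, not settings from this card.

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint from the Hub (this card reports testing with vLLM v0.14.0).
llm = LLM(model="kaitchup/LFM2.5-1.2B-Thinking-NVFP4")

# Illustrative sampling parameters; tune them for your task.
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.chat(
    [{"role": "user", "content": "Explain NVFP4 quantization in one paragraph."}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```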
- Developed by: The Kaitchup
- License: lfm1.0
Model tree for kaitchup/LFM2.5-1.2B-Thinking-NVFP4
- Base model: LiquidAI/LFM2.5-1.2B-Base
- Finetuned: LiquidAI/LFM2.5-1.2B-Thinking
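For reference, here is a sketch of how a comparable NVFP4 quantization could be produced with llm-compressor. The actual recipe, calibration dataset, and sample counts used for this checkpoint are not published, so everything below is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "LiquidAI/LFM2.5-1.2B-Thinking"  # source model from the tree above
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Assumed minimal NVFP4 recipe: quantize all Linear layers except the LM head.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# NVFP4 activation scales require calibration; the dataset and sample count
# here are placeholders, not the settings used for this checkpoint.
oneshot(
    model=model,
    recipe=recipe,
    dataset="open_platypus",
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained("LFM2.5-1.2B-Thinking-NVFP4", save_compressed=True)
tokenizer.save_pretrained("LFM2.5-1.2B-Thinking-NVFP4")
```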