Noir Collection
Our text model is woven from miracles, magic and dreams.
Noir-Ultra is the elite 7-billion-parameter model of the Noir series. It represents a breakthrough in training efficiency: where previous 7B iterations required 6 epochs to reach stability, Noir-Ultra achieved superior results in just a single epoch.
This model is a "compact titan," delivering scientific reasoning and mathematical accuracy that rival much larger architectures.
Based on the latest evaluation, Noir-Ultra shows an exceptionally strong profile in technical domains:
| Domain | Benchmark | Result (%) | Status |
|---|---|---|---|
| STEM | SciQ | 91.0 | 🏆 Master |
| Logic | ARC-C | 86.0 | 🔥 Elite |
| Math | GSM8K | 84.0 | ✅ Advanced |
| Medicine | MedQA | 65.0 | 🩺 Competent |
| Physics | MMLU-Physics | 70.0 | 🧪 Specialist |

The full Noir lineup:

| Model | Parameters | Role | Key Strength |
|---|---|---|---|
| Noir-Lightning | 0.5B | The Pocket Assistant | Ultra-fast, runs on anything |
| Noir-Mini | 1.5B | The Balanced Thinker | High speed with solid grammar |
| Noir-Standard | 3B | The Versatile Workhorse | 65% GSM8K, perfect for 8GB VRAM |
| Noir-Ultra | 7B | The Reasoning Master | 91% SciQ & 84% Math |
| Noir-Starlight | 14B | The Galactic Intelligence | Deep logic & Expert-level STEM |
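The VRAM claims above can be sanity-checked with simple arithmetic: weight memory is roughly parameter count × bytes per parameter. A minimal sketch (parameter counts from the table; figures cover weights only, excluding KV cache and activations):

```python
def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight-only memory in GB: params * bytes per parameter."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# Noir-Ultra (7B) at different precisions:
fp16 = weight_memory_gb(7, 16)  # 14.0 GB -> too large for an 8GB card
int4 = weight_memory_gb(7, 4)   # 3.5 GB  -> fits in 8GB with headroom
```

This is why 4-bit loading is recommended for 8GB GPUs, and why the 3B model fits comfortably even at higher precision.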
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "muverqqw/Noir-Ultra"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,  # 4-bit quantization (requires bitsandbytes); recommended for 8GB VRAM
    device_map="auto",
)
```
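Since Noir-Ultra is based on Qwen 2.5, its chat template is presumably ChatML-style. In practice `tokenizer.apply_chat_template` handles this automatically; the sketch below shows the format manually, assuming Noir-Ultra inherits the base Qwen 2.5 template unchanged:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen 2.5 instruct models.
    Assumption: Noir-Ultra keeps the base model's chat template."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "What is 2+2?")
```

For real use, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which reads the template shipped with the model rather than hard-coding it.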
- **Developer:** IceL1ghtning
- **Architecture:** Qwen 2.5 (7B)
- **Release:** 2026
- **License:** Apache 2.0