🌌 Noir-Ultra (7B)

Noir-Ultra is the flagship 7-billion-parameter model of the Noir series. It marks a step change in training efficiency: where previous 7B iterations needed 6 epochs to reach stability, Noir-Ultra reached superior results in a single epoch.

This model is a "compact titan," delivering scientific reasoning and mathematical accuracy that rival much larger architectures.


🚀 The Ultra Advantage

  • 🧬 Unrivaled STEM: With a 91.0% score on SciQ, it is a specialized tool for scientific inquiry.
  • 📐 Mathematical Precision: Scoring 84.0% on GSM8K, it handles complex chains of thought with ease.
  • 🧠 Logical Depth: An ARC-Challenge score of 86.0% places it at the top of its weight class for reasoning tasks.

📊 Evaluation Dashboard (Noir Ultra Report)

Based on the latest evaluation, Noir-Ultra shows an exceptionally strong profile in technical domains:

| Domain   | Benchmark    | Result | Status         |
|----------|--------------|--------|----------------|
| STEM     | SciQ         | 91.0%  | 🏆 Master      |
| Logic    | ARC-C        | 86.0%  | 🔥 Elite       |
| Math     | GSM8K        | 84.0%  | ✅ Advanced    |
| Medicine | MedQA        | 65.0%  | 🩺 Competent   |
| Physics  | MMLU-Physics | 70.0%  | 🧪 Specialist  |

📦 Noir Model Family Matrix

| Model          | Parameters | Role                      | Key Strength                  |
|----------------|------------|---------------------------|-------------------------------|
| Noir-Lightning | 0.5B       | The Pocket Assistant      | Ultra-fast, runs on anything  |
| Noir-Mini      | 1.5B       | The Balanced Thinker      | High speed with solid grammar |
| Noir-Standard  | 3B         | The Versatile Workhorse   | 65% GSM8K, fits in 8GB VRAM   |
| Noir-Ultra     | 7B         | The Reasoning Master      | 91% SciQ & 84% GSM8K          |
| Noir-Starlight | 14B        | The Galactic Intelligence | Deep logic & expert-level STEM |

🛠 Quick Start (Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "muverqqw/Noir-Ultra"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    # 4-bit quantization, recommended for 8GB VRAM
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```
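The 4-bit recommendation can be sanity-checked with a weights-only, back-of-the-envelope memory estimate (parameter counts from the family matrix above; real usage is higher once activations and the KV cache are included):

```python
def weight_gib(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weights-only memory footprint in GiB."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# Noir-Ultra (7B): FP16 weights need ~13 GiB, but 4-bit needs only ~3.3 GiB,
# which is why 4-bit loading is the practical choice on an 8GB card.
print(f"FP16:  {weight_gib(7.0, 2.0):.1f} GiB")
print(f"4-bit: {weight_gib(7.0, 0.5):.1f} GiB")
```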

👤 Credits

  • Developer: IceL1ghtning
  • Architecture: Qwen 2.5 (7B)
  • Release: 2026
  • License: Apache 2.0

Efficiency meets intelligence. Built with passion for the open-source community.