🦊 Fox1.4 - Reasoning Specialist

Fox1.4 is Fox1.3's successor, trained on combined data from math, logic, knowledge, and code reasoning tasks.

Performance

Custom Benchmark (10 questions):

  • ✅ All tasks: 100%
  • Penguin exception logic: ✅
  • $1.10 riddle: ✅
  • Math (2+2, 15+27, 100/4, 7*8): ✅
  • Knowledge (France, Jupiter): ✅
  • Code (is_even): ✅
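
The $1.10 riddle is presumably the classic bat-and-ball question: together they cost $1.10 and the bat costs $1.00 more than the ball. The intuitive answer ($0.10) is wrong; the constraint gives $0.05, which is the kind of one-step algebra this benchmark item tests. Checking the arithmetic:

```python
# bat + ball = 1.10 and bat = ball + 1.00
# => (ball + 1.00) + ball = 1.10  =>  2 * ball = 0.10
ball = 0.10 / 2          # 0.05, not the intuitive 0.10
bat = ball + 1.00        # 1.05

assert abs(bat + ball - 1.10) < 1e-9   # total price holds
assert abs(bat - ball - 1.00) < 1e-9   # price difference holds
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```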

Estimated MMLU Score: ~40-50%
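
Because the custom benchmark is only 10 exact-answer questions, re-running it is cheap to script. A minimal harness sketch, where `ask` is a hypothetical stand-in for your generate-and-decode call (stubbed here with canned answers):

```python
def score(ask, cases):
    """Fraction of (question, expected-substring) pairs answered correctly."""
    hits = sum(expected in ask(question) for question, expected in cases)
    return hits / len(cases)

# A few of the benchmark items listed above, with expected answer substrings.
cases = [
    ("What is 2+2?", "4"),
    ("What is 15+27?", "42"),
    ("What is 100/4?", "25"),
    ("What is 7*8?", "56"),
    ("What is the capital of France?", "Paris"),
]

# Stub in place of a real model call; swap in tokenizer/generate for Fox1.4.
answers = {question: expected for question, expected in cases}
ask = lambda question: answers[question]

print(f"{score(ask, cases):.0%}")  # 100% with the stub
```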

Architecture

  • Base Model: Qwen2.5-0.5B (merged with LoRA adapter)
  • Training: Combined data from 4 expert domains
  • Parameters: ~0.5B
  • Format: Full merged model (safetensors, BF16)
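
Merging a LoRA adapter folds the trained low-rank update back into the base weights, so inference needs no separate adapter: for each adapted matrix W, the merged weight is W' = W + (α/r)·B·A. A toy NumPy sketch of that arithmetic (shapes, rank, and scaling are illustrative, not Fox1.4's actual config):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 4      # illustrative dimensions and LoRA rank
alpha = 8.0                    # illustrative LoRA scaling numerator

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # trained down-projection
B = rng.normal(size=(d_out, r))      # trained up-projection

# The merge: fold the scaled low-rank product into the base matrix.
W_merged = W + (alpha / r) * (B @ A)

# Adapter-at-inference and merged weights produce identical outputs.
x = rng.normal(size=(d_in,))
assert np.allclose(W @ x + (alpha / r) * (B @ (A @ x)), W_merged @ x)
```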

Usage

Ollama

ollama pull teolm30/fox1.4
ollama run teolm30/fox1.4

Python

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("teolm30/fox1.4")
tokenizer = AutoTokenizer.from_pretrained("teolm30/fox1.4")

inputs = tokenizer("What is 2+2?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))