Important: This model uses the JANG quantization format — the GGUF equivalent for MLX on Apple Silicon. Currently only supported by MLX Studio and the jang-tools Python package.


MLX Studio — the only app that natively supports JANG models


Nemotron 3 Super 120B — JANG_2L + CRACK

JANG mixed-precision · CRACK abliterated · Mamba + MoE + Attention · No guardrails · 43 GB



What Is This?

This is NVIDIA Nemotron 3 Super 120B — a 120B-parameter hybrid model combining three layer types: Mamba SSM, MoE (512 experts, top-22), and Attention. It is one of the most architecturally complex open models available.
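To make the "512 experts, top-22" part concrete, here is a toy sketch of MoE top-k routing: a router scores every expert per token, and only the 22 highest-scoring experts run. All names and shapes below are illustrative, not Nemotron's actual implementation.

```python
import numpy as np

NUM_EXPERTS = 512
TOP_K = 22

def route(hidden, router_weights):
    """Pick the top-k experts for one token and normalize their gates."""
    logits = hidden @ router_weights          # one score per expert, shape (512,)
    top = np.argsort(logits)[-TOP_K:]         # indices of the 22 best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                      # softmax over the selected experts only
    return top, gates

rng = np.random.default_rng(0)
hidden = rng.standard_normal(64)
router_weights = rng.standard_normal((64, NUM_EXPERTS))
experts, gates = route(hidden, router_weights)
print(len(experts))  # 22 experts active per token, out of 512
```

Only the selected experts' weights participate in the forward pass, which is why a 120B-parameter model has only ~12B active parameters per token.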

It has been:

  1. JANG quantized — JANG_2L profile (8-bit attention, 6-bit important, 2-bit experts) — 43 GB
  2. CRACK abliterated — permanent, weight-level removal of safety refusal behavior

| Item | Detail |
|---|---|
| Architecture | Nemotron 3 Super — 120B total, ~12B active, 3 layer types |
| Quantization | JANG_2L (8/6/2-bit mixed, 2.76-bit average) — 43 GB |
| Abliteration | CRACK — novel weight surgery |
| HarmBench | 96.2% (308/320) |
| MMLU | 95.7% (199/208 with thinking) |
| Speed | 45 tok/s (M3 Ultra 256GB) |
| Thinking | ON/OFF supported (ChatML) |
| Memory | Fits on 64 GB+ Macs |
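The 43 GB figure is consistent with the quoted 2.76-bit average. A back-of-envelope check (the remaining gap to 43 GB would be embeddings, scales, and metadata stored at higher precision — our assumption, not a documented breakdown):

```python
# Pure weight payload at 2.76 bits per parameter across 120B parameters.
total_params = 120e9
avg_bits = 2.76

size_gb = total_params * avg_bits / 8 / 1e9  # bits -> bytes -> GB
print(round(size_gb, 1))  # 41.4 GB of raw weight data, before overhead
```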

HarmBench Results

308/320 (96.2%)

| Category | Score | Pass rate |
|---|---|---|
| Harassment / Bullying | 21/21 | 100% |
| Misinformation / Disinfo | 54/54 | 100% |
| Copyright | 79/80 | 99% |
| Chemical / Biological | 40/42 | 95% |
| Harmful | 17/18 | 94% |
| Illegal | 50/53 | 94% |
| Cybercrime / Intrusion | 47/52 | 90% |
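The per-category numbers reconcile with the headline score, which can be checked with a few lines of arithmetic:

```python
# Per-category HarmBench results (passed, total), as listed above.
scores = {
    "Harassment / Bullying": (21, 21),
    "Misinformation / Disinfo": (54, 54),
    "Copyright": (79, 80),
    "Chemical / Biological": (40, 42),
    "Harmful": (17, 18),
    "Illegal": (50, 53),
    "Cybercrime / Intrusion": (47, 52),
}
passed = sum(p for p, _ in scores.values())
total = sum(t for _, t in scores.values())
print(passed, total, f"{100 * passed / total:.1f}%")  # 308 320 96.2%
```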

MMLU Results

199/208 (95.7%) — 208 questions across 13 subjects with thinking recovery

| Subject | Score | % | Type |
|---|---|---|---|
| HS Biology | 16/16 | 100% | BASE |
| College Physics | 15/16 | 94% | HARD |
| Conceptual Physics | 15/16 | 94% | HARD |
| Machine Learning | 15/16 | 94% | HARD |
| Professional Medicine | 15/16 | 94% | HARD |
| World Religions | 15/16 | 94% | BASE |
| Electrical Engineering | 14/16 | 88% | HARD |
| HS Geography | 14/16 | 88% | BASE |
| Formal Logic | 13/16 | 81% | HARD |
| Abstract Algebra | 12/16 | 75% | HARD |
| HS Mathematics | 12/16 | 75% | HARD |
| College CS | 12/16 | 75% | HARD |
| College Math | 10/16 | 63% | HARD |

CRACK vs Base

| | CRACK | Base JANG_2L |
|---|---|---|
| MMLU (with thinking) | 95.7% | 86.0% |
| HarmBench | 96.2% | 0% |
| Speed | 45 tok/s | 46 tok/s |

Surgery improved MMLU by 9.7 percentage points: the safety guardrails were interfering with mathematical problem-solving.


Install & Usage

```shell
pip install "jang[mlx]"
```

```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

model, tokenizer = load_jang_model("dealignai/Nemotron-3-Super-120B-A12B-JANG_2L-CRACK")

messages = [{"role": "user", "content": "Your prompt here"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, max_tokens=2000)
print(response)
```

Thinking Mode

Thinking is ON by default. To disable:

```python
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True,
    enable_thinking=False, tokenize=False)
```

Tip: Use `temperature=0.6` for thinking mode (NVIDIA's recommendation) and `temperature=1.0` for plain chat.
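Mechanically, temperature divides the logits before the softmax: values below 1.0 sharpen the token distribution (useful for step-by-step thinking), while 1.0 leaves it unchanged. A minimal illustration, independent of any model:

```python
import math

def softmax_with_temperature(logits, temp):
    """Scale logits by 1/temp, then softmax. Lower temp -> sharper distribution."""
    scaled = [x / temp for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.6)    # thinking mode
plain = softmax_with_temperature(logits, 1.0)    # chat
print(sharp[0] > plain[0])  # True: the top token gets more mass at temp 0.6
```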


About JANG

JANG (Jang Adaptive N-bit Grading) is a mixed-precision quantization format for Apple Silicon — the GGUF equivalent for MLX.
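"Mixed precision" means each tensor group gets its own bit-width (8 for attention, 6 for "important" layers, 2 for experts in the JANG_2L profile). Below is a textbook sketch of generic affine n-bit quantization at those three bit-widths — an illustration of the idea, not the actual JANG file format:

```python
import numpy as np

def quantize(w, bits):
    """Map float weights onto 2**bits - 1 evenly spaced levels."""
    levels = 2**bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels
    q = np.round((w - lo) / scale).astype(np.uint8)  # bits <= 8 here
    return q, scale, lo

def dequantize(q, scale, lo):
    """Reconstruct approximate float weights from integer codes."""
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
for bits in (8, 6, 2):
    q, scale, lo = quantize(w, bits)
    err = float(np.abs(dequantize(q, scale, lo) - w).max())
    print(bits, round(err, 3))  # reconstruction error grows as bits shrink
```

The trade-off JANG_2L makes is visible here: 2-bit storage is extremely lossy per tensor, which is why the profile reserves it for the (numerous, individually less critical) expert weights and keeps attention at 8 bits.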

About CRACK

CRACK (Controlled Refusal Ablation via Calibrated Knockouts) removes safety alignment from LLMs at the weight level, using per-layer projected vectors derived from structurally mirrored prompt pairs.
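CRACK's exact calibration is not public, but it belongs to the "abliteration" family, whose core operation is directional ablation: estimate a refusal direction v from contrasting prompt pairs, then project it out of a weight matrix so the layer can no longer write along v. A minimal sketch of that general operation (not CRACK itself):

```python
import numpy as np

def ablate(W, v):
    """Remove the component of W's output space along direction v:
    W' = W - u u^T W, with u = v / |v|. For any input x, W' x is
    orthogonal to v, so the layer cannot emit the 'refusal' direction."""
    u = v / np.linalg.norm(v)
    return W - np.outer(u, u) @ W

rng = np.random.default_rng(0)
d = 16
W = rng.standard_normal((d, d))   # stand-in for one layer's weight matrix
v = rng.standard_normal(d)        # stand-in for an estimated refusal direction

W_ablated = ablate(W, v)
# Every column of the ablated matrix has (numerically) zero component along v:
print(round(float(np.abs(v @ W_ablated).max()), 10))  # 0.0
```

The "permanent weight-level" claim follows from this construction: the projection is baked into the stored weights, so no prompt or system message can restore the removed direction at inference time.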


Links

Ko-fi X/Twitter GitHub MLX Studio Website


Disclaimer

This model is provided for research and educational purposes. The creators are not responsible for any misuse. By downloading this model, you agree to use it responsibly and in compliance with applicable laws.


Korean Summary (한국어)

Nemotron 3 Super 120B — JANG_2L + CRACK

| Item | Detail |
|---|---|
| Size | 43 GB |
| HarmBench | 96.2% (308/320) |
| MMLU | 95.7% (199/208) |
| Speed | 45 tok/s (M3 Ultra) |
| Minimum requirement | Mac with 64 GB+ memory |

`pip install "jang[mlx]"`

GitHub · HuggingFace · MLX Studio · Ko-fi · X @dealignai


Created by Jinho Jang
