Qwen3-30B-A3B-YOYO-V2-Claude-4.6-Opus-High-INSTRUCT-qx86-hi-mlx

Brainwaves

Qwen3-30B-A3B-YOYO-V2-Claude-4.6-Opus-High-INSTRUCT
quant     arc    arc/e  boolq  hswag  obkqa  piqa   wino
qx86-hi   0.545  0.717  0.877  0.717  0.440  0.789  0.653
qx64-hi   0.551  0.726  0.872  0.706  0.444  0.791  0.660
mxfp4     0.530  0.685  0.872  0.705  0.408  0.785  0.642

Qwen3-30B-A3B-YOYO-V2
quant     arc    arc/e  boolq  hswag  obkqa  piqa   wino
q8-hi     0.529  0.688  0.885  0.685  0.442  0.783  0.642
qx86-hi   0.531  0.690  0.885  0.685  0.448  0.785  0.646
q6        0.532  0.685  0.886  0.683  0.456  0.782  0.639
mxfp4     0.503  0.636  0.880  0.689  0.428  0.780  0.635

nightmedia/Qwen3-30B-A3B-Element7-1M
quant     arc    arc/e  boolq  hswag  obkqa  piqa   wino
qx86-hi   0.578  0.750  0.883  0.742  0.478  0.804  0.684
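To read the tables above at a glance, the seven task scores can be averaged per quantization. A minimal sketch (scores transcribed from the first table; averaging is a summary choice of this card, not an official metric):

```python
# Benchmark scores transcribed from the table above, in column order:
# arc, arc/e, boolq, hswag, obkqa, piqa, wino
scores = {
    "qx86-hi": [0.545, 0.717, 0.877, 0.717, 0.440, 0.789, 0.653],
    "qx64-hi": [0.551, 0.726, 0.872, 0.706, 0.444, 0.791, 0.660],
    "mxfp4":   [0.530, 0.685, 0.872, 0.705, 0.408, 0.785, 0.642],
}

# Mean score per quantization, highest first
means = {quant: sum(vals) / len(vals) for quant, vals in scores.items()}
for quant, mean in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{quant:8s} {mean:.3f}")
```

On these numbers, qx64-hi and qx86-hi land within a fraction of a point of each other, with mxfp4 clearly behind on the average.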

Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("nightmedia/Qwen3-30B-A3B-YOYO-V2-Claude-4.6-Opus-High-INSTRUCT-qx86-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Model size: 31B params
Tensor types: BF16, U32
Format: MLX, 8-bit
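As a rough back-of-the-envelope check of the memory footprint, assuming (not stated above) that a qx86-style quantization keeps most weights at 6 bits with 8 bits for more sensitive layers, the weight storage for 31B parameters can be estimated as:

```python
# Hypothetical bit-width mix for a qx86-style quantization:
# ~80% of weights at 6 bits, ~20% at 8 bits (illustrative assumption only,
# ignoring group-quantization scales and activation memory).
params = 31e9
avg_bits = 0.8 * 6 + 0.2 * 8          # 6.4 bits per weight on average
gib = params * avg_bits / 8 / 2**30   # bytes -> GiB
print(f"~{gib:.0f} GiB for weights alone")
```

The actual on-disk size of the safetensors shards is the authoritative number; this sketch only shows why a machine with roughly 32 GB of unified memory is a sensible floor for this model.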
