Llama-3.3-70B-Instruct – 46 GB (MLX)

A mixed-precision quantized version of meta-llama/Llama-3.3-70B-Instruct, optimised by baa.ai using a proprietary Black Sheep AI method.

Bit-widths are allocated per tensor via sensitivity analysis and budget-constrained optimisation; no calibration data is required.
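
The exact method is proprietary, but the general shape of budget-constrained bit allocation can be sketched. The snippet below is illustrative only: the tensor names, sensitivity scores, bit-width levels, and greedy strategy are all assumptions for the sketch, not baa.ai's algorithm.

```python
# Illustrative sketch: greedy per-tensor bit allocation under a total
# bit budget. Sensitivities and the greedy rule are assumptions; the
# actual Black Sheep AI method is proprietary and not described here.
import heapq

def allocate_bits(tensors, budget_bits, levels=(3, 4, 5, 6, 8)):
    """tensors: dict name -> (num_params, sensitivity_score).
    Start every tensor at the lowest precision, then repeatedly upgrade
    the tensor with the best sensitivity-per-parameter payoff until the
    total bit budget is exhausted."""
    alloc = {name: levels[0] for name in tensors}
    spent = sum(n * levels[0] for n, _ in tensors.values())
    # Max-heap keyed by sensitivity per parameter (a rough proxy for
    # accuracy gained per extra bit of storage).
    heap = [(-sens / n, name) for name, (n, sens) in tensors.items()]
    heapq.heapify(heap)
    while heap:
        _, name = heapq.heappop(heap)
        n, sens = tensors[name]
        i = levels.index(alloc[name])
        if i + 1 == len(levels):
            continue  # already at the highest precision
        cost = n * (levels[i + 1] - levels[i])
        if spent + cost > budget_bits:
            continue  # this upgrade no longer fits in the budget
        alloc[name] = levels[i + 1]
        spent += cost
        heapq.heappush(heap, (-sens / n, name))  # may upgrade again

    return alloc

# Hypothetical tensors, targeting an average of 5.6 bits per weight.
weights = {"layers.0.attn.q_proj": (1e8, 9.0),
           "layers.0.mlp.gate":    (2e8, 2.5)}
print(allocate_bits(weights, budget_bits=5.6 * 3e8))
```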

Metrics

| Metric                   | Value  |
|--------------------------|--------|
| Size                     | 46 GB  |
| Average bits per weight  | 5.6    |
| WikiText-2 PPL (median)  | 4.3544 |
| MMLU (relative to BF16)  | 94.8%  |
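
As a rough sanity check (assuming the reported 71B parameter count and that quantized weights dominate the file size), the average bit-width is consistent with the reported size:

```python
# Back-of-the-envelope size check: 71B params at 5.6 bits each.
# Assumes weights dominate the file; quantization scales and
# metadata overhead are ignored.
params = 71e9
avg_bits = 5.6
size_gib = params * avg_bits / 8 / 2**30
print(f"~{size_gib:.1f} GiB")  # ~46.3 GiB, in line with the reported 46 GB
```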

Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("baa-ai/Llama-3.3-70B-Instruct-RAM-50GB-MLX")
response = generate(model, tokenizer, prompt="Hello!", max_tokens=256)
print(response)
```
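
For instruction-following use, the model expects Llama 3.3's chat template. A minimal sketch, assuming the tokenizer exposes the standard Hugging Face `apply_chat_template` interface (the message content is just an example):

```python
# Chat-style generation via the model's chat template.
messages = [{"role": "user", "content": "Summarise MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```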

Quantized by baa.ai
