# ailexleon/Assistant_Pepe_70B-mlx-4Bit
This model was converted to MLX format from SicariusSicariiStuff/Assistant_Pepe_70B using mlx-lm version 0.31.1.
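If you want to reproduce the conversion locally, mlx-lm ships a conversion utility. The command below is a sketch assuming the CLI flags in recent mlx-lm releases; check `mlx_lm.convert --help` for your installed version.

```bash
# Convert the original weights to MLX and quantize to 4-bit.
mlx_lm.convert \
    --hf-path SicariusSicariiStuff/Assistant_Pepe_70B \
    -q --q-bits 4
```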
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (or load from cache) the quantized model and its tokenizer.
model, tokenizer = load("ailexleon/Assistant_Pepe_70B-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
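For interactive use you may prefer streaming output. The sketch below assumes the `stream_generate` API available in recent mlx-lm releases, where each yielded item exposes the newly generated text via `.text`:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("ailexleon/Assistant_Pepe_70B-mlx-4Bit")

# Build a chat-formatted prompt, as in the example above.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print text as it is produced instead of waiting for the full response.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```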
## Model details

- Model size: 71B params
- Tensor types: F16, U32
- Quantization: 4-bit
## Model tree for ailexleon/Assistant_Pepe_70B-mlx-4Bit

- Base model: meta-llama/Llama-3.1-70B
- Finetuned: meta-llama/Llama-3.1-70B-Instruct
- Finetuned: SicariusSicariiStuff/Assistant_Pepe_70B (converted to MLX 4-bit as this model)