Dataset: HuggingFaceH4/CodeAlpaca_20K
How to use mrm8488/mamba-coder with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="mrm8488/mamba-coder")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("mrm8488/mamba-coder", dtype="auto")

How to use mrm8488/mamba-coder with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "mrm8488/mamba-coder"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mrm8488/mamba-coder",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
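The same request can be made from Python. As a minimal sketch (the helper names and the use of the standard library `urllib` are illustrative, not part of vLLM; any OpenAI-compatible client would also work), the payload simply mirrors the curl command above:

```python
import json
from urllib import request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload,
    mirroring the curl example above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, payload: dict) -> dict:
    """POST the payload to a running server and return the parsed JSON."""
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_payload("mrm8488/mamba-coder", "What is the capital of France?")
# chat("http://localhost:8000", payload) once the vLLM server is running
```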
How to use mrm8488/mamba-coder with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "mrm8488/mamba-coder" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mrm8488/mamba-coder",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "mrm8488/mamba-coder" \
--host 0.0.0.0 \
--port 30000
How to use mrm8488/mamba-coder with Docker Model Runner:
docker model run hf.co/mrm8488/mamba-coder
Mamba is a new state space model architecture showing promising performance on information-dense data such as language modeling, where previous subquadratic models fall short of Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of FlashAttention.
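As a rough illustration of the state-space recurrence this family of models builds on, here is a toy, linear time-invariant scan in plain Python. This is only a sketch: Mamba itself makes the A, B, and C parameters input-dependent ("selective") and relies on a hardware-aware parallel scan, neither of which this loop captures.

```python
def ssm_scan(A, B, C, xs):
    """Minimal 1-D state space recurrence:
        h_t = A * h_{t-1} + B * x_t
        y_t = C * h_t
    A, B, C are scalars here; real models use matrices per channel."""
    h = 0.0
    ys = []
    for x in xs:
        h = A * h + B * x   # state update: decay old state, mix in input
        ys.append(C * h)    # readout
    return ys

ys = ssm_scan(A=0.5, B=1.0, C=2.0, xs=[1.0, 0.0, 0.0])
# An impulse input decays geometrically through the state: [2.0, 1.0, 0.5]
```

Because the state `h` is a fixed-size summary of the past, inference cost per token is constant, which is the source of Mamba's subquadratic scaling.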
CodeAlpaca_20K: contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
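As a hedged sketch of what one such record might look like once rendered into a training prompt (the field names and the Alpaca-style template below are illustrative assumptions, not the dataset's documented schema):

```python
def render_alpaca_prompt(record: dict) -> str:
    """Render a CodeAlpaca-style record into an instruction-tuning prompt.
    Field names ('instruction', 'input', 'output') are assumed here."""
    prompt = f"### Instruction:\n{record['instruction']}\n"
    extra = record.get("input", "")
    if extra:  # optional context, omitted when empty
        prompt += f"### Input:\n{extra}\n"
    prompt += f"### Response:\n{record['output']}"
    return prompt

example = {
    "instruction": "Write a Python one-liner that reverses a string.",
    "input": "",
    "output": "s[::-1]",
}
text = render_alpaca_prompt(example)
```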
pip install torch==2.1.0 transformers==4.35.0 causal-conv1d==1.0.0 mamba-ssm==1.0.1
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

CHAT_TEMPLATE_ID = "HuggingFaceH4/zephyr-7b-beta"

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "mrm8488/mamba-coder"
eos_token = "<|endoftext|>"

# The tokenizer ships without a chat template, so borrow Zephyr's
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.eos_token = eos_token
tokenizer.pad_token = tokenizer.eos_token
tokenizer.chat_template = AutoTokenizer.from_pretrained(CHAT_TEMPLATE_ID).chat_template

model = MambaLMHeadModel.from_pretrained(
    model_name, device=device, dtype=torch.float16
)

messages = []
prompt = "Write a bash script to remove .tmp files"
messages.append(dict(role="user", content=prompt))

input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(device)

out = model.generate(
    input_ids=input_ids,
    max_length=2000,
    temperature=0.9,
    top_p=0.7,
    eos_token_id=tokenizer.eos_token_id,
)

decoded = tokenizer.batch_decode(out)
# Keep only the assistant's reply and strip the EOS token
assistant_message = decoded[0].split("<|assistant|>\n")[-1].replace(eos_token, "")
print(assistant_message)
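The reply-extraction step above can be factored into a small helper, which also makes multi-turn chat straightforward: append the extracted reply back onto `messages` as an assistant turn before the next generation. A sketch (the `extract_reply` name and the sample transcript are illustrative):

```python
def extract_reply(decoded_text: str, eos_token: str = "<|endoftext|>") -> str:
    """Pull the assistant's final turn out of a decoded transcript
    that uses Zephyr-style <|assistant|> markers."""
    return decoded_text.split("<|assistant|>\n")[-1].replace(eos_token, "").strip()

# Example transcript in the borrowed Zephyr chat format
transcript = (
    "<|user|>\nWrite a bash script to remove .tmp files<|endoftext|>\n"
    "<|assistant|>\nfind . -name '*.tmp' -delete<|endoftext|>"
)
reply = extract_reply(transcript)
# For multi-turn chat: messages.append(dict(role="assistant", content=reply))
```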
git clone https://github.com/mrm8488/mamba-chat.git
cd mamba-chat
pip install -r requirements.txt
pip install -q gradio==4.8.0
python app.py \
--model mrm8488/mamba-coder \
--share
Coming soon!
@misc{manuel_romero_2024,
  author    = {Manuel Romero},
  title     = {mamba-coder (Revision 214a13a)},
  year      = 2024,
  url       = {https://huggingface.co/mrm8488/mamba-coder},
  doi       = {10.57967/hf/1673},
  publisher = {Hugging Face}
}
Thanks to mamba-chat for heavily inspiring our work.