# Burmese Coder 4B - MLX (4-bit)
Burmese Coder 4B is an instruction-tuned model based on Gemma 3 4B, specifically fine-tuned to assist with programming tasks in the Burmese language. This repository contains the model in the native MLX format, optimized for Apple Silicon (M1/M2/M3/M4).
## 🚀 Quick Start: LM Studio (Recommended)
This model is fully compatible with LM Studio for a seamless, local experience on macOS.
### 📦 Setup Instructions
1. **Download the model:**
   - In LM Studio, go to the **Search** tab.
   - Enter `WYNN747/burmese-coder-4b-mlx` and click **Download**.
   - Alternatively, you can use the **Local Folder** import feature for this specific directory.
2. **Configure the inference engine:**
   - Ensure you are using the LM Studio **MLX** backend (found in Settings > Engines & Frameworks).
   - If you see a "Missing Library" error, click the **Fix** button in the MLX settings.
Download LM Studio: https://lmstudio.ai/download
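Once the model is loaded, LM Studio can also serve it over its OpenAI-compatible local server (started from the Developer tab; the default base URL is `http://localhost:1234/v1`). Below is a minimal sketch of a chat-completions request body; the model identifier is an assumption, so use whatever name LM Studio displays for your loaded copy:

```python
import json

# Sketch of a request body for LM Studio's OpenAI-compatible endpoint
# (POST /v1/chat/completions). The model name below is an assumption:
# use the identifier LM Studio shows for the loaded model.
payload = {
    "model": "WYNN747/burmese-coder-4b-mlx",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "temperature": 0.2,
}
print(json.dumps(payload, indent=2))

# With the local server running, send it with e.g.:
#   curl http://localhost:1234/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d '<the JSON printed above>'
```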
## 📊 Model Performance
- Quantization: 4-bit affine (quantized via `mlx-lm`)
- VRAM Usage: ~2.3 GB
- Inference Speed: ~60 tokens/sec (M-series Pro/Max chips)
- Primary Focus: Python, Burmese language instruction-following, and natural language explanations of code.
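The ~2.3 GB VRAM figure is consistent with a back-of-envelope estimate: 4 billion parameters at 4 bits each come to roughly 2 GB of weights, with the remainder going to quantization scales, the KV cache, and activations. A quick sanity check, assuming an even 4B parameter count:

```python
# Rough weight-memory estimate for a 4B-parameter model at 4-bit precision.
# Group-wise affine quantization also stores per-group scales/biases, and the
# KV cache grows with context length, so real usage is somewhat higher.
params = 4e9
bits_per_param = 4
weight_gb = params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
print(f"weights alone: ~{weight_gb:.1f} GB")   # ~2.0 GB
```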
## 🐍 Python Usage (MLX)
For programmatic access, install mlx-lm:
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model from the Hub
model, tokenizer = load("WYNN747/burmese-coder-4b-mlx")

messages = [
    # "How do I sort a list in Python?" (in Burmese)
    {"role": "user", "content": "Python မှာ list တစ်ခုကို ဘယ်လို sort လုပ်ရမလဲ?"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```
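The example prompt asks, in Burmese, how to sort a list in Python; for reference, the kind of answer the model is expected to produce looks like:

```python
# Two standard ways to sort a list in Python.
numbers = [3, 1, 2]

print(sorted(numbers))  # sorted() returns a new sorted list: [1, 2, 3]

numbers.sort(reverse=True)  # .sort() sorts in place, here descending
print(numbers)              # [3, 2, 1]
```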
## 📝 Training Details
- Base Model: `unsloth/gemma-3-4b-it`
- Adapter: `WYNN747/burmese-coder-4b` (LoRA merged)
- Dataset: Custom Burmese MBPP and HumanEval-translated datasets.
- Conversion: Merged and quantized using the MLX framework for maximum performance on macOS.
## 📜 License
This model is released under the Apache 2.0 license.