---
language:
- en
pipeline_tag: text-generation
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
---

# Phi-3-mini-128k-instruct-quantized.w8a16

## Model Overview
- **Model Architecture:** Phi-3
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. As with [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/11/2024
- **Version:** 1.0
- **License(s):** [MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md)
- **Model Developers:** Neural Magic

Quantized version of [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), a 3.8 billion-parameter open model trained using the Phi-3 datasets.
It achieves an average score of 69.53 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 69.59.

### Model Optimizations

This model was obtained by quantizing the weights of [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%.
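
For a rough sense of the savings, here is a back-of-the-envelope sketch. It uses the 3.8 billion parameter count from the model description above and ignores layers left in full precision, activations, and the KV cache:

```python
# Approximate weight storage before and after quantization.
# Assumes ~3.8B parameters; ignores unquantized layers, activations,
# and the KV cache, so these are ballpark figures only.
num_params = 3.8e9

fp16_gb = num_params * 2 / 1e9  # 16 bits = 2 bytes per parameter
int8_gb = num_params * 1 / 1e9  # 8 bits = 1 byte per parameter

print(f"FP16 weights: ~{fp16_gb:.1f} GB")  # ~7.6 GB
print(f"INT8 weights: ~{int8_gb:.1f} GB")  # ~3.8 GB
```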

Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the INT8 and floating-point representations of the quantized weights.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, using a 1% damping factor and 256 calibration sequences of 8,192 random tokens each.
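
For intuition, the sketch below shows what symmetric per-channel INT8 quantization looks like in isolation. It is a minimal illustration using naive rounding, not the GPTQ algorithm itself, which additionally uses calibration data to compensate for the error introduced at each rounding step:

```python
import torch

def quantize_per_channel_symmetric(weight: torch.Tensor):
    """Quantize a linear layer's weight matrix to INT8 with one scale
    per output channel (i.e., per row of the weight matrix)."""
    max_abs = weight.abs().amax(dim=1, keepdim=True)  # per-channel max magnitude
    scale = max_abs / 127.0                           # linear scaling per output dim
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # The floating-point representation is recovered as scale * q.
    return q.float() * scale

w = torch.randn(4096, 4096)
q, scale = quantize_per_channel_symmetric(w)
error = (w - dequantize(q, scale)).abs().max().item()
print(f"max round-off error: {error:.5f}")
```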

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Phi-3-mini-128k-instruct-quantized.w8a16"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat template to a plain-text prompt for vLLM.
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, trust_remote_code=True, max_model_len=8192, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
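
For example, assuming a server started with vLLM's OpenAI-compatible entrypoint on the default port (the launch command and flags below are illustrative; check the vLLM documentation for your version):

```python
# Launch the server first, e.g.:
#   python -m vllm.entrypoints.openai.api_server \
#       --model neuralmagic/Phi-3-mini-128k-instruct-quantized.w8a16 \
#       --trust-remote-code
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on port 8000 by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Phi-3-mini-128k-instruct-quantized.w8a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```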

### Use with transformers

The following example shows how the model can be deployed with Transformers using the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Phi-3-mini-128k-instruct-quantized.w8a16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as shown in the code snippet below.

```python
from transformers import AutoTokenizer
from datasets import Dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import random

model_id = "microsoft/Phi-3-mini-128k-instruct"

num_samples = 256
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Build a calibration set of random token sequences.
max_token_id = len(tokenizer.get_vocab()) - 1
input_ids = [[random.randint(0, max_token_id) for _ in range(max_seq_len)] for _ in range(num_samples)]
attention_mask = num_samples * [max_seq_len * [1]]
ds = Dataset.from_dict({"input_ids": input_ids, "attention_mask": attention_mask})

# Quantize the weights of all Linear modules to INT8, leaving lm_head in full precision.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A16",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
    tokenizer=tokenizer,
)

model.save_pretrained("Phi-3-mini-128k-instruct-quantized.w8a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Phi-3-mini-128k-instruct-quantized.w8a16",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Phi-3-mini-128k-instruct | Phi-3-mini-128k-instruct-quantized.w8a16 (this model) | Recovery |
| --- | --- | --- | --- |
| MMLU (5-shot) | 69.36 | 69.33 | 99.9% |
| ARC Challenge (25-shot) | 63.23 | 63.23 | 100.0% |
| GSM-8K (5-shot, strict-match) | 76.65 | 76.19 | 99.4% |
| Hellaswag (10-shot) | 79.64 | 79.52 | 99.8% |
| Winogrande (5-shot) | 74.27 | 74.35 | 100.1% |
| TruthfulQA (0-shot) | 54.42 | 54.19 | 99.6% |
| **Average** | **69.59** | **69.53** | **99.9%** |
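
The recovery column is simply the quantized score expressed as a percentage of the unquantized baseline, e.g.:

```python
# Recovery = quantized score / baseline score, as a percentage.
def recovery(quantized: float, baseline: float) -> float:
    return 100 * quantized / baseline

print(f"GSM-8K:  {recovery(76.19, 76.65):.1f}%")  # 99.4%
print(f"Average: {recovery(69.53, 69.59):.1f}%")  # 99.9%
```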