---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast Inference with CTranslate2

Speed up inference while reducing memory usage by 2x-4x using int8 inference in C++ on CPU or GPU. (float32 weights take 4 bytes per parameter and int8 takes 1, so weight memory shrinks by up to 4x relative to float32 and 2x relative to float16.)

Quantized version of [Salesforce/codegen-350M-multi](https://huggingface.co/Salesforce/codegen-350M-multi).

```bash
pip install "hf-hub-ctranslate2>=2.0.8"
```

Converted on 2023-05-21 using

```bash
ct2-transformers-converter --model Salesforce/codegen-350M-multi --output_dir /home/michael/tmp-ct2fast-codegen-350M-multi --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
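
If you run the converter yourself, the resulting directory can also be loaded directly with the plain `ctranslate2` API instead of `hf-hub-ctranslate2`. A minimal sketch, assuming a local output directory named `tmp-ct2fast-codegen-350M-multi` (the directory name and prompt are illustrative):

```python
# Sketch: load a locally converted CTranslate2 model directly.
# The output directory name is illustrative; adjust to your --output_dir.
import ctranslate2
import transformers

generator = ctranslate2.Generator(
    "tmp-ct2fast-codegen-350M-multi", device="cpu", compute_type="int8"
)
tokenizer = transformers.AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")

# CTranslate2 generates from start tokens, not raw text
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("def print_hello_world():"))
results = generator.generate_batch([tokens], max_length=64)
print(tokenizer.decode(results[0].sequences_ids[0]))
```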

Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-codegen-350M-multi"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model
model = GeneratorCT2fromHfHub(
    # load quantized weights (int8 with float16 activations) on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    # tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
)
outputs = model.generate(
    text=["def print_hello_world():", "def hello_name(name:"],
    max_length=64,
)
print(outputs)
```
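
For CPU-only environments, the same loader works with the `int8` compute type from the list above; a minimal sketch under the same API (the prompt is illustrative):

```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# same loader as above, but int8 weights on CPU instead of int8_float16 on CUDA
model_cpu = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-codegen-350M-multi",
    device="cpu",
    compute_type="int8",
)
print(model_cpu.generate(text=["def fibonacci(n):"], max_length=64))
```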

# License and other remarks:

This is just a quantized version. License conditions are intended to be identical to those of the original Hugging Face repository.

# Original description
# CodeGen (CodeGen-Multi 350M)

## Model description

CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).

The checkpoint included in this repository is denoted as **CodeGen-Multi 350M** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.

## Training data

This checkpoint (CodeGen-Multi 350M) was first initialized with *CodeGen-NL 350M* and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.

## Training procedure

CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
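
For reference, this is the standard autoregressive cross-entropy objective in its usual formulation (the paper's exact notation may differ):

```latex
% Standard next-token cross-entropy loss: the negative log-likelihood of
% each token x_t given its left context x_{<t}, summed over the sequence.
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{<t}\right)
```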

## Evaluation results

We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.

## Intended Use and Limitations

As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for, and best at, **program synthesis**: generating executable code given English prompts, where the prompts should be given in the form of a comment string (see the illustrative prompt below). The model can complete partially generated code as well.
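
For example, a comment-string prompt might look like the following (an illustrative prompt, not taken from the paper):

```python
# An illustrative comment-string prompt; the model is expected to
# complete it with executable code.
text = "# write a function that returns the factorial of n\ndef"
```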

## How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# load the original (non-quantized) checkpoint from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# greedy decoding up to 128 tokens, then decode back to text
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

## BibTeX entry and citation info

```bibtex
@article{Nijkamp2022ACP,
  title={A Conversational Paradigm for Program Synthesis},
  author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
  journal={arXiv preprint},
  year={2022}
}
```