---
language:
- my
- en
license: mit
task_categories:
- text-generation
tags:
- code
- burmese
- human-eval
- programming
- evaluation
pretty_name: Burmese HumanEval
size_categories:
- n<1K
---

# Burmese HumanEval: A Benchmark for Evaluating Burmese Coding Assistants

## Dataset Description

- **Homepage:** [GitHub - burmese-coding-eval](https://github.com/WaiYanNyeinNaing/burmese-coding-eval)
- **Paper:** Evaluating Large Language Models Trained on Code (Chen et al., 2021)
- **Language(s):** Burmese (my), English (en), Python
- **License:** MIT

### Dataset Summary

The **Burmese HumanEval** dataset is a translation and adaptation of the original OpenAI HumanEval benchmark, tailored to evaluating the coding abilities of Large Language Models (LLMs) in the Burmese language. It consists of 100 programming problems designed to test functional correctness, language understanding, and problem-solving skills in Burmese contexts.

The dataset pairs with the **Burmese Coding Evaluation Suite**, a four-track evaluation methodology (Functional, Rubric-based, Linguistic, and Statistical) developed to robustly score Burmese programming assistants.

### Supported Tasks and Leaderboards

- `text-generation`: This dataset evaluates code generation given a Burmese natural-language instruction or docstring. Models are typically scored with the pass@k metric (functional correctness) alongside qualitative LLM-as-a-judge scoring.

### Languages

The natural-language instructions and docstrings are in Burmese (`my`); the code itself is written in Python.

## Dataset Structure

### Data Instances

Each instance in the dataset represents a single programming task.
An example instance looks like this:

```json
{
  "task_id": "HumanEval/0",
  "prompt": "```burmese\nfrom typing import List\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n    \"\"\" ပေးထားသော ဂဏန်းများစာရင်း (list) ထဲတွင် မည်သည့် ဂဏန်းနှစ်ခုမဆို အချင်းချင်း အကွာအဝေးသည် သတ်မှတ်ထားသော threshold ထက် နည်းပါသလားဆိုတာ စစ်ဆေးပါ။ ...",
  "test": "\n\nMETADATA = {\n    'author': 'jt',\n    'dataset': 'test'\n}\n\n\ndef check(candidate):\n    assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n..."
}
```

### Data Fields

- `task_id`: A string identifying the task, mapping to the original HumanEval ID.
- `prompt`: A string containing the function signature and a docstring describing the task in Burmese.
- `test`: A string containing the test cases (a `check` function) used to functionally evaluate the generated code.

## Dataset Creation

### Curation Rationale

Standardized coding-evaluation benchmarks for the Burmese language are scarce. To address this gap, we translated and adapted the canonical OpenAI HumanEval benchmark, enabling standardized functional testing of Burmese coding models.

### Source Data

#### Initial Data Collection and Normalization

The original English instructions from the OpenAI HumanEval dataset were translated, localized, and technically normalized into formal Burmese by domain experts.

#### Who are the source language producers?

The original dataset instances were authored by the OpenAI Codex team. The Burmese translations and adaptations were produced by Dr. Wai Yan Nyein Naing and contributors to the `burmese-coding-eval` repository.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset aims to democratize access to AI coding tools for Burmese speakers, encouraging the development of robust, multilingual code generation models that can reliably serve developers in Myanmar and beyond.
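To illustrate how the `test` field and the pass@k metric described above fit together, here is a minimal harness sketch. It is an assumption for illustration, not the official Burmese Coding Evaluation Suite: `run_check` is a hypothetical helper, and the pass@k estimator follows the unbiased formula from Chen et al. (2021).

```python
import math


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021):
    1 - C(n-c, k) / C(n, k), given n samples of which c passed."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)


def run_check(candidate_src: str, test_src: str, entry_point: str) -> bool:
    """Run the dataset's `check` function against a generated solution.
    NOTE: executing model-generated code should be sandboxed in practice."""
    env: dict = {}
    try:
        exec(candidate_src, env)        # define the candidate function
        exec(test_src, env)             # define check() (and METADATA)
        env["check"](env[entry_point])  # raises AssertionError on failure
        return True
    except Exception:
        return False
```

For example, scoring a model that produced 3 passing samples out of 10 gives `pass_at_k(10, 3, 1) == 0.3`; `run_check` returns `True` only when every assertion in the instance's `check` function holds.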
## Additional Information

### Dataset Curators

Dr. Wai Yan Nyein Naing (waiyan.nn18@gmail.com)

### Licensing Information

This dataset is released under the MIT License, following the licensing of the original OpenAI HumanEval dataset.

### Citation Information

If you use this benchmark dataset or the associated evaluation methodology, please cite both this repository and the original OpenAI HumanEval paper:

```bibtex
@misc{burmese_coding_eval_2026,
  author       = {Dr. Wai Yan Nyein Naing},
  title        = {Burmese Coding Evaluation Benchmark},
  year         = {2026},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/WaiYanNyeinNaing/burmese-coding-eval}}
}

@article{chen2021codex,
  title         = {Evaluating Large Language Models Trained on Code},
  author        = {Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotis Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
  year          = {2021},
  eprint        = {2107.03374},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```