Commit 1388ac5 (verified, parent eda31f9) — WYNN747: Upload README.md with huggingface_hub
---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: prompt
    dtype: string
  - name: test
    dtype: string
  splits:
  - name: test
    num_bytes: 131183
    num_examples: 100
  download_size: 46827
  dataset_size: 131183
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
language:
- my
- en
license: mit
task_categories:
- text-generation
tags:
- code
- burmese
- human-eval
- programming
- evaluation
pretty_name: Burmese HumanEval
size_categories:
- n<1K
---

# Burmese HumanEval: A Benchmark for Evaluating Burmese Coding Assistants

## Dataset Description

- **Homepage:** [GitHub - burmese-coding-eval](https://github.com/WaiYanNyeinNaing/burmese-coding-eval)
- **Paper:** Evaluating Large Language Models Trained on Code (Chen et al., 2021)
- **Language(s):** Burmese (my), English (en), Python
- **License:** MIT
### Dataset Summary

The **Burmese HumanEval** dataset is a translation and expansion of the original OpenAI HumanEval benchmark, tailored to evaluating the coding abilities of large language models (LLMs) in Burmese. It consists of 100 programming problems designed to test functional correctness, language understanding, and problem-solving skills in Burmese contexts.

This dataset pairs with the **Burmese Coding Evaluation Suite**, a 4-track evaluation methodology (Functional, Rubric-based, Linguistic, and Statistical) developed to score Burmese programming assistants robustly.
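As a rough illustration of how four per-track scores might be folded into a single report, here is a hypothetical weighted aggregate. The track names come from the suite described above, but the weights and the weighted-mean scheme are illustrative assumptions, not the suite's actual formula.

```python
# Hypothetical aggregation of the four evaluation tracks. Track names follow
# the Burmese Coding Evaluation Suite; the weights and the weighted-mean
# scheme are illustrative assumptions only.
TRACK_WEIGHTS = {
    "functional": 0.40,   # pass/fail test execution
    "rubric": 0.30,       # LLM-as-a-judge rubric score
    "linguistic": 0.20,   # quality of the Burmese text handling
    "statistical": 0.10,  # corpus-level statistics
}

def aggregate_score(track_scores: dict) -> float:
    """Weighted mean of per-track scores, each normalized to [0, 1]."""
    missing = set(TRACK_WEIGHTS) - set(track_scores)
    if missing:
        raise ValueError(f"missing tracks: {sorted(missing)}")
    return sum(TRACK_WEIGHTS[t] * track_scores[t] for t in TRACK_WEIGHTS)

score = aggregate_score(
    {"functional": 0.8, "rubric": 0.7, "linguistic": 0.9, "statistical": 0.6}
)
print(f"{score:.2f}")  # 0.77
```

Whatever the real weighting, keeping each track normalized to [0, 1] makes per-track results comparable across models.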
### Supported Tasks and Leaderboards

- `text-generation`: The dataset evaluates code generation from a Burmese natural-language instruction or docstring. Models are typically evaluated with the pass@k metric (functional correctness) alongside qualitative LLM-as-a-judge scoring.
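The pass@k metric mentioned above is usually computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021). A minimal sketch, where `n` is the number of samples generated per task and `c` the number that pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations passes,
    given that c of the n generations pass."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 generations per task, 3 of which pass the check() tests
print(round(pass_at_k(10, 3, 1), 4))  # 0.3
```

The benchmark-level score is then the mean of this estimate over all 100 tasks.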
### Languages

The natural-language instructions and docstrings are in Burmese (`my`); the code itself is Python, whose keywords and identifiers are English (hence the `en` tag in the metadata).
## Dataset Structure

### Data Instances

Each instance in the dataset represents a single programming task. An example instance looks like this:

```json
{
  "task_id": "HumanEval/0",
  "prompt": "```burmese\nfrom typing import List\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" ပေးထားသော ဂဏန်းများစာရင်း (list) ထဲတွင် မည်သည့် ဂဏန်းနှစ်ခုမဆို အချင်းချင်း အကွာအဝေးသည် သတ်မှတ်ထားသော threshold ထက် နည်းပါသလားဆိုတာ စစ်ဆေးပါ။ ...",
  "test": "\n\nMETADATA = {\n 'author': 'jt',\n 'dataset': 'test'\n}\n\n\ndef check(candidate):\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n..."
}
```

(The Burmese docstring corresponds to the original HumanEval/0 prompt: "Check if in the given list of numbers, any two numbers are closer to each other than the given threshold.")
### Data Fields

* `task_id`: A string identifying the task, mapping back to the original HumanEval ID.
* `prompt`: A string containing the function signature and a docstring describing the task in Burmese.
* `test`: A string containing the test cases (a `check` function) used to evaluate the generated code for functional correctness.
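To make the role of the `test` field concrete, here is a minimal sketch of a functional check: it executes a candidate completion together with the `check` function from `test` and reports whether every assertion passes. The helper name and the abridged inline strings are illustrative; a real harness should also sandbox the `exec` calls and enforce timeouts, as the original HumanEval evaluation does.

```python
def run_functional_check(solution: str, test_code: str, entry_point: str) -> bool:
    """Execute a candidate solution plus the dataset's `test` string;
    return True iff check() raises nothing. NOTE: exec() of model
    output is unsafe outside a sandbox."""
    env: dict = {}
    try:
        exec(solution, env)             # defines the candidate function
        exec(test_code, env)            # defines check()
        env["check"](env[entry_point])  # run the assertions
        return True
    except Exception:                   # failed assert, crash, bad syntax...
        return False

# Illustrative solution and an abridged test for HumanEval/0
solution = """
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    return any(abs(a - b) < threshold
               for i, a in enumerate(numbers) for b in numbers[i + 1:])
"""
test_code = """
def check(candidate):
    assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True
    assert candidate([1.0, 2.0, 3.0], 0.05) == False
"""
print(run_functional_check(solution, test_code, "has_close_elements"))  # True
```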
## Dataset Creation

### Curation Rationale

There is a significant lack of standardized coding-evaluation benchmarks for the Burmese language. To address this, we translated and expanded the canonical OpenAI HumanEval benchmark, enabling standardized functional testing of Burmese coding models.
### Source Data

#### Initial Data Collection and Normalization

The original English instructions from the OpenAI HumanEval dataset were translated, localized, and technically normalized into formal Burmese by domain experts.

#### Who are the source language producers?

The original dataset instances were authored by the OpenAI Codex team. The Burmese translations and adaptations were produced by Dr. Wai Yan Nyein Naing and contributors to the `burmese-coding-eval` repository.
## Considerations for Using the Data

### Social Impact of Dataset

This dataset aims to democratize access to AI coding tools for Burmese speakers, encouraging the development of robust, multilingual code generation models that can reliably serve developers in Myanmar and beyond.
## Additional Information

### Dataset Curators

Dr. Wai Yan Nyein Naing (waiyan.nn18@gmail.com)

### Licensing Information

This dataset is released under the MIT License, following the licensing of the original OpenAI HumanEval dataset.
### Citation Information

If you use this benchmark dataset or the associated evaluation methodology, please cite both this repository and the original OpenAI HumanEval paper:

```bibtex
@misc{burmese_coding_eval_2026,
  author       = {Wai Yan Nyein Naing},
  title        = {Burmese Coding Evaluation Benchmark},
  year         = {2026},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/WaiYanNyeinNaing/burmese-coding-eval}}
}

@article{chen2021codex,
  title         = {Evaluating Large Language Models Trained on Code},
  author        = {Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotis Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
  year          = {2021},
  eprint        = {2107.03374},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```