task stringlengths 22 52 | category stringclasses 7 values | research_problem stringlengths 14 29 | dataset stringlengths 12 40 | metric stringclasses 9 values | metadata.yaml stringlengths 954 1.15k | project_description.md stringlengths 2.33k 26.1k | prepare.py stringlengths 3.09k 4.78k | evaluate_prepare.py stringlengths 2.18k 3.84k | evaluate.py stringlengths 1.67k 4.81k | custom_labels.py stringlengths 1.45k 5.36k ⌀ | utils.py stringclasses 5 values |
|---|---|---|---|---|---|---|---|---|---|---|---|
CodeGenerationAPPSPassAt5 | Code | Code Generation | codeparrot/apps | Pass@5 | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
- pyext
prepare_code_python_requirements:
- pyext
logging_info:
name: CodeGenerationAPPSPassAt5
category: Code
research_problem: Code Generation
output_type: Text Generation
dataset: codeparrot/apps
config: all
metric: Pass@5
input_columns:
- question
- starter_code
scoring_column: input_output
train_split: train
test_split: test
shape:
- 5000
- 5
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'Code Llama: Open Foundation Models for Code'
sota_paper_url: https://arxiv.org/pdf/2308.12950
sota_score: 0.187
sota_year: 2023
sota_venue: Model technical report
dataset_paper_url: https://arxiv.org/abs/2105.09938
estimated_worst_score: 0.0
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to generate five independent Python programs for each competitive-programming problem. Each program must read from standard input, write to standard output, and should solve the given problem completely. A prediction for a problem is considered correct if at least one of the five submitted programs passes all of the official hidden test cases. Your predictions will be scored using the Pass@5 metric, which measures the fraction of test problems solved by at least one of the five attempts.
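As a concrete illustration of this scoring rule (a minimal sketch with made-up results, not the official grader — the real evaluation script appears later in this document), Pass@5 can be computed as:

```python
def pass_at_5(per_attempt_results):
    """per_attempt_results: one entry per problem, each a list of 5 booleans
    (True = that attempt passed every hidden test case)."""
    solved = sum(1 for attempts in per_attempt_results if any(attempts))
    return solved / len(per_attempt_results)

# Two problems: the first solved by its third attempt, the second by none.
print(pass_at_5([[False, False, True, False, False],
                 [False] * 5]))  # 0.5
```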
## Data
### Dataset Structure
The `all` config of the APPS dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"problem_id": int, // Unique identifier of the problem
"question": string, // Full natural-language statement of the problem
"solutions": string, // JSON-encoded list of known correct Python solutions; may be an empty string if none
"input_output": string, // JSON-encoded dictionary of official test data; keys:
// "inputs": list of input strings,
// "outputs": list of expected output strings,
// "fn_name": (optional) required function name
"difficulty": string, // Difficulty label: "introductory", "interview", or "competition"
"url": string, // Source URL of the problem statement
"starter_code": string // (Optional) starter template code; may be empty
}
```
> Note: Only the `question` and `starter_code` columns are provided in the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
code1,code2,code3,code4,code5
```
and shape `(5000, 5)` - one row per problem and five Python programs per row, aligned with the test set order.
Each cell must contain valid Python source code as a single string. Do not include any extra columns or indices.
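For illustration, a file with this layout could be produced as follows (a minimal sketch: `programs` is a placeholder for your actual generated solutions, and only 3 rows are shown instead of 5000):

```python
import pandas as pd

# Placeholder for five generated solutions per problem, in the same
# row order as the test set (3 rows shown here, not 5000).
programs = [["print(input())"] * 5 for _ in range(3)]
df = pd.DataFrame(programs, columns=[f"code{i}" for i in range(1, 6)])
df.to_csv("submission.csv", index=False)  # header: code1,code2,code3,code4,code5
```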
### Evaluation
The evaluation will be performed on the `submission.csv` file you submit, using the **Pass@5** metric.
For each problem, all 5 submitted solutions are executed against **all test cases** (both public and private).
A problem counts as correct if **at least one** of the 5 submissions passes **every** test case.
---
#### Execution Environment
* **Python version**: All code is executed with **Python 3.10**. Ensure compatibility with this version.
---
#### Resource Limits
* **Time limit** – A fixed limit of 4 seconds is applied to each test run using Python’s `signal.alarm`.
* **Memory limit** – The default operating-system memory limit applies.
If the program exceeds the time limit or any configured memory cap, it is interrupted and the test case fails.
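The timing mechanism can be illustrated with a minimal sketch (our own example, not the grader's exact code; Unix-only, since it relies on `SIGALRM`):

```python
import signal

class TimeoutException(Exception):
    pass

def timeout_handler(signum, frame):
    raise TimeoutException

signal.signal(signal.SIGALRM, timeout_handler)
signal.alarm(1)  # deliver SIGALRM after 1 second
try:
    while True:  # simulate a solution that never terminates
        pass
except TimeoutException:
    print("time limit exceeded")
finally:
    signal.alarm(0)  # always cancel any pending alarm
```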
---
#### Sandbox Restrictions (enforced automatically)
Submitted code runs with selected built-in operations disabled:
* **No process creation or shell commands**
Functions such as `os.system`, `os.fork`, `os.kill`, and `subprocess.Popen` are replaced with harmless stubs.
* **Limited file-system operations**
Destructive functions like `os.remove`, `shutil.rmtree`, and `os.rename` are disabled.
* **Partial environment hardening**
Certain environment-changing operations (e.g. `os.putenv`, `os.chdir`) and some optional modules are disabled.
These protections mainly work by setting restricted functions to `None`.
Attempting to call them typically raises a `TypeError`, causing the test to fail.
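A minimal sketch of this mechanism (our own illustration, not the grader's exact code): once a name is bound to `None`, calling it raises `TypeError`.

```python
import os

# Replace os.system with None, as the sandbox does for restricted functions.
os_system_backup = os.system
os.system = None
try:
    os.system("echo hi")  # would normally run a shell command
except TypeError as e:
    print("blocked:", e)
finally:
    os.system = os_system_backup  # restore (for this demo only)
```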
---
#### Code Safety Guidelines
* Write code that **does not attempt** to access the internet, execute system commands, or write to arbitrary files.
* Any such behavior may lead to **disqualification** even if it does not trigger runtime errors.
---
This process ensures consistent, resource-aware, and fair evaluation of all submitted solutions.
#### Evaluation Script
Evaluation will be done using the following script:
```py
import argparse, json, sys
import pandas as pd
from datasets import load_from_disk
import json
import multiprocessing
import numpy as np
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import faulthandler
# used for debugging to time steps
from datetime import datetime
# to run the solution files we're using a timing based approach
import signal
# for capturing the stdout
from io import StringIO
# used for testing the code that reads from input
from unittest.mock import patch, mock_open
from pyext import RuntimeModule
from enum import Enum
class CODE_TYPE(Enum):
call_based = 0
standard_input = 1
# stuff for setting up signal timer
class TimeoutException(Exception):
pass
def timeout_handler(signum, frame):
print("alarm went off")
#return
raise TimeoutException
signal.signal(signal.SIGALRM, timeout_handler)
timeout = 4 # seconds
# used to capture stdout as a list
# from https://stackoverflow.com/a/16571630/6416660
# alternative use redirect_stdout() from contextlib
class Capturing(list):
def __enter__(self):
self._stdout = sys.stdout
sys.stdout = self._stringio = StringIO()
# Make closing the StringIO a no-op
self._stringio.close = lambda *args: None
return self
def __exit__(self, *args):
self.extend(self._stringio.getvalue().splitlines())
del self._stringio # free up some memory
sys.stdout = self._stdout
def run_test(sample, test=None, debug=False):
"""
if test(generated_code) is not None it'll try to run the code.
otherwise it'll just return an input and output pair.
"""
# Disable functionalities that can make destructive changes to the test.
if debug:
print(f"start = {datetime.now().time()}")
try:
in_outs = json.loads(sample["input_output"])
except ValueError:
in_outs = None
if in_outs:
if in_outs.get("fn_name") is None:
which_type = CODE_TYPE.standard_input # Standard input
method_name = None
else:
which_type = CODE_TYPE.call_based # Call-based
method_name = in_outs["fn_name"]
if debug:
print(f"loaded input_output = {datetime.now().time()}")
if test is None:
return in_outs
elif test is not None:
results = []
sol = "import sys\nimport time\nimport itertools\nfrom itertools import accumulate, product, permutations, combinations\nimport collections\nfrom collections import Counter, OrderedDict, deque, defaultdict, ChainMap\nfrom functools import lru_cache\nimport math\nfrom math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2\nimport fractions\nfrom typing import List, Tuple\nimport numpy as np\nimport random\nimport heapq\nfrom heapq import *\n"
if debug:
print(f"loading test code = {datetime.now().time()}")
if which_type == CODE_TYPE.call_based:
sol += test
if debug:
print(f"sol = {sol}")
signal.alarm(timeout)
try:
tmp_sol = RuntimeModule.from_string("tmp_sol", "", sol)
if "class Solution" not in test:
tmp = tmp_sol
else:
tmp = tmp_sol.Solution()
signal.alarm(0)
except Exception as e:
signal.alarm(0)
if debug:
print(f"type 0 compilation error = {e}")
results.append(-2)
return results
signal.alarm(0)
elif which_type == CODE_TYPE.standard_input:
# sol
tmp_test = test.split("\n")
new_test = []
for x in tmp_test:
if (not x.startswith("from ")) and (not x.startswith("import ")):
new_test.append("\t" + x + "\n")
else:
new_test.append(x + "\n")
tmp_test = new_test
new_test = ""
started = False
for i in tmp_test:
if i.startswith("\t") and not started:
new_test += "stdin = sys.stdin\nstdout = sys.stdout\n"
new_test += "def code():\n"
new_test += i
started = True
elif started and ((i.startswith("from ")) or (i.startswith("import "))):
new_test += "\t" + i
else:
new_test += i
tmp_test = new_test
sol += tmp_test
if debug:
print(f"sol = {sol}")
method_name = "code"
signal.alarm(timeout)
try:
tmp_sol = RuntimeModule.from_string("tmp_sol", "", sol)
tmp = tmp_sol
signal.alarm(0)
except Exception as e:
signal.alarm(0)
if debug:
print(f"type 1 compilation error = {e}")
results.append(-2)
return results
signal.alarm(0)
if debug:
print(f"get method = {datetime.now().time()}")
try:
method = getattr(tmp, method_name) # getattr's second arg must be a str
except:
signal.alarm(0)
e = sys.exc_info()
print(f"unable to get function error = {e}")
results.append(-2)
return results
for index, inputs in enumerate(in_outs["inputs"]):
# JSON forces dictionary keys to be strings; convert them back to ints (assuming a singleton list)
try:
if isinstance(inputs[0], dict):
inputs = [{int(k): v for k,v in inputs[0].items()}]
except:
True
try:
if isinstance(in_outs["outputs"][index], dict):
in_outs["outputs"][index] = [{int(k): v for k,v in in_outs["outputs"][index].items()}]
except:
True
try:
if isinstance(in_outs["outputs"][index][0], dict):
in_outs["outputs"][index] = [{int(k): v for k,v in in_outs["outputs"][index][0].items()}]
except:
True
if debug:
print(f"time: {datetime.now().time()} testing index = {index} inputs = {inputs}, {type(inputs)}. type = {which_type}")
if which_type == CODE_TYPE.call_based: # Call-based
signal.alarm(timeout)
faulthandler.enable()
try:
output = method(*inputs)
# ground truth sequences are not tuples
if isinstance(output, tuple):
output = list(output)
tmp_result = output == in_outs["outputs"][index]
if isinstance(in_outs["outputs"][index], list) and in_outs["outputs"][index]:
tmp_result = tmp_result or (output == in_outs["outputs"][index][0])
# ground truth sequences are not tuples
try:
if isinstance(output[0], tuple):
tmp_result = tmp_result or ([list(x) for x in output] == in_outs["outputs"][index][0])
except:
True
results.append(tmp_result)
# reset the alarm
signal.alarm(0)
except Exception as e:
signal.alarm(0)
faulthandler.disable()
if debug:
print(f"Standard input runtime error or time limit exceeded error = {e}")
results.append(-1)
continue
faulthandler.disable()
signal.alarm(0)
if debug:
print(f"outputs = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
elif which_type == CODE_TYPE.standard_input: # Standard input
faulthandler.enable()
signal.alarm(timeout)
passed = False
if isinstance(inputs, list):
inputs = "\n".join(inputs)
if isinstance(in_outs['outputs'][index], list):
in_outs['outputs'][index] = "\n".join(in_outs['outputs'][index])
with Capturing() as output:
try:
call_method(method, inputs)
# reset the alarm
signal.alarm(0)
passed = True
except Exception as e:
# runtime error or took too long
signal.alarm(0)
print(f"Call-based runtime error or time limit exceeded error = {repr(e)}{e}")
results.append(-1)
signal.alarm(0)
if not passed:
if debug:
nl = "\n"
if not isinstance(inputs, list):
print(f"not passed output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs.replace(nl,' new-line ')}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
else:
print(f"not passed output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
continue
if passed and debug:
print(f"==> output = {output}, test outputs = {in_outs['outputs'][index]}")
if custom_compare_(output, in_outs['outputs'][index]):
tmp_result = True
results.append(tmp_result)
continue
# ground truth sequences are expressed as lists not tuples
if isinstance(output, tuple):
output = list(output)
tmp_result = False
try:
tmp_result = (output == [in_outs["outputs"][index]])
if isinstance(in_outs["outputs"][index], list):
tmp_result = tmp_result or (output == in_outs["outputs"][index])
if isinstance(output[0], str):
tmp_result = tmp_result or ([e.strip() for e in output] == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check1 exception = {e}")
pass
if tmp_result == True:
results.append(tmp_result)
continue
# try one more time without \n
if isinstance(in_outs["outputs"][index], list):
for tmp_index, i in enumerate(in_outs["outputs"][index]):
in_outs["outputs"][index][tmp_index] = i.split("\n")
in_outs["outputs"][index][tmp_index] = [x.strip() for x in in_outs["outputs"][index][tmp_index] if x]
else:
in_outs["outputs"][index] = in_outs["outputs"][index].split("\n")
in_outs["outputs"][index] = list(filter(len, in_outs["outputs"][index]))
in_outs["outputs"][index] = list(map(lambda x:x.strip(), in_outs["outputs"][index]))
try:
tmp_result = (output == [in_outs["outputs"][index]])
if isinstance(in_outs["outputs"][index], list):
tmp_result = tmp_result or (output == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check2 exception = {e}")
pass
if tmp_result == True:
results.append(tmp_result)
continue
# try by converting the output into a split up list too
if isinstance(output, list):
output = list(filter(len, output))
if debug:
nl = "\n"
if not isinstance(inputs, list):
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs.replace(nl,' new-line ')}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
else:
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
if tmp_result == True:
results.append(tmp_result)
continue
try:
tmp_result = (output == [in_outs["outputs"][index]])
if isinstance(in_outs["outputs"][index], list):
tmp_result = tmp_result or (output == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check3 exception = {e}")
pass
try:
output_float = [float(e) for e in output]
gt_float = [float(e) for e in in_outs['outputs'][index]]
tmp_result = tmp_result or ((len(output_float) == len(gt_float)) and np.allclose(output_float, gt_float))
except Exception as e:
pass
try:
if isinstance(output[0], list):
output_float = [float(e) for e in output[0]]
gt_float = [float(e) for e in in_outs['outputs'][index][0]]
tmp_result = tmp_result or ((len(output_float) == len(gt_float)) and np.allclose(output_float, gt_float))
except Exception as e:
pass
if tmp_result == True:
results.append(tmp_result)
continue
# try by converting the stuff into split up list
if isinstance(in_outs["outputs"][index], list):
for tmp_index, i in enumerate(in_outs["outputs"][index]):
in_outs["outputs"][index][tmp_index] = set(i.split())
else:
in_outs["outputs"][index] = set(in_outs["outputs"][index].split())
try:
tmp_result = (output == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check4 exception = {e}")
continue
if tmp_result == True:
results.append(tmp_result)
continue
# try by converting the output into a split up list too
if isinstance(output, list):
for tmp_index, i in enumerate(output):
output[tmp_index] = i.split()
output = list(filter(len, output))
for tmp_index, i in enumerate(output):
output[tmp_index] = set(i)
else:
output = output.split()
output = list(filter(len, output))
output = set(output)
try:
tmp_result = (set(frozenset(s) for s in output) == set(frozenset(s) for s in in_outs["outputs"][index]))
except Exception as e:
if debug:
print(f"Failed check5 exception = {e}")
# if they are all numbers, round so that similar numbers are treated as identical
try:
tmp_result = tmp_result or (set(frozenset(round(float(t),3) for t in s) for s in output) ==\
set(frozenset(round(float(t),3) for t in s) for s in in_outs["outputs"][index]))
except Exception as e:
if debug:
print(f"Failed check6 exception = {e}")
if tmp_result == True and debug:
print("PASSED")
results.append(tmp_result)
if debug:
nl = "\n"
if not isinstance(inputs, list):
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs.replace(nl,' new-line ')}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
else:
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
return results
def custom_compare_(output, ground_truth):
if isinstance(output, list):
output_1 = "\n".join(output)
if stripped_string_compare(output_1, ground_truth):
return True
if isinstance(output, list):
output_2 = [o.lstrip().rstrip() for o in output]
output_2 = "\n".join(output_2)
if stripped_string_compare(output_2, ground_truth):
return True
return False
def stripped_string_compare(s1, s2):
s1 = s1.lstrip().rstrip()
s2 = s2.lstrip().rstrip()
return s1 == s2
def call_method(method, inputs):
if isinstance(inputs, list):
inputs = "\n".join(inputs)
inputs_line_iterator = iter(inputs.split("\n"))
# sys.setrecursionlimit(10000)
# @patch('builtins.input', side_effect=inputs.split("\n"))
@patch('builtins.open', mock_open(read_data=inputs))
@patch('sys.stdin', StringIO(inputs))
@patch('sys.stdin.readline', lambda *args: next(inputs_line_iterator))
@patch('sys.stdin.readlines', lambda *args: inputs.split("\n"))
@patch('sys.stdin.read', lambda *args: inputs)
# @patch('sys.stdout.write', print)
def _inner_call_method(_method):
try:
return _method()
except SystemExit as e:
pass
finally:
pass
return _inner_call_method(method)
def solves_testcases(submission, testcases, verbose=False):
"""
Write submission once to a temp file and run it against all testcases.
"""
timeout = 10
def _temp_run(sample, generation, debug, result):
result.append(run_test(sample, test=generation, debug=debug))
manager = multiprocessing.Manager()
result = manager.list()
p = multiprocessing.Process(
target=_temp_run,
args=(testcases, submission, verbose, result)
)
p.start()
p.join(timeout=timeout + 1)
if p.is_alive():
p.kill()
if not result:
in_outs = json.loads(testcases["input_output"])
# consider that all tests failed
result = [[0 for i in range(len(in_outs["inputs"]))]]
if verbose:
print("global timeout")
result = result[0]
fixed = []
for e in result:
if isinstance(e, np.ndarray):
e = e.item(0)
if isinstance(e, np.bool_):
e = bool(e)
fixed.append(e)
return np.all(fixed)
def _passes_any_submission(submissions, testcases, verbose=False):
for submission in submissions:
if solves_testcases(submission, testcases, verbose):
return 1 # counts as correct
return 0
def evaluate_all_testcases(submissions_all, testcases_all, verbose=False, max_workers=None):
assert len(submissions_all) == len(testcases_all), "Number of submissions and testcases do not match"
total = len(submissions_all)
with ThreadPoolExecutor(max_workers=max_workers) as ex:
futures = [
ex.submit(_passes_any_submission, submissions, testcases, verbose)
for submissions, testcases in zip(submissions_all, testcases_all)
]
correct = 0
for fut in tqdm(as_completed(futures), total=total):
correct += fut.result()
return correct / total
def load_testcases(path="./data/test_with_labels"):
"""
Load testcases for evaluation.
"""
ds = load_from_disk(path)
return ds
def evaluate(submissions, testcases):
"""
Compute Pass@5 metric for a list of submissions and testcases.
"""
passAt5 = evaluate_all_testcases(submissions, testcases)
return {"Pass@5": passAt5}
def _cli():
p = argparse.ArgumentParser(description="Evaluate Pass@5 using submission.csv")
p.add_argument("--submission-file", required=True,
help="Path to CSV with columns code1..code5")
a = p.parse_args()
print("Loading test set…")
testcases = load_testcases()
n_test_samples = len(testcases)
print(f"Loading submissions from: {a.submission_file}")
submission_df = pd.read_csv(a.submission_file, header=0)
submission_scripts = submission_df[[f'code{i}' for i in range(1, 6)]].values.tolist()
n_submissions = len(submission_scripts)
assert n_submissions == n_test_samples, f"Submission file row count ({n_submissions}) does not match test set size ({n_test_samples})."
print("Evaluating Pass@5…")
result = evaluate(submission_scripts, testcases)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == "__main__":
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param airsbench_raw_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'codeparrot/apps/all')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["test"]
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['problem_id', 'solutions', 'input_output', 'difficulty', 'url'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'codeparrot/apps/all')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import pandas as pd
import json
from pathlib import Path
from datasets import load_from_disk
from utils import evaluate_all_testcases
def load_testcases(path="./data/test_with_labels"):
"""
Load testcases for evaluation.
"""
ds = load_from_disk(path)
return ds
def evaluate(submissions, testcases):
"""
Compute Pass@5 metric for a list of submissions and testcases.
"""
passAt5 = evaluate_all_testcases(submissions, testcases)
return {"Pass@5": passAt5}
def _cli():
p = argparse.ArgumentParser(description="Evaluate Pass@5 using submission.csv")
p.add_argument("--submission-file", required=True,
help="Path to CSV with columns code1..code5")
a = p.parse_args()
print("Loading test set…")
testcases = load_testcases()
n_test_samples = len(testcases)
print(f"Loading submissions from: {a.submission_file}")
submission_df = pd.read_csv(a.submission_file, header=0)
submission_scripts = submission_df[[f'code{i}' for i in range(1, 6)]].values.tolist()
n_submissions = len(submission_scripts)
assert n_submissions == n_test_samples, f"Submission file row count ({n_submissions}) does not match test set size ({n_test_samples})."
print("Evaluating Pass@5…")
result = evaluate(submission_scripts, testcases)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == "__main__":
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import json
import pandas as pd
from random import random
from datasets import load_from_disk
from pathlib import Path
def parse_args():
parser = argparse.ArgumentParser(
description="Make gold labels for the APPS dataset for Code Generation."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--output-directory",
default=Path(__file__).resolve().parent,
help="Directory where the gold submission CSVs are written (defaults to this script's directory)"
)
return parser.parse_args()
def main(args):
dataset_source_fpath = os.path.join(
Path(args.global_shared_data_dir),
'codeparrot/apps/all'
)
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
codes = []
codes_perm_1 = []
codes_perm_2 = []
incorrect_solution = "print('invalid')"
for idx in range(len(test)):
sample = test[idx]
try:
# json.loads raises if no reference solution exists for this sample
sample_solution = json.loads(sample["solutions"])[0]
except:
sample_solution = incorrect_solution
codes.append(sample_solution)
rng = random()
if rng < 0.33:
codes_perm_1.append(sample_solution)
codes_perm_2.append(sample_solution)
elif rng < 0.66:
codes_perm_1.append(incorrect_solution)
codes_perm_2.append(sample_solution)
else:
codes_perm_1.append(incorrect_solution)
codes_perm_2.append(incorrect_solution)
df = pd.DataFrame({f"code{i}": codes for i in range(1, 6)})
df_perm_1 = pd.DataFrame({f"code{i}": codes_perm_1 for i in range(1, 6)})
df_perm_2 = pd.DataFrame({f"code{i}": codes_perm_2 for i in range(1, 6)})
# Save to CSV
save_path = Path(args.output_directory).expanduser() / "gold_submission.csv"
df.to_csv(save_path, index=False)
save_path = Path(args.output_directory).expanduser() / "gold_submission_permuted_1.csv"
df_perm_1.to_csv(save_path, index=False)
save_path = Path(args.output_directory).expanduser() / "gold_submission_permuted_2.csv"
df_perm_2.to_csv(save_path, index=False)
if __name__ == '__main__':
args = parse_args()
main(args)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import json
import multiprocessing
import numpy as np
from tqdm import tqdm
from testing_util import run_test
from concurrent.futures import ThreadPoolExecutor, as_completed
def solves_testcases(submission, testcases, verbose=False):
"""
Write submission once to a temp file and run it against all testcases.
"""
timeout = 10
def _temp_run(sample, generation, debug, result):
result.append(run_test(sample, test=generation, debug=debug))
manager = multiprocessing.Manager()
result = manager.list()
p = multiprocessing.Process(
target=_temp_run,
args=(testcases, submission, verbose, result)
)
p.start()
p.join(timeout=timeout + 1)
if p.is_alive():
p.kill()
if not result:
in_outs = json.loads(testcases["input_output"])
# consider that all tests failed
result = [[0 for i in range(len(in_outs["inputs"]))]]
if verbose:
print("global timeout")
result = result[0]
fixed = []
for e in result:
if isinstance(e, np.ndarray):
e = e.item(0)
if isinstance(e, np.bool_):
e = bool(e)
fixed.append(e)
return np.all(fixed)
def _passes_any_submission(submissions, testcases, verbose=False):
for submission in submissions:
if solves_testcases(submission, testcases, verbose):
return 1 # counts as correct
return 0
def evaluate_all_testcases(submissions_all, testcases_all, verbose=False, max_workers=None):
assert len(submissions_all) == len(testcases_all), "Number of submissions and testcases do not match"
total = len(submissions_all)
with ThreadPoolExecutor(max_workers=max_workers) as ex:
futures = [
ex.submit(_passes_any_submission, submissions, testcases, verbose)
for submissions, testcases in zip(submissions_all, testcases_all)
]
correct = 0
for fut in tqdm(as_completed(futures), total=total):
correct += fut.result()
return correct / total
|
CodeRetrievalCodeXGlueMRR | Code | Code Retrieval | google/code_x_glue_tc_nl_code_search_adv | MRR | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
logging_info:
name: CodeRetrievalCodeXGlueMRR
dataset: google/code_x_glue_tc_nl_code_search_adv
category: Code
research_problem: Code Retrieval
output_type: Search results
config: default
train_split: train
test_split: test
input_columns:
- docstring_tokens
- id
scoring_column: id
shape: (19210,2)
custom_gold_labels: true
custom_rad_class: false
metric: MRR
additional_metrics: null
sota:
- sota_paper_title: 'UniXcoder: Unified Cross-Modal Pre-training for Code Representation'
sota_paper_url: https://arxiv.org/pdf/2203.03850
sota_score: 0.6113
sota_notes: from official leaderboard https://microsoft.github.io/CodeXGLUE/
sota_year: 2022
sota_venue: ACL
dataset_paper_url: https://arxiv.org/abs/2102.04664
estimated_worst_score: 0.0
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform code retrieval on google/code_x_glue_tc_nl_code_search_adv: given a natural-language query, rank the code snippets in the search corpus by relevance.
## Data
### Dataset Structure
This is a retrieval task, so for each split (other than test) there is a set of search queries (`./data/<split>/queries_with_labels`) along with a large corpus to search over (`./data/<split>/search_corpus`).
Each item in the corpus is structured as: `{'id': Value('int32'), 'code': Value('string')}`. For example:
```
{'id': 0,
'code': 'def Func(arg_0, arg_1=\'.\', arg_2=True, arg_3=False, **arg_4):\n arg_5 = get_content(rebuilt_url(arg_0))\n arg_6 = json.loads(match1(arg_5, r\'qualities":({.+?}),"\'))\n arg_7 = match1(arg_5, r\'"video_title"\\s*:\\s*"([^"]+)"\') or \\\n match1(arg_5, r\'"title"\\s*:\\s*"([^"]+)"\')\n arg_7 = unicodize(arg_7)\n\n for arg_8 in [\'1080\',\'720\',\'480\',\'380\',\'240\',\'144\',\'auto\']:\n try:\n arg_9 = arg_6[arg_8][1]["url"]\n if arg_9:\n break\n except KeyError:\n pass\n\n arg_10, arg_11, arg_12 = url_info(arg_9)\n\n print_info(site_info, arg_7, arg_10, arg_12)\n if not arg_3:\n download_urls([arg_9], arg_7, arg_11, arg_12, arg_1=arg_1, arg_2=arg_2)'}
```
For the train and validation splits, you are provided the correct index result for each query.
The query set with labels is structured as `{'id': Value('int32'), 'query': Value('string')}`, where `id` is the index of the correct result in the corpus.
For example, a row would be
```
{'id': 0, 'query': 'Downloads Sina videos by URL .'}
```
You can load train data with `load_from_disk('./data/train/search_corpus')` and `load_from_disk('./data/train/queries_with_labels')`
You can load validation data with `load_from_disk('./data/validation/search_corpus')` and `load_from_disk('./data/validation/queries_with_labels')`
where `load_from_disk` is imported via `from datasets import load_from_disk`.
You can load test data with `load_from_disk('./data/test/search_corpus')` and `load_from_disk('./data/test/queries')`.
Note that the correct `id` label for the test queries has been removed, i.e. each row is just `{'query': 'Downloads Dailymotion videos by URL .'}`
### Submission file
You need to submit a CSV with header: query, rankings
where rankings is json.dumps([list of ranked code ids])
For example if your submissions are in the form of a huggingface dataset
you could do:
```
import json

def save_huggingface_dataset_as_csv(dds, output_fpath):
"""
Takes a huggingface dataset with columns query: str, rankings: [list of ranked code ids]
Saves as a CSV with header: query,rankings
where rankings is json.dumps([list of ranked code ids])
"""
dds = dds.map(
lambda example: {
"rankings": json.dumps(example["rankings"])
}
)
df = dds.to_pandas()
df.to_csv(output_fpath, index=False, header=["query", "rankings"])
```
The head of an example submission.csv would be
```
query,rankings
str - > list Convert XML to URL List . From Biligrab .,"[4773, 10566, 18730, 11359, 16173, 17791, 3428, 4163, 2037, 6838, 2336, 792, 15939, 282, 18883, 10090, 16583, 9041, 5028, 6885, 3809, 7866, 1581, 14613, 1873, 12513, 13734, 4063, 12427, 5984, 4533, 1711, 7378, 1481, 18669, 9190, 17151, 3966, 18913, 15831, 17524, 16150, 12175, 19138, 4662, 17724, 7578, 13530, 14139, 11756, 12014, 6126, 3148, 5176, 13260, 1120, 5799, 718, 5691, 14633, 7990, 2459, 6309, 4778, 8468, 0, 7473, 18590, 3227, 305, 12687, 16419, 3621, 17969, 17759, 7338, 12346, 9032, 15906, 14930, 11270, 7319, 5423, 4218, 8952, 14254, 11863, 18073, 4973, 3067, 3340, 13478, 7898, 6132, 699, 2527, 12903, 8961, 7260, 12805, 17477, 3637, 15206, 1167, 9969, 16952, 7530, 14532, 8599, 17194, 341, 2399, 480, 15207, 16079, 10442, 1354, 18494, 18059, 17307, 8984, 4358, 6874, 11557, 16559, 12936, 12671, 16181, 552, 4913, 11228, 18668, 13003, 9595, 2748, 10221, 4108, 7886]"
Downloads Sina videos by URL .,"[14233, 18494, 2339, 18240, 12558, 2155, 17809, 3995, 10983, 8795, 3908, 15402, 143, 1670, 6689, 15988, 797, 11177, 5111, 1217, 3256, 8938, 1858, 18281, 14473, 8128, 1, 11149, 11423, 1812, 10327, 14244, 3569, 9551, 6388, 1829, 18118, 15332, 1245, 11551, 9383, 14727, 4162, 8270, 7121, 15307, 11203, 17898, 11047, 13513, 9972, 18078, 106, 5244, 14085, 7204, 19157, 5438, 18355, 10039, 1610, 6012, 16207, 11308, 18246, 17214, 37, 14335, 4696, 5671, 5673, 10577, 7152, 10395, 16792, 12104]"
Format text with color or other effects into ANSI escaped string .,"[11572, 7666, 2401, 14887, 5944, 11933, 2718, 10631, 7455, 16890, 10310, 6189, 60, 10529, 8005, 1052, 13208, 910, 5802, 2, 13390, 18448, 5052, 7469, 19103, 17611, 1495, 10175, 11936, 17764, 10045, 1140, 14181, 5388, 9579, 5193, 1757, 8066, 10604, 13277, 12231, 11085, 13859, 6252, 16010, 12249, 6778, 3444, 18797, 15768, 11982, 3507, 18830, 5747, 15577, 6395, 9371, 16578, 9868, 14335, 12163, 9038, 7914, 5210, 18743, 2042, 8736, 17465, 5012, 13136, 3700, 17616, 10176, 12175, 3621, 7553, 15336, 9605, 15419]"
...
```
The submission.csv should be of shape (19210,2).
The rankings list can be anywhere from length 1 (just the top search result) up to a full ranking over every index in the search corpus.
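Since MRR only credits the position of the first correct hit, a short but accurate ranking can outscore a long, poorly ordered one. As a rough sketch (this mirrors the scoring logic but is not the exact evaluation code), the per-query reciprocal rank looks like:

```python
def reciprocal_rank(ranked_ids, correct_id):
    """Return 1/rank of the first match, or 0.0 if the correct id is absent."""
    for rank, idx in enumerate(ranked_ids, start=1):
        if idx == correct_id:
            return 1.0 / rank
    return 0.0

# Correct id ranked second scores 0.5; a short list that omits it scores 0.
print(reciprocal_rank([4773, 0, 10566], 0))  # 0.5
print(reciprocal_rank([4773], 0))            # 0.0
```

The final MRR is the mean of these per-query scores, so truncating a ranking only hurts you when the correct id would have appeared in the truncated tail.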
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MRR metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
from utils import calculate_scores
def load_test_set():
return load_from_disk('./data/test_with_labels')
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Predictions should be pd.DataFrame with columns: query: str, rankings: json.dumps([list of ranked code ids])
# Labels should be hf Dataset with keys query: str, id: code id
# First json.loads the rankings column of predictions
predictions['rankings'] = predictions['rankings'].apply(json.loads)
# Map to format for calculate_scores
# Predictions are {url: str -> [list of ranked code ids]}
# Labels are {url: str -> code id}
# We'll use the query as the url for both
formatted_predictions = {
q: pred.tolist() if isinstance(pred, np.ndarray) else pred
for q, pred in zip(predictions['query'], predictions['rankings'])
}
formatted_labels = {
q: label
for q, label in zip(labels['query'], labels['id'])
}
return calculate_scores(formatted_labels, formatted_predictions)
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a header row (query,rankings)
# Adjust if your submission format is different (e.g., different column names)
preds = pd.read_csv(a.submission_file, header=0)
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
import re
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
For each split we create:
- <split>_search_corpus: A dataset containing the code snippets to be searched.
- <split>_queries_with_labels: A dataset containing the natural language queries and the corresponding code snippet labels.
We also provide test_queries which is a dataset containing only the natural language queries without labels.
The agent has to generate predictions for these queries and return them in the submission.csv file.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'google/code_x_glue_tc_nl_code_search_adv/default')
dataset = load_from_disk(dataset_source_fpath)
for split in ['train', 'validation', 'test']:
dds = dataset[split]
# Search corpus is just id and code with docstring removed from code
search_corpus = select_columns(dds, ["id", "code_tokens"])
search_corpus = search_corpus.map(
lambda example: {
"code": ' '.join(example['code_tokens'])
},
remove_columns=["code_tokens"]
)
search_corpus.save_to_disk(os.path.join(agent_data_mount_dir, f'{split}/search_corpus'))
# Queries are just docstring and the resulting code id
queries_with_labels = select_columns(dds, ["docstring_tokens", "id"])
queries_with_labels = queries_with_labels.map(
lambda example: {
"query": " ".join(example["docstring_tokens"])
},
remove_columns=["docstring_tokens"]
)
# shuffle the queries
queries_with_labels = queries_with_labels.shuffle(seed=42)
if split == 'test':
# For the test set we do not provide the labels
queries = select_columns(queries_with_labels, ["query"])
queries.save_to_disk(os.path.join(agent_data_mount_dir, f'test/queries'))
else:
queries_with_labels.save_to_disk(os.path.join(agent_data_mount_dir, f'{split}/queries_with_labels'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
) | #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import re
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'google/code_x_glue_tc_nl_code_search_adv/default')
dataset = load_from_disk(dataset_source_fpath)
dds = dataset['test']
# Queries are just docstring and the resulting code id
queries_with_labels = select_columns(dds, ["docstring_tokens", "id"])
queries_with_labels = queries_with_labels.map(
lambda example: {
"query": " ".join(example["docstring_tokens"]),
},
remove_columns=["docstring_tokens"]
)
queries_with_labels.save_to_disk(os.path.join(agent_data_mount_dir, f'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
from utils import calculate_scores
def load_test_set():
return load_from_disk('./data/test_with_labels')
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Predictions should be pd.DataFrame with columns: query: str, rankings: json.dumps([list of ranked code ids])
# Labels should be hf Dataset with keys query: str, id: code id
# First json.loads the rankings column of predictions
predictions['rankings'] = predictions['rankings'].apply(json.loads)
# Map to format for calculate_scores
# Predictions are {url: str -> [list of ranked code ids]}
# Labels are {url: str -> code id}
# We'll use the query as the url for both
formatted_predictions = {
q: pred.tolist() if isinstance(pred, np.ndarray) else pred
for q, pred in zip(predictions['query'], predictions['rankings'])
}
formatted_labels = {
q: label
for q, label in zip(labels['query'], labels['id'])
}
return calculate_scores(formatted_labels, formatted_predictions)
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a header row (query,rankings)
# Adjust if your submission format is different (e.g., different column names)
preds = pd.read_csv(a.submission_file, header=0)
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import load_from_disk
import json
import pandas as pd
import os
import argparse
import re
import copy
import random
hf_repo = 'google/code_x_glue_tc_nl_code_search_adv'
config = 'default'
test_split = 'test'
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
def save_as_csv(dds, output_fpath):
"""
Takes a huggingface dataset with columns query: str, rankings: [list of ranked code ids]
Saves as a CSV with header: query,rankings
where rankings is json.dumps([list of ranked code ids])
"""
dds = dds.map(
lambda example: {
"rankings": json.dumps(example["rankings"])
}
)
df = dds.to_pandas()
df.to_csv(output_fpath, index=False, header=["query", "rankings"])
def main(
global_shared_data_dir,
output_directory
):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, f"{hf_repo}/{config}")
dataset = load_from_disk(dataset_source_fpath)
dds = dataset[test_split]
n_docs = len(dds)
print(f"Loaded {n_docs} documents from the {test_split} split of the dataset.")
# Submission format is a CSV with columns: query: str, rankings: [list of ranked code ids]
queries_with_labels = select_columns(dds, ["docstring_tokens", "id"])
queries_with_labels = queries_with_labels.map(
lambda example: {
"query": " ".join(example["docstring_tokens"]),
"rankings": [example["id"]] + random.sample([i for i in range(n_docs) if i != example["id"]], random.randint(1, 200)) # 1 correct + random incorrect ids
},
remove_columns=["docstring_tokens", "id"]
)
# Save as CSV instead of a HuggingFace dataset
csv_fpath = os.path.join(output_directory, 'gold_submission.csv')
save_as_csv(queries_with_labels, csv_fpath)
# Produce a worse submission by shuffling the rankings
worse_queries = queries_with_labels.map(
lambda example: {
"rankings": random.sample(example["rankings"], len(example["rankings"]))
},
)
# Save as CSV instead of a HuggingFace dataset
csv_fpath = os.path.join(output_directory, 'gold_submission_permuted_1.csv')
save_as_csv(worse_queries, csv_fpath)
# And another worse submission by reversing the rankings
worse_queries_2 = queries_with_labels.map(
lambda example: {
"rankings": list(reversed(example["rankings"]))
},
)
# Save as CSV instead of a HuggingFace dataset
csv_fpath = os.path.join(output_directory, 'gold_submission_permuted_2.csv')
save_as_csv(worse_queries_2, csv_fpath)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory where you will find the dataset')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import logging
import sys,json
import numpy as np
def read_answers(filename):
answers={}
with open(filename) as f:
for line in f:
line=line.strip()
js=json.loads(line)
answers[js['url']]=js['idx']
return answers
def read_predictions(filename):
predictions={}
with open(filename) as f:
for line in f:
line=line.strip()
js=json.loads(line)
predictions[js['url']]=js['answers']
return predictions
def calculate_scores(answers,predictions):
scores=[]
for key in answers:
if key not in predictions:
logging.error("Missing prediction for url {}.".format(key))
sys.exit()
flag=False
for rank,idx in enumerate(predictions[key]):
if idx==answers[key]:
scores.append(1/(rank+1))
flag=True
break
if flag is False:
scores.append(0)
result={}
result['MRR']=round(np.mean(scores),4)
return result
def main(
answers,
predictions
):
answers=read_answers(answers)
predictions=read_predictions(predictions)
scores=calculate_scores(answers,predictions)
return scores
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='Evaluate leaderboard predictions for NL-code-search-Adv dataset.')
parser.add_argument('--answers', '-a',help="filename of the labels, in txt format.")
parser.add_argument('--predictions', '-p',help="filename of the leaderboard predictions, in txt format.")
args = parser.parse_args()
main(
args.answers,
args.predictions
)
|
CoreferenceResolutionSuperGLUEWSCAccuracy | Text Extraction and Matching | Coreference Resolution | aps/super_glue | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: CoreferenceResolutionSuperGLUEWSCAccuracy
category: Text Extraction and Matching
research_problem: Coreference Resolution
output_type: Text classification
dataset: aps/super_glue
config: wsc
metric: Accuracy
input_columns:
- text
- span1_index
- span2_index
- span1_text
- span2_text
scoring_column: label
train_split: train
test_split: validation
shape: (104,1)
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning'
sota_paper_url: https://arxiv.org/pdf/2111.10952
sota_score: 0.962
sota_notes: Uses T5-Large as backbone which has 770 million parameters.
sota_year: 2022
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1905.00537
estimated_worst_score: 0.3653846154
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to resolve pronoun references in natural language sentences. You will be given a sentence containing an ambiguous pronoun and a possible referent. Your goal is to predict whether the pronoun refers to the referent based on commonsense reasoning. Your predictions will be scored against the `label` column of the test set.
## Data
### Dataset Structure
The default config of the WSC dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"text": string, # The passage containing an ambiguous pronoun and candidate antecedent
"span1_index": int, # The token index of the first candidate antecedent in the passage
"span2_index": int, # The token index of the pronoun in the passage
"span1_text": string, # The text span corresponding to the candidate antecedent
"span2_text": string, # The text span corresponding to the pronoun
"label": int # Binary label: 1 if the pronoun refers to the antecedent, 0 otherwise
}
```
An example entry in the train dataset contains the following:
- `text`: "Mark told Pete many lies about himself, which Pete included in his book. He should have been more skeptical."
- `span1_index`: 0
- `span2_index`: 13
- `span1_text`: "Mark"
- `span2_text`: "He"
- `label`: 0
> Note: The task is to determine whether the pronoun (`span2_text`) refers to the antecedent candidate (`span1_text`). A label of `1` indicates coreference, while `0` indicates no coreference. The `label` column has been removed from the test set. Ensure you preserve the original row order of the test set when producing predictions.
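The span indices are whitespace-token offsets into `text`. Assuming simple whitespace tokenization (which matches the example above), you can recover the spans like this:

```python
text = ("Mark told Pete many lies about himself, which Pete included "
        "in his book. He should have been more skeptical.")
tokens = text.split()

print(tokens[0])   # Mark  (span1_index -> span1_text)
print(tokens[13])  # He    (span2_index -> span2_text)
```

Note that multi-word spans (e.g. a `span1_text` like "the book") start at the given index and extend over as many tokens as the span text contains.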
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
validation_dataset = load_from_disk('./data/validation')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
Answer
```
And it should be of shape `(104,1)` — one prediction per row, aligned with the test set order.
Each row must contain `1` if the pronoun (`span2_text`) refers to the antecedent candidate (`span1_text`), and `0` otherwise. Do not include any extra columns or indices.
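One way to write such a file using only the standard library (the `preds` list below is a hypothetical placeholder for your model's per-row predictions):

```python
import csv

preds = [0, 1, 0]  # hypothetical 0/1 predictions, one per test row, in test-set order

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Answer"])             # required header
    writer.writerows([p] for p in preds)    # one prediction per row
```

Passing `newline=""` is required by the `csv` module so that it controls line endings itself.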
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric.
Evaluation will be done using the following script:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "label" not in ds.features:
raise RuntimeError(
"Test data does not contain 'label'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["label"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for the SuperGLUE WSC test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a header row ("Answer")
# Adjust if your submission format is different (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'aps/super_glue/wsc')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["validation"]
train = train.remove_columns(['idx'])
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['idx', 'label'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'aps/super_glue/wsc')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["validation"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "label" not in ds.features:
raise RuntimeError(
"Test data does not contain 'label'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["label"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for TabFact test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format is different (e.g., has headers, specific columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
CoreferenceResolutionWinograndeAccuracy | Text Extraction and Matching | Coreference Resolution | allenai/winogrande | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: CoreferenceResolutionWinograndeAccuracy
category: Text Extraction and Matching
research_problem: Coreference Resolution
output_type: Text Classification
dataset: allenai/winogrande
config: winogrande_xl
metric: Accuracy
input_columns:
- sentence
- option1
- option2
scoring_column: answer
train_split: train
test_split: validation
shape: [1531]
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'TTTTTackling WinoGrande Schemas'
sota_paper_url: https://arxiv.org/pdf/2003.08380
sota_score: 0.854
sota_notes: "The SOTA paper uses the 3B-parameter T5 model as the backbone, finetuned on a TPU v2 via Google Colab."
sota_year: 2020
sota_venue: Preprint
dataset_paper_url: https://arxiv.org/abs/1907.10641
estimated_worst_score: 0.4664562
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to resolve ambiguous references in natural language sentences. You will be given a sentence containing a gap left for a possible referent. Your goal is to predict which referent is most likely to fill the gap based on commonsense reasoning. Your predictions will be scored against the `answer` column of the test set.
## Data
### Dataset Structure
The default config of the Winogrande dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"sentence": string, # A full sentence containing an ambiguous referent and two candidate antecedents. The ambiguous referent is represented by an underscore in the sentence.
"option1": string, # The first candidate antecedent mentioned in the sentence
"option2": string, # The second candidate antecedent mentioned in the sentence
"answer": string, # The index of the correct antecedent from the two options that the pronoun refers to (either "1" if the correct antecedent is option1 or "2" if the correct antecedent is option2)
}
```
An example entry in the train dataset contains the following:
- `sentence`: "Ian volunteered to eat Dennis's menudo after already having a bowl because _ despised eating intestine."
- `option1`: "Ian"
- `option2`: "Dennis"
- `answer`: "2"
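One common way to use these fields (an illustrative approach, not prescribed by the task) is to substitute each candidate into the gap and let a scoring model compare the two completed sentences:

```python
def fill_options(example):
    """Substitute each candidate antecedent into the underscore gap.

    Returns the two completed sentences; a scorer (e.g. a language model)
    could then pick the more plausible one.
    """
    s1 = example["sentence"].replace("_", example["option1"])
    s2 = example["sentence"].replace("_", example["option2"])
    return s1, s2

# The train example shown above.
example = {
    "sentence": "Ian volunteered to eat Dennis's menudo after already having "
                "a bowl because _ despised eating intestine.",
    "option1": "Ian",
    "option2": "Dennis",
}
candidate_1, candidate_2 = fill_options(example)
```

Here the gold answer is "2", so a good scorer should prefer `candidate_2`.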
> Note: The `answer` column is not available in the test set provided. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
Answer
```
And it should be of shape `(1531, 1)` — one prediction per row, aligned with the test set order.
Each row must contain the string "1" if the correct antecedent is `option1` or string "2" if the correct antecedent is `option2`. Do not include any extra columns or indices.
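As a format sanity check, here is a minimal sketch that validates and writes predictions in the required layout (the helper name `write_submission` is our own, not part of the task):

```python
import pandas as pd

def write_submission(preds, path="submission.csv"):
    """Write one "1"/"2" prediction per row under a single `Answer` header."""
    preds = [str(p) for p in preds]
    invalid = sorted(set(preds) - {"1", "2"})
    if invalid:
        raise ValueError(f"Predictions must be '1' or '2', got: {invalid}")
    # index=False keeps the file to exactly one column, as required.
    pd.DataFrame({"Answer": preds}).to_csv(path, index=False)

# Trivial constant baseline: always pick option1 for all 1531 test rows.
write_submission(["1"] * 1531)
```

The resulting file has the `Answer` header and shape `(1531, 1)`, matching the spec above.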
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric.
Evaluation will be done using the following script:
```py
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["answer"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for Winogrande test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format is different (e.g., has headers, specific columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Prepares the train and test splits from global_shared_data_dir and saves them into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'allenai/winogrande/winogrande_xl')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["validation"]
test = test.remove_columns(['answer'])
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'allenai/winogrande/winogrande_xl')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["validation"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["answer"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for Winogrande test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format is different (e.g., has headers, specific columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
CvMolecularPropertyPredictionQm9MeanAbsoluteError | Molecules and Proteins ML | Molecular Property Prediction | nimashoghi/qm9 | MeanAbsoluteError | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: CvMolecularPropertyPredictionQm9MeanAbsoluteError
dataset: nimashoghi/qm9
metric: MeanAbsoluteError
shape: (10000,1)
input_columns:
- pos
- atomic_numbers
- A
- B
- C
- natoms
- tags
- id
- composition
scoring_column: c_v
config: default
train_split: train
test_split: test
category: Molecules and Proteins ML
research_problem: Molecular Property Prediction
output_type: Object Classification
custom_gold_labels: false
custom_rad_class: false
additional_metrics: null
sota:
- sota_paper_title: 'EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations'
sota_paper_url: https://arxiv.org/pdf/2306.12059v3
sota_score: 0.021
sota_year: 2024
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1402.1274
estimated_worst_score: 132.63319396972656
optimal_score: 0.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: Your task is to predict a molecular property of small molecules which is known as the **heat capacity at constant volume (c_v)**. This is a fundamental thermodynamic property that measures the amount of energy required to raise the temperature of one mole of a substance by one degree Kelvin at constant volume. In the qm9 dataset, c_v values are computed at 298.15 K and given in units of cal/(mol·K). This property is crucial for understanding molecular thermodynamics, phase transitions, and energy storage capabilities of chemical compounds. Your predictions will be scored against the `c_v` column of the test set.
## Data
### Dataset Structure
The default config of qm9 has the following structure. Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"atomic_numbers": List[int64], # List of atomic numbers for each atom in the molecule (e.g., 1 for H, 6 for C, 8 for O)
"pos": List[List[float32]], # 3D Cartesian coordinates (x, y, z) for each atom, shape: [num_atoms, 3]
"A": float32, # Rotational constant A (GHz)
"B": float32, # Rotational constant B (GHz)
"C": float32, # Rotational constant C (GHz)
"natoms": int64, # Number of atoms in the molecule
"tags": List[int64], # Per-atom categorical labels
"composition": List[int64], # Vector encoding the count of each element in the molecule
"id": string, # Unique identifier for the molecule
}
```
An example entry of the train dataset contains the following:
```
{
'atomic_numbers': [6, 1, 1, 1, 1], # Atomic numbers (e.g., C=6, H=1)
'pos': [
[-0.01269999984651804,1.085800051689148,0.00800000037997961],
[0.002199999988079071,-0.006000000052154064,0.0020000000949949026],
[1.0117000341415405,1.4637999534606934,0.0003000000142492354],
[-0.5407999753952026,1.4474999904632568,-0.8766000270843506],
[-0.5238000154495239,1.4378999471664429,0.9064000248908997]
], # 3D coordinates for each atom
'A': 157.711807,
'B': 157.709976,
'C': 157.706985,
'natoms': 5, # Number of atoms (int64)
'tags': [2,2,2,2,2],
'id': '1_167',
'composition': [0,4,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
'c_v': 40.306999
}
```
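As one illustrative way to turn such a record into model inputs (our own baseline featurization, not part of the task specification), simple element counts can be derived from `atomic_numbers`:

```python
import numpy as np

def simple_features(example):
    """Crude per-molecule features: atom count plus H/C/N/O/F element counts.

    This is only a toy baseline featurization; real models for this task
    typically use the 3D coordinates in `pos` as well.
    """
    z = np.asarray(example["atomic_numbers"])
    element_counts = [int((z == el).sum()) for el in (1, 6, 7, 8, 9)]  # H, C, N, O, F
    return np.array([example["natoms"], *element_counts], dtype=np.float32)

# Methane from the example above: 5 atoms total, 4 hydrogens, 1 carbon.
feats = simple_features({"atomic_numbers": [6, 1, 1, 1, 1], "natoms": 5})
```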
> Note: The scoring column (`c_v`) has been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
val_dataset = load_from_disk('./data/val')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the predicted values for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
c_v
```
And it should be of shape `(10000,1)` — one prediction per row, aligned with the test set order.
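A minimal sketch of producing a correctly shaped file (the helper name `write_submission` and the constant baseline value are our own, purely for illustration):

```python
import numpy as np
import pandas as pd

def write_submission(values, path="submission.csv"):
    """Write one float c_v prediction per row under a single `c_v` header."""
    values = np.asarray(values, dtype=np.float32).reshape(-1)
    # index=False keeps the file to exactly one column, as required.
    pd.DataFrame({"c_v": values}).to_csv(path, index=False)

# Constant baseline: in practice, the mean c_v of the training split would
# be a more sensible fill value than this arbitrary placeholder.
write_submission(np.full(10000, 30.0, dtype=np.float32))
```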
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MeanAbsoluteError metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse, json, numpy as np, pandas as pd
# Conditional torch import - might be added by generated functions if needed
# import torch
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["c_v"])
import torch
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# Given nature is "unknown", we'll return scalar values as a reasonable default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format is different (e.g., has headers, specific columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
# Remove all scoring columns except c_v from train set (keep c_v for training)
train = train.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
# Remove all scoring columns including c_v from test set (c_v is the prediction target)
test = test.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save test dataset to disk
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the shared raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRS-Bench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
import torch
from datasets import load_from_disk
from torchmetrics import MeanAbsoluteError
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["c_v"])
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# The target's value range is unknown a priori, so uniform scalars are a simple default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a single header row followed by prediction values
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
GMolecularPropertyPredictionQm9MeanAbsoluteError | Molecules and Proteins ML | Molecular Property Prediction | nimashoghi/qm9 | MeanAbsoluteError | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: GMolecularPropertyPredictionQm9MeanAbsoluteError
dataset: nimashoghi/qm9
metric: MeanAbsoluteError
shape: (10000,1)
input_columns:
- pos
- atomic_numbers
- A
- B
- C
- natoms
- tags
- id
- composition
scoring_column: G
config: default
train_split: train
test_split: test
category: Molecules and Proteins ML
research_problem: Molecular Property Prediction
output_type: Object Classification
custom_gold_labels: true
custom_rad_class: false
additional_metrics: null
sota:
- sota_paper_title: 'EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations'
sota_paper_url: https://arxiv.org/pdf/2306.12059v3
sota_score: 7.53
sota_year: 2024
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1402.1274
estimated_worst_score: 11185110
optimal_score: 0.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: Your task is to predict a molecular property of small molecules known as the **Gibbs free energy (G)**: a fundamental thermodynamic property representing the free energy of formation at 298.15 K and 1 atmosphere. The Gibbs free energy determines the spontaneity of chemical reactions and the thermodynamic stability of molecular structures, combining both enthalpic and entropic contributions into a measure of a molecule's chemical potential. Values are given in meV, with more negative values indicating greater thermodynamic stability. Your predictions will be scored against the `G` column of the test set.
## Data
### Dataset Structure
The default config of qm9 has the following structure. Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"atomic_numbers": List[int64], # List of atomic numbers for each atom in the molecule (e.g., 1 for H, 6 for C, 8 for O)
"pos": List[List[float32]], # 3D Cartesian coordinates (x, y, z) for each atom, shape: [num_atoms, 3]
"A": float32, # Rotational constant A (GHz)
"B": float32, # Rotational constant B (GHz)
"C": float32, # Rotational constant C (GHz)
"natoms": int64, # Number of atoms in the molecule
"tags": List[int64], # Per-atom categorical labels
"id": string, # Unique molecule identifier (e.g., '1_167')
"composition": List[int64], # Vector encoding the count of each element in the molecule, indexed by atomic number
}```
An example entry of the train dataset contains the following:
```
{
'atomic_numbers': [6, 1, 1, 1, 1], # Atomic numbers (e.g., C=6, H=1)
'pos': [
[-0.01269999984651804,1.085800051689148,0.00800000037997961],
[0.002199999988079071,-0.006000000052154064,0.0020000000949949026],[1.0117000341415405,1.4637999534606934,0.0003000000142492354],
[-0.5407999753952026,1.4474999904632568,-0.8766000270843506],
[-0.5238000154495239,1.4378999471664429,0.9064000248908997]
], # 3D coordinates for each atom
'A': 157.711807,
'B': 157.709976,
'C': 157.706985,
'natoms': 5, # Number of atoms (int64)
'tags': [2,2,2,2,2],
'id': '1_167',
'composition': [0,4,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
'G': -9928.413086
}
```
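The per-molecule `composition` vector can be recovered directly from `atomic_numbers`. Below is a minimal sketch, assuming `composition` is indexed by atomic number (consistent with the example above, where index 1 holds the four hydrogens and index 6 the single carbon; the helper name is illustrative, not part of the dataset API):

```python
import numpy as np

def composition_from_atomic_numbers(atomic_numbers, n_elements=120):
    """Count how many atoms of each element a molecule contains.

    Returns a vector v where v[Z] is the number of atoms with atomic number Z.
    """
    return np.bincount(atomic_numbers, minlength=n_elements)

# Methane from the example entry: one carbon (Z=6) and four hydrogens (Z=1)
comp = composition_from_atomic_numbers([6, 1, 1, 1, 1])
```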
> Note: The scoring column (`G`) has been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
val_dataset = load_from_disk('./data/val')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the predicted values for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
G
```
And it should be of shape `(10000,1)` — one prediction per row, aligned with the test set order.
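A minimal sketch of writing a correctly shaped submission file (the zero predictions are placeholders, not a real model's output):

```python
import numpy as np
import pandas as pd

n_test = 10000  # size of the test split for this task
predictions = np.zeros(n_test)  # placeholder; replace with your model's G predictions (in meV)

# One `G` value per row, preserving the original test-set order
pd.DataFrame({"G": predictions}).to_csv("submission.csv", index=False)
```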
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MeanAbsoluteError metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse
import json
import numpy as np
import pandas as pd
import torch
from datasets import load_from_disk
from torchmetrics import MeanAbsoluteError
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["G"])
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# The target's value range is unknown a priori, so uniform scalars are a simple default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a single header row (`G`) followed by one prediction per line
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
val = dataset['val']
test = dataset['test']
# Scale G labels by 1000
train = train.map(lambda example: {'G': example['G'] * 1000})
val = val.map(lambda example: {'G': example['G'] * 1000})
test = test.map(lambda example: {'G': example['G'] * 1000})
# Remove all scoring columns except G from train set (keep G for training)
train = train.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
val = val.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
val.save_to_disk(os.path.join(agent_data_mount_dir, 'val'))
# Remove all scoring columns including G from test set (G is the prediction target)
test = test.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save test dataset to disk
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the shared raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRS-Bench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Copies the labeled test split from global_shared_data_dir into agent_data_mount_dir,
and the agent's submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
# Scale G labels by 1000
test = test.map(lambda x: {'G': x['G'] * 1000})
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the shared raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRS-Bench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
import torch
from datasets import load_from_disk
from torchmetrics import MeanAbsoluteError
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["G"])
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# The target's value range is unknown a priori, so uniform scalars are a simple default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a single header row (`G`) followed by one prediction per line
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import argparse
import pandas as pd
from datasets import load_from_disk
def main(
global_shared_data_dir,
output_directory
):
"""
Creates gold_submission.csv files with scaled G labels (multiplied by 1000)
"""
# Load the QM9 dataset from the shared data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
# Scale G labels by 1000
scaled_labels = [label * 1000 for label in test['G']]
# Save as gold_submission.csv
output_file = os.path.join(output_directory, 'gold_submission.csv')
pd.Series(scaled_labels).to_csv(output_file, index=False, header=['G'])
print(f"Saved scaled G labels to {output_file}")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV with scaled labels.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| null |
GraphRegressionZincMae | Molecules and Proteins ML | Graph Regression | graphs-datasets/ZINC | MAE | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: GraphRegressionZincMae
dataset: graphs-datasets/ZINC
category: Molecules and Proteins ML
research_problem: Graph Regression
output_type: Text Generation
config: default
train_split: train
test_split: test
input_columns:
- x
- node_feat
- edge_index
- edge_attr
- num_nodes
scoring_column: y
shape:
- 5000
custom_gold_labels: false
custom_rad_class: false
metric: MAE
additional_metrics: null
sota:
- sota_paper_title: 'An end-to-end attention-based approach for learning on graphs'
sota_paper_url: https://arxiv.org/pdf/2402.10793v2
sota_score: 0.017
sota_year: 2024
sota_venue: Nature Communications
dataset_paper_url: https://pubs.acs.org/doi/10.1021/ci049714+
estimated_worst_score: 9.699924
optimal_score: 0.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is a molecules-and-proteins task: perform Graph Regression on graphs-datasets/ZINC. Your predictions will be scored against the `y` column of the test set.
## Data
### Dataset Structure
The default config of graphs-datasets/ZINC has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
``` {'node_feat': List(List(Value('int64'))), 'edge_index': List(List(Value('int64'))), 'edge_attr': List(List(Value('int64'))), 'y': List(Value('float64')), 'num_nodes': Value('int64')} ```
Train data can be loaded using `load_from_disk('./data/train')` and test data using `load_from_disk('./data/test')`.
Note that the scoring column `y` has been removed from the test data.
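For illustration, here is one way to materialize a single graph's connectivity from these columns, assuming `edge_index` follows the common COO convention of two parallel lists `[sources, targets]` (a PyTorch Geometric-style convention; this is an assumption, not stated by the dataset card):

```python
import numpy as np

def to_dense_adjacency(edge_index, num_nodes):
    """Build a dense adjacency matrix from a COO edge list.

    Assumes edge_index = [sources, targets], two parallel lists.
    """
    adj = np.zeros((num_nodes, num_nodes), dtype=np.int64)
    src, dst = edge_index
    adj[src, dst] = 1  # fancy indexing sets one entry per edge
    return adj

# A toy 3-node path graph, stored with both edge directions
adj = to_dense_adjacency([[0, 1, 1, 2], [1, 0, 2, 1]], num_nodes=3)
```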
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` y ```
And it should be of shape (5000,).
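As a quick format check, a constant-prediction baseline can be written as follows (the constant here is a placeholder; in practice you would use, e.g., the mean of the training `y` values):

```python
import numpy as np
import pandas as pd

n_test = 5000
baseline_value = 0.0  # placeholder; replace with the mean of the training `y` column

# One `y` value per row, preserving the test-set order
pd.DataFrame({"y": np.full(n_test, baseline_value)}).to_csv("submission.csv", index=False)
```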
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MAE metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse, json
import ast
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["y"])
def evaluate(predictions, labels):
"""
Compute Mean Absolute Error (MAE) for graph regression on ZINC.
Returns {"MAE": score}.
"""
# Convert to numeric numpy arrays
clean_predictions = []
for p in predictions:
if isinstance(p, str):
parsed = ast.literal_eval(p) # safely turns "[0.95]" into a Python list [0.95]
if isinstance(parsed, list):
clean_predictions.append(parsed[0]) # take the first element if it's a single-item list
else:
clean_predictions.append(float(parsed))
else:
clean_predictions.append(float(p))
predictions = clean_predictions
y_true = np.asarray(labels, dtype=float)
y_pred = np.asarray(predictions, dtype=float)
# Squeeze trailing singleton dims (e.g., shape (N,1) -> (N,))
if y_pred.ndim > 1 and y_pred.shape[1] == 1:
y_pred = y_pred.squeeze(-1)
if y_true.ndim > 1 and y_true.shape[1] == 1:
y_true = y_true.squeeze(-1)
if y_pred.shape != y_true.shape:
raise ValueError(
f"Shape mismatch: predictions {y_pred.shape} vs labels {y_true.shape}"
)
if not np.all(np.isfinite(y_pred)):
raise ValueError("Predictions contain non-finite values (NaN/Inf).")
if not np.all(np.isfinite(y_true)):
raise ValueError("Labels contain non-finite values (NaN/Inf).")
mae = float(np.mean(np.abs(y_pred - y_true))) if y_true.size > 0 else 0.0
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to have a single header row (`y`) followed by one prediction per line
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'graphs-datasets/ZINC/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['x', 'node_feat', 'edge_index', 'edge_attr', 'y', 'num_nodes'])
test = select_columns(test, ['x', 'node_feat', 'edge_index', 'edge_attr', 'num_nodes'])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the shared raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRS-Bench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
    Copies submission.csv from agent_log_dir into agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'graphs-datasets/ZINC/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import ast
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["y"])
def evaluate(predictions, labels):
"""
Compute Mean Absolute Error (MAE) for graph regression on ZINC.
Returns only {"Mae": score}.
"""
# Convert to numeric numpy arrays
clean_predictions = []
for p in predictions:
if isinstance(p, str):
parsed = ast.literal_eval(p) # safely turns "[0.95]" into a Python list [0.95]
if isinstance(parsed, list):
clean_predictions.append(parsed[0]) # take the first element if it's a single-item list
else:
clean_predictions.append(float(parsed))
else:
clean_predictions.append(float(p))
predictions = clean_predictions
y_true = np.asarray(labels, dtype=float)
y_pred = np.asarray(predictions, dtype=float)
# Squeeze trailing singleton dims (e.g., shape (N,1) -> (N,))
if y_pred.ndim > 1 and y_pred.shape[1] == 1:
y_pred = y_pred.squeeze(-1)
if y_true.ndim > 1 and y_true.shape[1] == 1:
y_true = y_true.squeeze(-1)
if y_pred.shape != y_true.shape:
raise ValueError(
f"Shape mismatch: predictions {y_pred.shape} vs labels {y_true.shape}"
)
if not np.all(np.isfinite(y_pred)):
raise ValueError("Predictions contain non-finite values (NaN/Inf).")
if not np.all(np.isfinite(y_true)):
raise ValueError("Labels contain non-finite values (NaN/Inf).")
mae = float(np.mean(np.abs(y_pred - y_true))) if y_true.size > 0 else 0.0
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row and contain only prediction values.
        # Adjust if your submission format differs (e.g., different columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
MathQuestionAnsweringSVAMPAccuracy | Math | Math Question Answering | ChilleD/SVAMP | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
logging_info:
name: MathQuestionAnsweringSVAMPAccuracy
category: Math
research_problem: Math Question Answering
output_type: text-generation
dataset: ChilleD/SVAMP
metric: Accuracy
input_columns:
- question_concat
scoring_column: Answer
  shape: [300, 1]
config: default
train_split: train
test_split: test
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems'
sota_paper_url: https://arxiv.org/pdf/2404.14963v5
sota_score: 0.942
sota_year: 2026
sota_venue: Frontiers of Computer Science
dataset_paper_url: https://arxiv.org/abs/2103.07191
estimated_worst_score: 0.0
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to solve math word problems. Each example presents a short story followed by a specific question. Read the text and predict the correct numerical answer. Your predictions will be scored against the `Answer` column of the test set.
## Data
### Dataset Structure
The default config of SVAMP has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"ID": string, # Unique identifier for the problem
"Body": string, # The narrative or context text describing the scenario
"Question": string, # The actual math question asked based on the Body
"Equation": string, # The ground-truth equation used to compute the answer
"Answer": string, # The correct numerical solution to the problem (as text)
"Type": string, # The problem category/type. Is one of ["Subtraction", "Addition", "Common-Division", "Multiplication"].
"question_concat": string # Concatenation of Body and Question into one text field
}
```
An example entry in the train dataset contains the following:
- `ID`: "chal-777"
- `Body`: "There are 87 oranges and 290 bananas in Philip's collection. If the bananas are organized into 2 groups and oranges are organized into 93 groups"
- `Question`: "How big is each group of bananas?"
- `Equation`: "( 290.0 / 2.0 )"
- `Answer`: "145"
- `Type`: "Common-Division"
- `question_concat`: "There are 87 oranges and 290 bananas in Philip's collection. If the bananas are organized into 2 groups and oranges are organized into 93 groups How big is each group of bananas?"
> Note: The scoring columns (`Equation`, `Answer`, `Type`) have been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
Answer
```
And it should be of shape `(300,1)` — one prediction per row, aligned with the test set order.
Each row must contain the numerical answer to the question. Do not include any extra columns or indices.
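A minimal sketch of producing a correctly shaped `submission.csv` with pandas; the `preds` values here are placeholders, not real predictions:

```python
import pandas as pd

# Hypothetical placeholder predictions; replace these with real model outputs,
# one per test example, in the original test-set row order.
preds = [145, 13, 27]

submission = pd.DataFrame({"Answer": preds})
submission.to_csv("submission.csv", index=False)  # header row 'Answer', no index column

# Round-trip check: a single 'Answer' column, one prediction per row.
check = pd.read_csv("submission.csv", header=0)
assert list(check.columns) == ["Answer"]
assert check.shape == (len(preds), 1)
```

Writing with `index=False` is what keeps the extra index column out of the file.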
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the accuracy metric.
Evaluation will be done using the following script:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "Answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'Answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
answers = list(ds["Answer"])
return answers
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
    # Compare numerically via float() so non-integer answers (e.g. "32.5") don't crash int().
    correct = np.fromiter(
        (float(p) == float(y) for p, y in zip(predictions, labels)), dtype=bool
    )
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for SVAMP test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row and contain only prediction values.
        # Adjust if your submission format differs (e.g., different columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Loads the train and test splits from global_shared_data_dir, removes the
    scoring columns from the test split, and saves both into agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory (unused here).
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ChilleD/SVAMP/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["test"]
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['Equation', 'Answer', 'Type'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
    Copies submission.csv from agent_log_dir into agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ChilleD/SVAMP/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "Answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'Answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
answers = list(ds["Answer"])
return answers
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
    # Compare numerically via float() so non-integer answers (e.g. "32.5") don't crash int().
    correct = np.fromiter(
        (float(p) == float(y) for p, y in zip(predictions, labels)), dtype=bool
    )
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for SVAMP test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row and contain only prediction values.
        # Adjust if your submission format differs (e.g., different columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
QuestionAnsweringDuoRCAccuracy | Question Answering | Question Answering | ibm-research/duorc | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: QuestionAnsweringDuoRCAccuracy
category: Question Answering
research_problem: Question Answering
output_type: Text Classification
dataset: ibm-research/duorc
config: ParaphraseRC
metric: Accuracy
input_columns:
- title
- plot
- question
scoring_column: answers
train_split: train
test_split: test
shape: [15857, 2]
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'Grid Search Hyperparameter Benchmarking of BERT, ALBERT, and LongFormer on DuoRC'
sota_paper_url: https://arxiv.org/pdf/2101.06326
sota_score: 0.4648
sota_notes: Best score achieved in SOTA paper using ALBERT model.
sota_year: 2021
sota_venue: Preprint
dataset_paper_url: https://arxiv.org/abs/1804.07927
estimated_worst_score: 0.0
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to answer questions given a large context. You will be provided the title of a story and the context surrounding its plot, then asked a question whose answer you should retrieve from the context. Your predictions will be scored against a list of candidate answers provided in the `answers` column of the test set.
## Data
### Dataset Structure
The default config of the DuoRC dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"plot_id": string, # The ID of the story
"plot": string, # The plot of the story
"title": string, # The title of the story
"question": string, # A question regarding a detail that may or may not be provided by the story
"answers": list[string], # A list of candidate answers to the question
"no_answer": bool, # A boolean indicating whether the question has no answer in the context.
}
```
An example entry in the train dataset contains the following:
- `plot_id`: "/m/03vyhn"
- `plot`: "Set in the second half of the 22nd century, Mars has been 84% terraformed, allowing humans to walk on the surface without pressure suits. Martian society has become matriarchal, with women in most positions of authority. The story concerns police officer Melanie Ballard (Natasha Henstridge), second in command of a team alongside Sergeant Jericho (Jason Statham) sent to a remote mining outpost to transport prisoner Desolation Williams (Ice Cube). Arriving at the remote mining town, Ballard finds all of the people missing. She learns that they had discovered an underground doorway created by an ancient Martian civilization."
- `title`: "Ghosts of Mars"
- `question`: "Who is colonized by a high tech company?"
- `answers`: ["Humans on Mars", "Mars"]
- `no_answer`: false
> Note: The `answers` and `no_answer` columns are not available in the test set provided. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
validation_dataset = load_from_disk('./data/validation')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
has_answer,answer
```
And it should be of shape `(15857, 2)` — one prediction per row, aligned with the test set order.
The `has_answer` column should produce a `true` or `false` value indicating whether the question has an answer provided in the supporting context.
The `answer` column should contain the answer if the `has_answer` column is true. If `has_answer` is false, any text in the `answer` column is ignored.
Do not include any extra columns or indices.
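A minimal sketch of writing the two-column submission with pandas; the rows here are placeholders, not real predictions:

```python
import pandas as pd

# Hypothetical placeholder rows; replace with real model outputs, aligned
# with the test-set row order. The answer text is ignored when has_answer is False.
rows = [
    {"has_answer": True, "answer": "Mars"},
    {"has_answer": False, "answer": "n/a"},
]

submission = pd.DataFrame(rows, columns=["has_answer", "answer"])
submission.to_csv("submission.csv", index=False)

# Round-trip check: exactly the two expected columns, one row per test question.
check = pd.read_csv("submission.csv", header=0)
assert list(check.columns) == ["has_answer", "answer"]
assert check.shape == (len(rows), 2)
```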
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric. For each question, if the submitted `has_answer` is `false`, the prediction is only considered correct if the `no_answer` label is `true`. If the submitted `has_answer` is `true`, the prediction is only considered correct if the `no_answer` label is `false` and the submitted `answer` is a case-insensitive exact match of one of the `answers` in the label.
Evaluation will be done using the following script:
```py
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
    labels = list(ds["answers"])
    no_answers = list(ds["no_answer"])
return labels, no_answers
def is_correct_answer(submission, candidate_answers):
submission = submission.lower()
for candidate_answer in candidate_answers:
if candidate_answer.lower() == submission:
return True
return False
def evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers):
"""
Returns a dict of metric_name -> value
"""
correct = 0
total = 0
for submission_answer, submission_has_answer, label_answer, label_no_answer in zip(
submission_answers, submission_has_answers, label_answers, label_no_answers
):
total += 1
if not submission_has_answer:
correct += label_no_answer
else:
            correct += is_correct_answer(submission_answer, label_answer)
return {"Accuracy": correct/total}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for DuoRC test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
label_answers, label_no_answers = load_test_answers()
n_test_samples = len(label_answers)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row and contain only prediction values.
        # Adjust if your submission format differs (e.g., different columns).
submission_df = pd.read_csv(a.submission_file, header=0)
submission_answers = list(submission_df["answer"])
submission_has_answers = list(submission_df["has_answer"])
if len(submission_answers) != n_test_samples:
raise ValueError(
f"Submission file row count ({len(submission_answers)}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Loads the train, validation, and test splits from global_shared_data_dir,
    removes the scoring columns from the test split, and saves all splits into
    agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory (unused here).
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ibm-research/duorc/ParaphraseRC')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]
train = train.remove_columns(['question_id'])
validation = validation.remove_columns(['question_id'])
test = test.remove_columns(['question_id', 'answers'])
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
validation.save_to_disk(os.path.join(agent_data_mount_dir, 'validation'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
    Copies submission.csv from agent_log_dir into agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ibm-research/duorc/ParaphraseRC')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
labels = list(ds["answers"])
no_answers = list(ds["no_answer"])
return labels, no_answers
def is_correct_answer(submission, candidate_answers):
submission = submission.lower()
for candidate_answer in candidate_answers:
if candidate_answer.lower() == submission:
return True
return False
def evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers):
"""
Returns a dict of metric_name -> value
"""
correct = 0
total = 0
for submission_answer, submission_has_answer, label_answer, label_no_answer in zip(
submission_answers, submission_has_answers, label_answers, label_no_answers
):
total += 1
if not submission_has_answer:
correct += label_no_answer
else:
is_correct = is_correct_answer(submission_answer, label_answer)
correct += is_correct
return {"Accuracy": correct/total}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for DuoRC test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
label_answers, label_no_answers = load_test_answers()
n_test_samples = len(label_answers)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a header row with "answer" and "has_answer" columns.
# keep_default_na=False preserves empty strings instead of converting them to NaN.
submission_df = pd.read_csv(
a.submission_file,
header=0,
keep_default_na=False
)
submission_answers = list(submission_df["answer"])
submission_has_answers = list(submission_df["has_answer"])
if len(submission_answers) != n_test_samples:
raise ValueError(
f"Submission file row count ({len(submission_answers)}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import numpy as np
from pathlib import Path
from datasets import load_from_disk
def parse_args():
parser = argparse.ArgumentParser(
description="Make gold labels for the DuoRC dataset for Question Answering."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--output-directory",
default=Path(__file__).resolve().parent,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
return parser.parse_args()
def main(args):
dataset_source_fpath = os.path.join(
Path(args.global_shared_data_dir),
'ibm-research/duorc/ParaphraseRC'
)
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
df = test.to_pandas()
df_labels = df[["no_answer", "answers"]].rename(
columns={"no_answer": "has_answer"}
)
df_labels["answers"] = df_labels["answers"].apply(lambda x: "" if len(x) == 0 else x[0])
df_labels["has_answer"] = np.logical_not(df_labels["has_answer"])
save_path = (
Path(args.output_directory).expanduser() /
"gold_submission.csv"
)
df_labels.to_csv(save_path, index=False)
if __name__ == '__main__':
args = parse_args()
main(args)
| null |
QuestionAnsweringEli5Rouge1 | Question Answering | Question Answering | Pavithree/eli5 | Rouge1 | "metric_lower_is_better: false\nfile_export_globs:\n - submission.csv\ncontainer_python_requirement(...TRUNCATED) | "\n# Overview\n## Task Description\nThis is a Machine Learning project and your goal is to complete (...TRUNCATED) | "# Copyright (c) Meta Platforms, Inc. and affiliates.\n# All rights reserved.\n#\n# This source code(...TRUNCATED) | "#!/usr/bin/env python3\n# Copyright (c) Meta Platforms, Inc. and affiliates.\n# All rights reserved(...TRUNCATED) | "#!/usr/bin/env python3\n# Copyright (c) Meta Platforms, Inc. and affiliates.\n# All rights reserved(...TRUNCATED) | "# Copyright (c) Meta Platforms, Inc. and affiliates.\n# All rights reserved.\n#\n# This source code(...TRUNCATED) | null |
# AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents
The AI Research Science Benchmark (AIRS-Bench) quantifies the autonomous research abilities of LLM agents in the area of machine learning. AIRS-Bench comprises 20 tasks from state-of-the-art machine learning papers spanning diverse domains: NLP, Code, Math, biochemical modelling, and time series forecasting.
Each task is specified by a ⟨problem, dataset, metric⟩ triplet and a SOTA value. The agent receives the full task specification and is expected to develop a solution that generates predictions on a test set, which are then evaluated and compared against the state-of-the-art (SOTA) score from a published paper.
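The triplet-plus-SOTA specification above can be sketched as a small record type. This is a minimal illustration, not benchmark code: the `TaskSpec` type and `beats_sota` helper are hypothetical, though the example values are taken from the `CodeGenerationAPPSPassAt5` task metadata in this dataset.

```python
from typing import NamedTuple

class TaskSpec(NamedTuple):
    """One AIRS-Bench task: a <problem, dataset, metric> triplet plus a SOTA target."""
    research_problem: str
    dataset: str
    metric: str
    sota_score: float

# Illustrative values from the CodeGenerationAPPSPassAt5 task metadata.
task = TaskSpec(
    research_problem="Code Generation",
    dataset="codeparrot/apps",
    metric="Pass@5",
    sota_score=0.187,
)

def beats_sota(agent_score: float, spec: TaskSpec) -> bool:
    # For metrics where higher is better, the agent surpasses the published
    # state of the art when its score exceeds the recorded SOTA value.
    return agent_score > spec.sota_score

print(beats_sota(0.20, task))  # → True: 0.20 exceeds the 0.187 SOTA
```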
For full details see the paper and the GitHub repository.
## Dataset Description
This dataset contains the task specification files for the 20 AIRS-Bench tasks, formatted for use with the aira-dojo agentic harness.
### Categories
| Category | # Tasks |
|---|---|
| Text Classification | 2 |
| Question Answering | 4 |
| Text Extraction and Matching | 3 |
| Molecules and Proteins ML | 5 |
| Time Series | 3 |
| Code | 2 |
| Math | 1 |
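As a quick consistency check, the per-category counts in the table above add up to the 20 tasks in the benchmark:

```python
# Category counts transcribed from the table above.
category_counts = {
    "Text Classification": 2,
    "Question Answering": 4,
    "Text Extraction and Matching": 3,
    "Molecules and Proteins ML": 5,
    "Time Series": 3,
    "Code": 2,
    "Math": 1,
}
total = sum(category_counts.values())
print(total)  # → 20
```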
### Data Fields
| Column | Type | Description |
|---|---|---|
| `task` | string | Task identifier (directory name, e.g. `SentimentAnalysisYelpReviewFullAccuracy`) |
| `category` | string | High-level domain category (e.g. Text Classification, Code) |
| `research_problem` | string | The specific research problem the task addresses |
| `dataset` | string | HuggingFace dataset identifier used for the task |
| `metric` | string | Evaluation metric (e.g. Accuracy, MeanAbsoluteError, Rouge1) |
| `metadata.yaml` | string | Full content of the task metadata file (dataset config, SOTA info, requirements) |
| `project_description.md` | string | The task prompt provided to the agent |
| `prepare.py` | string | Dataset preparation script (creates train/test splits, hides test labels) |
| `evaluate_prepare.py` | string | Evaluation data preparation script (creates test labels for scoring) |
| `evaluate.py` | string | Evaluation script used to score the agent's submission |
| `custom_labels.py` | string | Optional custom label handler for non-standard label formats (empty if unused) |
| `utils.py` | string | Optional shared utilities across task scripts (empty if unused) |
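Given the columns above, a single dataset row can be treated as a flat string-valued record. The sketch below uses placeholder values (it is not a real row) to show one practical use: detecting which optional helper scripts a task actually provides, since unused script columns are stored as empty strings.

```python
# A hypothetical row following the documented schema; all values are placeholders.
row = {
    "task": "CodeGenerationAPPSPassAt5",
    "category": "Code",
    "research_problem": "Code Generation",
    "dataset": "codeparrot/apps",
    "metric": "Pass@5",
    "metadata.yaml": "metric_lower_is_better: false\n...",
    "project_description.md": "# Overview\n...",
    "prepare.py": "#!/usr/bin/env python3\n...",
    "evaluate_prepare.py": "...",
    "evaluate.py": "...",
    "custom_labels.py": "",  # optional: empty when the task does not use it
    "utils.py": "",          # optional: empty when the task does not use it
}

# Empty strings are falsy, so a truthiness check filters out unused scripts.
optional = ["custom_labels.py", "utils.py"]
provided = [name for name in optional if row.get(name)]
print(provided)  # → []: this illustrative row uses neither optional script
```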
## Citation
```bibtex
@article{lupidi2026airsbenchsuitetasksfrontier,
  title={AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents},
  author={Alisia Lupidi and Bhavul Gauri and Thomas Simon Foster and Bassel Al Omari and Despoina Magka and Alberto Pepe and Alexis Audran-Reiss and Muna Aghamelu and Nicolas Baldwin and Lucia Cipolina-Kun and Jean-Christophe Gagnon-Audet and Chee Hau Leow and Sandra Lefdal and Hossam Mossalam and Abhinav Moudgil and Saba Nazir and Emanuel Tewolde and Isabel Urrego and Jordi Armengol Estape and Amar Budhiraja and Gaurav Chaurasia and Abhishek Charnalia and Derek Dunfield and Karen Hambardzumyan and Daniel Izcovich and Martin Josifoski and Ishita Mediratta and Kelvin Niu and Parth Pathak and Michael Shvartsman and Edan Toledo and Anton Protopopov and Roberta Raileanu and Alexander Miller and Tatiana Shavrina and Jakob Foerster and Yoram Bachrach},
  year={2026},
  eprint={2602.06855},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.06855},
}
```
## License
This dataset is released under the CC BY-NC 4.0 license.