# ERP-GPT-EU
This repository includes the data and code needed to deploy ERP-GPT-EU, a tool-augmented GPT for querying and interpreting high-resolution European soil data.
ERP-GPT-EU combines place resolution via European GADM administrative regions with soil data access from the LUCAS-MEGA fused dataset. It can resolve user-specified place names or coordinates, retrieve the relevant soil properties, and return soil data outputs through an API server. The GPT is aimed at soil scientists, agronomists, land managers, policymakers, and data analysts.
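The retrieval logic itself lives in the API server. As a rough sketch of the coordinate-to-record step only, assuming a simple table of point samples (the record layout and all values below are made up for illustration and are not the real LUCAS-MEGA format):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical soil sample points: (lat, lon, properties).
RECORDS = [
    (48.85, 2.35, {"ph": 6.8, "organic_carbon_gkg": 21.4}),
    (52.52, 13.40, {"ph": 5.9, "organic_carbon_gkg": 17.0}),
    (40.42, -3.70, {"ph": 7.6, "organic_carbon_gkg": 9.3}),
]

def nearest_soil_record(lat, lon):
    """Return the soil properties of the sample point closest to (lat, lon)."""
    return min(RECORDS, key=lambda r: haversine_km(lat, lon, r[0], r[1]))[2]
```

A real deployment would use a spatial index rather than a linear scan, but the lookup contract (coordinates in, soil properties out) is the same.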
The repository contains both the backend API server and the GPT configuration resources. The API server provides access to local soil and geographic data files, while the gpt_resources directory contains the prompt, knowledge file, OpenAPI schema, icon, and starting questions needed to configure the custom GPT.
## Deployment
### API Server
Install the required Python packages:
    pip install -r requirements.txt
Start the API server:
    python app.py --port=<PORT>
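The contents of app.py are not reproduced here. As a hedged sketch of the general shape such a server takes (a `--port` flag plus JSON-over-HTTP routes), here is a stdlib-only stand-in; the `/health` route and its payload are invented for illustration and are not the repository's actual endpoints:

```python
import argparse
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SoilHandler(BaseHTTPRequestHandler):
    """Serves JSON responses; real routes would query the local soil data files."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def main():
    # Mirrors the --port=<PORT> invocation used above.
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=8000)
    args = parser.parse_args()
    HTTPServer(("", args.port), SoilHandler).serve_forever()

if __name__ == "__main__":
    main()
```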
For local debugging, you can expose the server through a temporary public tunnel. For example, using localtunnel:
    npx localtunnel --port <PORT>
This will return a public URL such as:
    https://your-subdomain.loca.lt
Use this public URL when configuring the GPT action schema.
### GPT Setup
Go to ChatGPT and create a custom GPT.
Use the files in gpt_resources/ to set up the GPT:
    gpt_resources/
    ├── icon.png
    ├── knowledge.md
    ├── prompt.md
    ├── schema.json
    └── starts.md
File descriptions:

- `schema.json` (most important): the OpenAPI schema for GPT Actions. In the OpenAI custom GPT setup, this file defines the API endpoints the GPT can call, including input parameters, response formats, and the public server URL.
- `prompt.md` (most important): the top-level instructions for the GPT. In the OpenAI custom GPT setup, this content should be placed in the Instructions field. It defines the GPT's role, behavior, tool-use rules, response style, and constraints.
- `knowledge.md`: the knowledge file uploaded to the GPT. In the OpenAI custom GPT setup, it provides reference material the GPT can search when answering questions. It should describe the dataset, available soil variables, geographic scope, terminology, and important usage notes.
- `icon.png`: icon for the GPT.
- `starts.md`: starting questions for users.
Before uploading or pasting schema.json, replace the default server URL:
    {
      "servers": [
        {
          "url": "https://api.erp-soilgpt.uk/"
        }
      ]
    }
with your own public API server address, for example:
    {
      "servers": [
        {
          "url": "https://your-subdomain.loca.lt/"
        }
      ]
    }
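If you redeploy often, the URL swap can also be scripted instead of edited by hand. The helper below is illustrative (the function name and interface are not part of the repository); it rewrites the first entry of the OpenAPI `servers` array and returns the updated schema text:

```python
import json

def set_server_url(schema_text, new_url):
    """Replace servers[0].url in an OpenAPI schema string with new_url."""
    schema = json.loads(schema_text)
    # Ensure a servers array exists before overwriting the first entry.
    if not schema.get("servers"):
        schema["servers"] = [{}]
    schema["servers"][0]["url"] = new_url
    return json.dumps(schema, indent=2)

if __name__ == "__main__":
    # Read schema.json in place and point it at the tunnel URL.
    with open("schema.json") as f:
        updated = set_server_url(f.read(), "https://your-subdomain.loca.lt/")
    with open("schema.json", "w") as f:
        f.write(updated)
```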
After saving the GPT configuration, test it with a simple soil or location query to confirm that the GPT can call your API server successfully.