Paper in the making
AD-Trajectories Dataset
This dataset was created for the Master's thesis "From Broadcast to 3D: A Deep Learning Approach for Tennis Trajectory and Spin Estimation" by Alexandra Göppert at the University of Augsburg, Chair of Machine Learning and Computer Vision. The AD-Rallies dataset is a large-scale synthetic dataset generated with the MuJoCo physics engine. It was built to help bridge the synthetic-to-real gap by providing physically accurate models of aerodynamic forces, such as the Magnus effect, and of complex ball-court interactions.
Dataset Overview
The dataset comprises approximately 3.2 million synthetic tennis rallies. Each rally starts with a ball toss and a serve; up to 4 further basic strokes (groundstroke, volley, lob, short, and smash) can then be added. All physical kinematics, including the 3D positions, linear velocities, and angular velocities (spin), are captured at a high temporal resolution of 500 frames per second (fps), corresponding to a time step of 0.002 seconds.
The dataset is saved as a single .tar file because it consists of 3.2 million .npz files. The tar file has a size of 87 GB. Each .npz file contains the position, velocity, and angular velocity of the ball over the whole rally.
The .npz files are named as follows: toss_xxxxx_branch_yyy.npz or toss_xxxxx_branch_yyy_deadend.npz
Here, xxxxx is the index of one of the 20,000 tosses that were initially simulated and that start each rally. Combined with a serve, a toss forms the stem of the rallies. For each stem rally, up to 4 returns are added, numbered in increasing order as branch_yyy. At most 160 rallies can emerge from one toss-serve combination. If no feasible return is found, the rally is not extended any further (even though it has fewer than a total of 6 shots). Such a rally is marked with "_deadend".
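The naming scheme above can be parsed mechanically. Below is a minimal sketch of a helper for doing so; the helper name is my own, and the regex assumes only the pattern described above (it does not assume a fixed zero-padding width for xxxxx or yyy):

```python
import re

# Hypothetical helper (not shipped with the dataset): split a trajectory
# filename such as "toss_00042_branch_003.npz" or
# "toss_00042_branch_003_deadend.npz" into its components.
FILENAME_RE = re.compile(
    r"^toss_(?P<toss>\d+)_branch_(?P<branch>\d+)(?P<deadend>_deadend)?\.npz$"
)

def parse_trajectory_name(name):
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    return {
        "toss": int(m.group("toss")),        # index of the initial toss
        "branch": int(m.group("branch")),    # branch number within that stem
        "deadend": m.group("deadend") is not None,  # rally could not be extended
    }
```

This makes it easy to, for example, group all branches belonging to one toss-serve stem or to filter out the "_deadend" rallies.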
Data Structures per Trajectory
Inside each .npz, you will find exactly seven .npy files. These NumPy arrays store the spatial, temporal, and camera data for that specific sequence:
- positions.npy: the 3D position of the ball (x, y, z) throughout the rally, recorded at a resolution of 0.002 s.
- velocities.npy: the linear velocity of the ball relative to the world coordinate system, recorded at a resolution of 0.002 s.
- rotations.npy: the angular velocity (spin) of the ball in all 3 directions, recorded at a resolution of 0.002 s.
The position and velocity are defined relative to the 3D world coordinate system, which is defined as follows:

The ball spin (rotations.npy) is defined relative to the ball's local coordinate system, whose axes are defined as follows:

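A single trajectory file can be loaded with NumPy. The sketch below assumes the .npz keys match the .npy names listed above (positions, velocities, rotations); the function name is my own:

```python
import numpy as np

def load_trajectory(path):
    """Load one rally's arrays from an .npz file of this dataset.

    Assumes the archive keys follow the .npy names listed above.
    """
    with np.load(path) as data:
        positions = np.asarray(data["positions"])    # (T, 3) x, y, z in the world frame
        velocities = np.asarray(data["velocities"])  # (T, 3) linear velocity, world frame
        rotations = np.asarray(data["rotations"])    # (T, 3) spin, ball-local frame
    # Time axis: samples are 0.002 s apart (500 fps).
    t = np.arange(len(positions)) * 0.002
    return t, positions, velocities, rotations
```

For example, `load_trajectory("toss_00000_branch_000.npz")` (a placeholder path) would return the time axis together with the three kinematic arrays.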
Download the Dataset
You can download the specific tar file using the hf_hub_download function from huggingface_hub. This is more efficient than cloning the entire repository if you only need the archive.
```python
from huggingface_hub import hf_hub_download

REPO_ID = "XSpaceCoderX/AD-Rallies"
FILENAME = "data.tar"

print(f"Downloading {FILENAME}...")
local_path = hf_hub_download(
    repo_id=REPO_ID,
    filename=FILENAME,
    repo_type="dataset",
)
print(f"File downloaded to: {local_path}")
```
Unpack the Dataset
Once downloaded, you can extract the contents using Python's built-in tarfile module or a system command.
Option A: Using Python (cross-platform)
This is the recommended way to ensure compatibility across Windows, macOS, and Linux.
```python
import tarfile

def extract_tar(file_path, extract_path="."):
    print(f"Extracting {file_path}...")
    with tarfile.open(file_path, "r") as tar:
        tar.extractall(path=extract_path)
    print("Extraction complete!")
```
Note: Make sure you have at least 180GB of free disk space (87 GB for the archive + 100 GB for the extracted contents).
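If you cannot spare the extra ~100 GB for a full extraction, the archive can also be read member by member without unpacking it. The sketch below is an assumption-laden alternative (the function name and the `limit` parameter are my own); it streams each .npz out of the tar and parses it in memory:

```python
import io
import tarfile
import numpy as np

def iter_trajectories(tar_path, limit=None):
    """Yield (filename, {array_name: array}) pairs straight from the archive.

    Avoids extracting the full 100 GB to disk; `limit` caps how many
    members are read (useful for a quick look at the data).
    """
    count = 0
    with tarfile.open(tar_path, "r") as tar:
        for member in tar:
            if not member.name.endswith(".npz"):
                continue
            f = tar.extractfile(member)
            if f is None:  # skip directories and special entries
                continue
            with np.load(io.BytesIO(f.read())) as data:
                arrays = {k: np.asarray(data[k]) for k in data.files}
            yield member.name, arrays
            count += 1
            if limit is not None and count >= limit:
                break
```

For example, `next(iter_trajectories("data.tar", limit=1))` would return the first rally's name and arrays without touching the rest of the archive. Note that iterating a tar is sequential, so this is best suited for one-pass processing rather than random access.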