SARD: Synthetic Arabic Recognition Dataset
Overview
SARD (Synthetic Arabic Recognition Dataset) is a large-scale, synthetically generated dataset designed for training and evaluating Optical Character Recognition (OCR) models for Arabic text. This dataset addresses the critical need for comprehensive Arabic text recognition resources by providing controlled, diverse, and scalable training data that simulates real-world book layouts.
Key Features
- Massive Scale: 2,515,082 document images containing 794.6 million words
- Typographic Diversity: Six distinct Arabic fonts (Amiri, Sakkal Majalla, Arial, Calibri, Scheherazade New, and Traditional Arabic)
- Structured Formatting: Designed to mimic real-world book layouts with consistent typography
- Clean Data: Synthetically generated with no scanning artifacts, blur, or distortions
- Content Diversity: Text spans multiple domains including culture, literature, Shariah, social topics, and more
Dataset Structure
The dataset is divided into six splits, one per font:
- Amiri: ~488,585 document images
- Sakkal Majalla: ~488,585 document images
- Arial: ~321,683 document images
- Calibri: ~321,683 document images
- Scheherazade New: ~572,863 document images
- Traditional Arabic: ~321,683 document images
📋 Sample Images
Each split contains data specific to a single font with the following attributes:
- image_name: Unique identifier for each image
- chunk: The text content associated with the image
- font_name: The font used in text rendering
- image_base64: Base64-encoded image representation
- sample_id: Unique ID
- article_link: Link to the source article
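As a minimal sketch of how the image_base64 field maps back to raw image bytes (using a synthetic stand-in record here rather than a real dataset row, so the snippet runs offline with only the standard library):

```python
import base64

# Synthetic stand-in for one dataset record; real records carry a
# Base64-encoded page image in 'image_base64'. The byte payload and
# field values below are illustrative, not actual dataset content.
raw_bytes = b"\x89PNG\r\n\x1a\n...image data..."
record = {
    "image_name": "amiri_000001.png",
    "chunk": "نص عربي",
    "font_name": "Amiri",
    "image_base64": base64.b64encode(raw_bytes).decode("ascii"),
}

# Decoding recovers the original image bytes exactly.
decoded = base64.b64decode(record["image_base64"])
assert decoded == raw_bytes
print(record["image_name"], record["font_name"], len(decoded))
```

The decoded bytes can then be handed to any image library (e.g., `PIL.Image.open` on a `BytesIO` wrapper), as the usage example below the tables demonstrates.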
Content Distribution
| Category | Number of Articles |
|---|---|
| Culture | 13,253 |
| Fatawa & Counsels | 8,096 |
| Literature & Language | 11,581 |
| Bibliography | 26,393 |
| Publications & Competitions | 1,123 |
| Shariah | 46,665 |
| Social | 8,827 |
| Translations | 443 |
| Muslim's News | 16,725 |
| Total Articles | 133,105 |
Font Specifications
| Font | Words Per Page | Font Size |
|---|---|---|
| Sakkal Majalla | 50–300 | 14 pt |
| Arial | 50–500 | 12 pt |
| Calibri | 50–500 | 12 pt |
| Amiri | 50–300 | 12 pt |
| Scheherazade New | 50–250 | 12 pt |
| Traditional Arabic | 50–500 | 12 pt |
Page Layout
| Specification | Measurement |
|---|---|
| Left Margin | 0.9 inches |
| Right Margin | 0.9 inches |
| Top Margin | 1.0 inch |
| Bottom Margin | 1.0 inch |
| Gutter Margin | 0.2 inches |
| Page Width | 8.27 inches (A4) |
| Page Height | 11.69 inches (A4) |
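From the layout table above, the usable text area can be derived (a quick sanity check; the gutter is assumed here to add to the binding-side margin, and the 300 DPI rendering resolution is an assumption, not a stated dataset property):

```python
# Derive the printable text area from the A4 page layout above.
PAGE_W, PAGE_H = 8.27, 11.69            # inches (A4)
LEFT, RIGHT, TOP, BOTTOM = 0.9, 0.9, 1.0, 1.0
GUTTER = 0.2                            # assumed to add to the binding side

text_w = PAGE_W - LEFT - RIGHT - GUTTER
text_h = PAGE_H - TOP - BOTTOM
print(f"text area: {text_w:.2f} x {text_h:.2f} inches")  # 6.27 x 9.69

# At an assumed rendering resolution of 300 DPI this corresponds to:
dpi = 300
print(f"≈ {round(text_w * dpi)} x {round(text_h * dpi)} pixels")
```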
Usage Example
```python
from datasets import load_dataset
import base64
from io import BytesIO
from PIL import Image
import matplotlib.pyplot as plt

# Load dataset with streaming enabled
ds = load_dataset("riotu-lab/SARD", streaming=True)
print(ds)

# Iterate over a specific font split (e.g., Amiri)
for sample in ds["Amiri"]:
    image_name = sample["image_name"]
    chunk = sample["chunk"]  # Arabic text transcription
    font_name = sample["font_name"]

    # Decode Base64 image
    image_data = base64.b64decode(sample["image_base64"])
    image = Image.open(BytesIO(image_data))

    # Display the image
    plt.figure(figsize=(10, 10))
    plt.imshow(image)
    plt.axis('off')
    plt.title(f"Font: {font_name}")
    plt.show()

    # Print the details
    print(f"Image Name: {image_name}")
    print(f"Font Name: {font_name}")
    print(f"Text Chunk: {chunk}")

    # Break after one sample for testing
    break
```
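To materialize decoded pages as files on disk, a follow-up sketch (shown here with a synthetic Base64 payload in place of a real record, so it runs offline with only the standard library):

```python
import base64
from pathlib import Path
from tempfile import TemporaryDirectory

def save_page(record: dict, out_dir: Path) -> Path:
    """Decode a record's Base64 image field and write it under its image_name."""
    out_path = out_dir / record["image_name"]
    out_path.write_bytes(base64.b64decode(record["image_base64"]))
    return out_path

# Synthetic record standing in for a real SARD row.
payload = b"fake-image-bytes"
record = {
    "image_name": "amiri_000001.png",
    "image_base64": base64.b64encode(payload).decode("ascii"),
}

with TemporaryDirectory() as tmp:
    path = save_page(record, Path(tmp))
    assert path.read_bytes() == payload
    print(f"wrote {path.name} ({path.stat().st_size} bytes)")
```

In practice `record` would be each `sample` yielded by the streaming loop above.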
Applications
SARD is designed to support various Arabic text recognition tasks:
- Training and evaluating OCR models for Arabic text
- Developing vision-language models for document understanding
- Fine-tuning existing OCR models for better Arabic script recognition
- Benchmarking OCR performance across different fonts and layouts
- Research in Arabic natural language processing and computer vision
Acknowledgments
The authors thank Prince Sultan University for their support in developing this dataset.
Citation
If you use SARD in your work, please cite the following paper:
APA:
Nacar, O., Al-Habashi, Y., Sibaee, S., Ammar, A., & Boulila, W. (2025). SARD: A Large-Scale Synthetic Arabic OCR Dataset for Book-Style Text Recognition. arXiv preprint arXiv:2505.24600.
https://arxiv.org/abs/2505.24600
BibTeX:
```bibtex
@misc{nacar2025sard,
      title={SARD: A Large-Scale Synthetic Arabic OCR Dataset for Book-Style Text Recognition},
      author={Omer Nacar and Yasser Al-Habashi and Serry Sibaee and Adel Ammar and Wadii Boulila},
      year={2025},
      eprint={2505.24600},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.24600},
}
```