Publish 3 shards CC-MAIN-2026-12/61407–61409
- README.md +10 -10
- data/CC-MAIN-2026-12/06/14/061407.parquet +3 -0
- data/CC-MAIN-2026-12/06/14/061408.parquet +3 -0
- data/CC-MAIN-2026-12/06/14/061409.parquet +3 -0
- stats.csv +6 -3

README.md CHANGED
@@ -32,15 +32,15 @@ configs:
 **Open Markdown** is a large-scale web text dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown, and packages the result into Parquet files with useful WARC metadata for traceability.
 
-The dataset currently includes crawl **CC-MAIN-2026-12** with **649,
+The dataset currently includes crawl **CC-MAIN-2026-12** with **649,979,467 documents across 37604 shards**. Processed 84.2 TB of raw HTML into 5.4 TB of clean Markdown — a **93.6% reduction**. We plan to add more snapshots over time.
 
 ### Live Progress
 
-Processing at **69.3 shards/hour** — 37,
+Processing at **69.3 shards/hour** — 37,604 of 100,000 done (**37.60%**)
 
 Estimated completion: **May 26, 2026** (38 days)
 
-**Current server:** 6 CPU cores, 12 GB RAM (3.
+**Current server:** 6 CPU cores, 12 GB RAM (3.9 GB available), 44 GB disk free
 
 **Memory per session:** avg 575 MB, peak 799 MB (measured via VmRSS)

@@ -182,9 +182,9 @@ No intermediate files are created — the pipeline streams from compressed WARC
 ### Compression Ratios
 
-Numbers below are actual measurements summed across all
+Numbers below are actual measurements summed across all 37604 files of CC-MAIN-2026-12 (649,979,467 pages total), projected to the full crawl of 100,000 WARC files.
 
-| Stage |
+| Stage | 37604 files (measured) | 100,000 files (projected) | Reduction |
 |---|---|---|---|
 | Raw WARC (.warc.gz, downloaded) | ~29.8 TB | ~79.2 TB | — |
 | HTML extracted (uncompressed) | 84.2 TB | ~224.0 TB | — |

@@ -193,16 +193,16 @@ Numbers below are actual measurements summed across all 37601 files of CC-MAIN-2
 The big win is HTML → Markdown conversion: the tokenizer strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 84.2 TB of uncompressed HTML down to 5.4 TB of markdown — a **93.6% reduction**. Parquet with Zstd then compresses the markdown a further 68.9%.
 
-End to end: ~29.8 TB of raw gzipped WARCs becomes **1.7 TB of Parquet** — a **94.3% total reduction** — containing 649,
+End to end: ~29.8 TB of raw gzipped WARCs becomes **1.7 TB of Parquet** — a **94.3% total reduction** — containing 649,979,467 clean markdown documents.
 
 ### Processing Times
 
-Pipeline timings across
+Pipeline timings across 37604 shards of CC-MAIN-2026-12:
 
 ```
-Download (raw WARC)                 ████████████░░░░░░░░░░░░ 170h
-Convert (HTML → Markdown → Parquet) ████████████████████████ 336h
-Publish (HuggingFace)               ███████░░░░░░░░░░░░░░░░░ 107h
+Download (raw WARC)                 ████████████░░░░░░░░░░░░ 170h 42m 41s
+Convert (HTML → Markdown → Parquet) ████████████████████████ 336h 23m 36s
+Publish (HuggingFace)               ███████░░░░░░░░░░░░░░░░░ 107h 4m 36s
 ```
 
 ### Dataset Charts
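The figures quoted in the README hunks above are internally consistent. A quick sketch re-deriving the two headline reduction percentages and the completion estimate; every input number is copied from the diff, nothing here is a new measurement:

```python
import math

# Figures copied from the README diff above.
html_tb, markdown_tb = 84.2, 5.4     # uncompressed HTML -> clean Markdown
raw_warc_tb, parquet_tb = 29.8, 1.7  # gzipped WARCs -> Zstd Parquet

html_to_md = 1 - markdown_tb / html_tb     # quoted: 93.6% reduction
end_to_end = 1 - parquet_tb / raw_warc_tb  # quoted: 94.3% total reduction

# Remaining shards at the quoted throughput give the quoted ETA.
done, total, rate = 37_604, 100_000, 69.3  # shards done / total, shards per hour
days_left = math.ceil((total - done) / rate / 24)  # quoted: 38 days

print(f"{html_to_md:.1%}  {end_to_end:.1%}  {days_left} days")  # 93.6%  94.3%  38 days
```

The intermediate 68.9% Markdown-to-Parquet figure is not re-derived here because 1.7 TB is a rounded output; recomputing from the rounded value gives 68.5%, which only shows rounding, not an error.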
data/CC-MAIN-2026-12/06/14/061407.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a193909fac1da4fafc334ddd15ea4362e987cdb8e1ea50a9e652f101a68ca8e7
+size 50459699
data/CC-MAIN-2026-12/06/14/061408.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0215740495a7c5ef45a1035d043b1a2182f1c241f9858779be0c7c2467a259ea
+size 48233818
data/CC-MAIN-2026-12/06/14/061409.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43ea9e4f0e9b19b50e80b9499d49032cb03a782b2772c3e8bf17f02271fccb1b
+size 47280560
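The three added `.parquet` entries are Git LFS pointer files, not the data itself: the repository stores a small text stub (`version`, `oid`, `size` lines, per the Git LFS pointer spec) and LFS resolves it to the real blob on checkout. A minimal sketch of parsing that stub; `parse_lfs_pointer` is a hypothetical helper, not part of any LFS tooling:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict entry."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the blob's byte count
    return fields

# Pointer copied verbatim from the 061407 shard above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a193909fac1da4fafc334ddd15ea4362e987cdb8e1ea50a9e652f101a68ca8e7
size 50459699
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256:<hex digest of the blob>
print(info["size"])  # 50459699
```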
stats.csv CHANGED
@@ -25633,6 +25633,9 @@ CC-MAIN-2026-12,61400,17004,2409195306,148845877,45828048,2026-04-18T14:39:32Z,2
 CC-MAIN-2026-12,61401,17200,2445003244,157808446,48558380,2026-04-18T14:41:41Z,25,40,0,12,577
 CC-MAIN-2026-12,61402,17015,2415049544,148191835,46567742,2026-04-18T14:41:41Z,23,41,0,12,599
 CC-MAIN-2026-12,61403,17031,2416905731,153556391,46642961,2026-04-18T14:41:41Z,23,38,0,12,591
-CC-MAIN-2026-12,61404,17343,2464318370,159219072,50304763,2026-04-18T14:43:51Z,23,36,0,
-CC-MAIN-2026-12,61405,17191,2443906488,152915535,48890997,2026-04-18T14:43:51Z,23,40,0,
-CC-MAIN-2026-12,61406,17225,2402785025,156273576,48050290,2026-04-18T14:43:51Z,25,43,0,
+CC-MAIN-2026-12,61404,17343,2464318370,159219072,50304763,2026-04-18T14:43:51Z,23,36,0,14,535
+CC-MAIN-2026-12,61405,17191,2443906488,152915535,48890997,2026-04-18T14:43:51Z,23,40,0,14,587
+CC-MAIN-2026-12,61406,17225,2402785025,156273576,48050290,2026-04-18T14:43:51Z,25,43,0,14,592
+CC-MAIN-2026-12,61407,17277,2492981150,163749026,50459699,2026-04-18T14:46:11Z,33,46,0,0,601
+CC-MAIN-2026-12,61408,17335,2436895586,151875133,48233818,2026-04-18T14:46:11Z,27,49,0,0,600
+CC-MAIN-2026-12,61409,17391,2448566554,151440978,47280560,2026-04-18T14:46:11Z,29,48,0,0,606
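The stats.csv columns are not named anywhere in this diff, so their meanings are mostly guesses. One mapping is verifiable from this commit alone: the sixth column of the three new rows (50459699, 48233818, 47280560) matches the LFS pointer `size` of the corresponding published shard byte for byte, identifying it as the Parquet file size. A sketch of that cross-check:

```python
import csv
import io

# Rows copied from the stats.csv hunk above.
rows = """CC-MAIN-2026-12,61407,17277,2492981150,163749026,50459699,2026-04-18T14:46:11Z,33,46,0,0,601
CC-MAIN-2026-12,61408,17335,2436895586,151875133,48233818,2026-04-18T14:46:11Z,27,49,0,0,600
CC-MAIN-2026-12,61409,17391,2448566554,151440978,47280560,2026-04-18T14:46:11Z,29,48,0,0,606
"""

# Byte sizes from the three Git LFS pointers published in this same commit.
lfs_sizes = {"61407": 50459699, "61408": 48233818, "61409": 47280560}

for crawl, shard, *rest in csv.reader(io.StringIO(rows)):
    parquet_bytes = int(rest[3])  # sixth CSV column overall
    assert parquet_bytes == lfs_sizes[shard]
    print(crawl, shard, parquet_bytes)
```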