# prepretraining-training-samples-v1
First 10 training sequences (4096 tokens each) for each of the 4 conditions, decoded back to human-readable text. Shows exactly what the model sees during training. Use this to verify data quality, ordering, and condition differentiation.
## Dataset Info
- Rows: 40 (4 conditions × 10 sequences each)
- Columns: 8
## Columns
| Column | Type | Description |
|---|---|---|
| condition | Value('string') | Data scheduling condition: baseline, front-load, constant-mix, or anneal |
| condition_description | Value('string') | What this condition feeds the model during its first phase |
| sequence_index | Value('int64') | Index of this sequence within the condition (0 = very first thing the model sees) |
| text | Value('string') | Decoded text of the 4096-token training sequence (human-readable) |
| n_tokens | Value('int64') | Number of tokens in this sequence (always 4096) |
| n_chars | Value('int64') | Character count of the decoded text |
| n_documents | Value('int64') | Number of documents packed into this sequence (separated by EOS tokens) |
| source_files | Value('string') | Names of the source .npy files this sequence was drawn from (first 3 listed) |
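The schema above is enough to pull out, for example, the very first sequence a given condition sees. A minimal sketch using the documented columns (the condition name `baseline` is one of the four values of the `condition` column; the repo id and split follow the Usage section below):

```python
from datasets import load_dataset

ds = load_dataset("reasoning-degeneration-dev/prepretraining-training-samples-v1", split="train")

# Pull the very first sequence the model sees under one condition
# (sequence_index == 0 is documented as "the very first thing the model sees").
first = next(
    row for row in ds
    if row["condition"] == "baseline" and row["sequence_index"] == 0
)
print(first["condition_description"])
print(first["text"][:500])  # preview the opening of the 4096-token sequence
```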
## Generation Parameters

```json
{
  "script_name": "analysis/sample_training_batches.py",
  "model": "N/A (data inspection, not model output)",
  "description": "First 10 training sequences (4096 tokens each) for each of the 4 conditions, decoded back to human-readable text. Shows exactly what the model sees during training. Use this to verify data quality, ordering, and condition differentiation.",
  "hyperparameters": {
    "sequence_length": 4096,
    "n_sequences_per_condition": 10,
    "tokenizer": "allenai/gpt-neox-olmo-dolma-v1_5"
  },
  "input_datasets": [
    "reasoning-degeneration-dev/prepretraining-gold-v1",
    "reasoning-degeneration-dev/prepretraining-web-v1"
  ],
  "experiment_id": "prepretraining",
  "artifact_type": "input_data",
  "visualizer_type": "table",
  "artifact_group": "data-inspection"
}
```
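These parameters imply the generation recipe: slice the first 10 sequences of 4096 token IDs out of the packed .npy shards and decode them with the named tokenizer. The generation script itself is not reproduced here, so the following is only a sketch of that step; the flat 1-D layout of the .npy file is an assumption, and the EOS-counting heuristic mirrors the documented `n_documents` column rather than the actual script logic.

```python
import numpy as np
from transformers import AutoTokenizer

SEQ_LEN = 4096  # sequence_length from the parameters above
N_SEQS = 10     # n_sequences_per_condition

tokenizer = AutoTokenizer.from_pretrained("allenai/gpt-neox-olmo-dolma-v1_5")

# Assumption: each shard is a flat 1-D array of token IDs that the
# training loop slices into back-to-back 4096-token sequences.
tokens = np.load("fineweb_00.npy")

for i in range(N_SEQS):
    seq = tokens[i * SEQ_LEN : (i + 1) * SEQ_LEN]
    text = tokenizer.decode(seq.tolist())
    # Documents are packed and EOS-separated, so the EOS count
    # approximates the n_documents value for this sequence.
    n_docs = int((seq == tokenizer.eos_token_id).sum()) + 1
    print(f"sequence {i}: {len(seq)} tokens, ~{n_docs} documents, {len(text)} chars")
```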
## Experiment Documentation
For complete experiment details, see https://github.com/Zayne-sprague/SC-Research-Notes/tree/main/experiments/prepretraining
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("reasoning-degeneration-dev/prepretraining-training-samples-v1", split="train")
print(f"Loaded {len(dataset)} rows")
```
This dataset is tracked in reasoning-degeneration-dev/PROJECT-MANIFEST