This is a collection of 64 language models, each with approximately 1B parameters, trained on different random mixtures of data. This project aims to validate the generalization capabilities of the RegMix approach (https://huggingface.co/papers/2407.01492) from small-scale (e.g., 1M parameters) to large-scale (e.g., 1B parameters) models.
## Key Features

- **Model Size**: 64 separate models, each with ~1B parameters
- **Training Data**: random data mixtures drawn from the RegMix-Data dataset
- **Purpose**: to validate the effectiveness of RegMix at identifying high-performing data mixtures
## Dataset

The models were trained on the RegMix-Data dataset, which is derived from The Pile and split into per-domain subsets.
## Training Hyperparameters

| Hyperparameter | Value |
| --- | --- |
| Batch Size | 1M tokens |
| Learning Rate | 4e-4 |
| Minimum Learning Rate | 1e-5 |
| Learning Rate Schedule | Cosine |
| Warmup Ratio | 4% |
| Total Tokens | 25B |
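The step counts implied by the table can be derived from the token budget. As a minimal sketch (the derived step numbers and the exact warmup shape are assumptions, not stated in the card), a cosine schedule with linear warmup over these values looks like:

```python
import math

# Values from the table above; step counts are derived, not stated in the card.
TOTAL_TOKENS = 25_000_000_000   # 25B tokens
BATCH_TOKENS = 1_000_000        # 1M tokens per batch
PEAK_LR, MIN_LR = 4e-4, 1e-5
WARMUP_RATIO = 0.04

total_steps = TOTAL_TOKENS // BATCH_TOKENS      # 25,000 optimizer steps
warmup_steps = int(total_steps * WARMUP_RATIO)  # 1,000 warmup steps

def lr_at(step: int) -> float:
    """Cosine decay from PEAK_LR to MIN_LR after a linear warmup (assumed shape)."""
    if step < warmup_steps:
        return PEAK_LR * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))
```

At step 0 the rate is 0, it peaks at 4e-4 after warmup, and decays to 1e-5 at the final step.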
## How to Load a Model

You can load any model by specifying the corresponding branch with the Hugging Face Transformers library:
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("sail/data-mixture-random-1b", revision="model-index-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-random-1b", revision="model-index-1")
```
## Data Mixture

The specific data mixture used to train each 1B model can be found in the `train_config.yaml` file in the corresponding model branch.
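The exact schema of `train_config.yaml` is not reproduced in this card, but a mixture section is typically a flat mapping of domain names to sampling weights. The sketch below parses a hypothetical fragment of that shape; the domain names, weights, and `train_data` key are all illustrative assumptions:

```python
# Hypothetical fragment of a train_config.yaml mixture section; the real
# schema and domain names in each model branch may differ.
SAMPLE_CONFIG = """\
train_data:
  pile_cc: 0.30
  github: 0.20
  arxiv: 0.15
  wikipedia_en: 0.35
"""

def parse_mixture(text: str) -> dict:
    """Minimal parser for flat 'domain: weight' lines like the sample above."""
    weights = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" in line and not line.endswith(":"):
            key, value = line.split(":")
            weights[key.strip()] = float(value)
    return weights

mixture = parse_mixture(SAMPLE_CONFIG)
assert abs(sum(mixture.values()) - 1.0) < 1e-9  # mixture weights should sum to 1
```

In practice you would fetch the file from the desired branch (e.g. with `huggingface_hub.hf_hub_download` and a `revision` argument) and read it with a YAML parser instead of this hand-rolled one.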
## Model Variants

To access a different model variant, simply change the `revision` parameter in the `from_pretrained` method to the desired model index (e.g., `"model-index-2"`, `"model-index-3"`); the maximum index is 64.
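Since the branches follow a fixed naming pattern, the full set of revisions can be generated programmatically. A small sketch (the repository ID and branch names are taken from this card; the commented-out load requires network access):

```python
# The card states variants live in branches "model-index-1" ... "model-index-64".
revisions = [f"model-index-{i}" for i in range(1, 65)]

# Loading one variant (requires network access and the transformers library):
# from transformers import AutoModel
# model = AutoModel.from_pretrained("sail/data-mixture-random-1b", revision=revisions[-1])
```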
## Usage Notes

- These models are primarily intended for research purposes.
- Performance may vary depending on the specific task and domain.
## Citation

If you use these models in your research, please cite the RegMix paper:
```bibtex
@article{liu2024regmix,
  title={RegMix: Data Mixture as Regression for Language Model Pre-training},
  author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min},
  journal={arXiv preprint arXiv:2407.01492},
  year={2024}
}
```
For more information about the RegMix methodology and its applications, please refer to the original paper.
## Performance

We evaluated each model with lm-evaluation-harness. The performance metric for each task is the average of the 0-shot through 5-shot `acc_norm` (normalized accuracy, when available) or `acc` (accuracy) scores.
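The averaging described above is a plain mean over the six shot settings. A minimal sketch with hypothetical scores (the numbers below are illustrative, not results from this project):

```python
# Hypothetical acc_norm scores for one task at 0- through 5-shot; real numbers
# come from lm-evaluation-harness runs, not from this card.
shot_scores = {0: 0.412, 1: 0.430, 2: 0.441, 3: 0.445, 4: 0.448, 5: 0.450}

# The reported per-task metric is the plain mean over the six shot settings.
task_score = sum(shot_scores.values()) / len(shot_scores)
```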