Paraphrases of the subset of the MMLU dataset with at most 500 tokens. We train models on both the paraphrases and the original (non-paraphrased) questions to increase robustness.
Róbert Belanec
rbelanec
AI & ML interests
Parameter-Efficient Fine-Tuning, Multi-Task Transfer-Learning, Model Merging, Efficient Training
Recent Activity
updated a dataset kinit/peft-factory 5 days ago
published a dataset kinit/peft-factory 5 days ago
updated a model rbelanec/train_codealpacapy_101112_1770438262 18 days ago