
Dataset Card for "huggingartists/mayday-twn"

Dataset Summary

A lyrics dataset parsed from Genius. It is designed for generating lyrics with HuggingArtists. The model is available here.

Supported Tasks and Leaderboards

More Information Needed

Languages

en

How to use

You can load this dataset directly with the datasets library:

from datasets import load_dataset

dataset = load_dataset("huggingartists/mayday-twn")

Dataset Structure

An example of 'train' looks as follows.

(The example was too long and has been cropped.)

{
    "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}

Data Fields

The data fields are the same among all splits.

  • text: a string feature.

Data Splits

train  validation  test
63     -           -

The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:

from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/mayday-twn")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the texts at 90% and 97%, yielding a 90/7/3 split.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
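To sanity-check the proportions without downloading anything, the same index arithmetic can be run on a stand-in list of 63 items (the size of the 'train' split above); the list contents here are placeholders, not real lyrics:

```python
# Toy stand-in for the 63 lyric strings in the real 'train' split.
texts = [f"song {i}" for i in range(63)]

train_percentage = 0.9
validation_percentage = 0.07

# Same cut points as the snippet above: int() truncates, so the
# test split absorbs any rounding remainder.
cut1 = int(len(texts) * train_percentage)
cut2 = int(len(texts) * (train_percentage + validation_percentage))

train = texts[:cut1]
validation = texts[cut1:cut2]
test = texts[cut2:]

print(len(train), len(validation), len(test))  # 56 5 2
```

Note that with 63 examples the nominal 90/7/3 split comes out as 56/5/2 because of integer truncation.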

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2025
}

About

Built by Aleksey Korshuk

For more details, visit the project repository.