---
language:
  - en
license: cc0-1.0
tags:
  - audio
  - text-to-speech
  - mimi
  - ljspeech
  - speech-synthesis
  - codec
task_categories:
  - text-to-speech
pretty_name: LJSpeech — Kyutai Mimi Encoded
size_categories:
  - 10K<n<100K
---

# LJSpeech — Kyutai Mimi Encoded

LJSpeech 1.1, pre-encoded with the Kyutai Mimi neural audio codec.

Instead of raw waveforms, every utterance is stored as a compact matrix of discrete codec tokens. This makes the dataset directly usable in any language-model-style audio generation pipeline, with no GPU encoder needed at training time.
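To feed an `(8, L)` grid of codec tokens to an autoregressive language model, the codebooks must be serialized into a single stream. Below is a minimal sketch of frame-major interleaving, the simplest serialization; note that production systems (e.g. Moshi, MusicGen) typically use a codebook-delay pattern instead. The function name is illustrative, not part of this dataset.

```python
# Sketch: flatten an (8, L) grid of Mimi codes into one token stream by
# frame-major interleaving. Real codec LMs often use a codebook-delay
# pattern instead; this shows only the basic idea.

def interleave_codes(codes):
    """codes: 8 rows (one per codebook), each of length L -> flat list of 8*L ints."""
    n_codebooks = len(codes)
    n_frames = len(codes[0])
    flat = []
    for t in range(n_frames):            # walk frames left to right
        for q in range(n_codebooks):     # emit all 8 codebook tokens per frame
            flat.append(codes[q][t])
    return flat

# Tiny example: 8 codebooks, 3 frames, token value q * 10 + t
example = [[q * 10 + t for t in range(3)] for q in range(8)]
flat = interleave_codes(example)
assert len(flat) == 8 * 3
assert flat[:8] == [0, 10, 20, 30, 40, 50, 60, 70]  # frame 0, codebooks 0..7
```

The inverse mapping (de-interleave back to `(8, L)` before decoding with Mimi) follows the same index arithmetic.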

## What's inside

```
manifest.jsonl       # metadata — one JSON record per utterance
shards/
├── shard_0000.pt    # packed dict of { idx -> (8, L) int16 code tensor }
├── shard_0001.pt
└── ...
```

Each `manifest.jsonl` record:

```json
{
  "idx": 0,
  "text": "Printing, in the only sense with which we are at present concerned...",
  "codes_file": "shards/shard_0000.pt:0",
  "speaker_id": "LJ",
  "n_frames": 312
}
```
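The `codes_file` field is a pointer of the form `path:index`. A minimal sketch of resolving it, assuming the shard loads as the `{ idx -> tensor }` dict described above (the helper name is illustrative; `torch` is left out so the sketch stays dependency-free):

```python
# Sketch: resolve a `codes_file` pointer like "shards/shard_0000.pt:0"
# into (shard path, index). Loading the shard itself would use
# torch.load(path), which returns the { idx -> (8, L) int16 tensor } dict.

def parse_codes_pointer(codes_file):
    """Split 'shards/shard_0000.pt:0' into ('shards/shard_0000.pt', 0)."""
    path, _, idx = codes_file.rpartition(":")
    return path, int(idx)

record = {"idx": 0, "codes_file": "shards/shard_0000.pt:0", "n_frames": 312}
path, idx = parse_codes_pointer(record["codes_file"])
assert path == "shards/shard_0000.pt" and idx == 0
# codes = torch.load(path)[idx]   # would yield an (8, 312) int16 tensor
```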

## Dataset details

| Field | Value |
|---|---|
| Source | LJSpeech 1.1 |
| Speaker | Single female speaker |
| Utterances | 13,100 |
| Total duration | ~24 hours |
| Codec | Kyutai Mimi |
| Codec sample rate | 24,000 Hz |
| Codec frame rate | 12.5 fps |
| Codebooks | 8 |
| Token dtype | int16 |
| License | CC0 1.0 (public domain equivalent) |
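A few back-of-envelope numbers implied by the figures above, worked through in a short sketch (the corpus-wide token count is an estimate from the ~24-hour total, not an exact count):

```python
# Token-rate and duration arithmetic from the dataset details.

FRAME_RATE = 12.5     # Mimi frames per second
N_CODEBOOKS = 8       # tokens per frame

tokens_per_second = FRAME_RATE * N_CODEBOOKS
assert tokens_per_second == 100.0

# Duration of an utterance from its n_frames field, e.g. the sample record:
n_frames = 312
duration_s = n_frames / FRAME_RATE
assert duration_s == 24.96

# Rough token budget for the whole ~24-hour corpus:
corpus_tokens = 24 * 3600 * tokens_per_second
assert corpus_tokens == 8_640_000.0
```

At 100 tokens per second, even full utterances stay short enough for ordinary transformer context windows.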

## What you can use this for

- Language-model-style TTS (autoregressive token prediction)
- Codec language model pre-training / fine-tuning
- Voice style transfer research
- Audio tokenization benchmarks
- Any task that benefits from a clean, single-speaker English speech corpus in discrete token form