MCTC Benchmark Collection

This repository provides a standardized, storage-optimized collection of classic datasets for Multi-class Text Classification (MCTC). Designed for robust evaluation, the collection features a 10-fold cross-validation setup and specialized metadata for analyzing document and label distributions.

🚀 Key Features

  • Standardized Benchmarks: Includes curated versions of ACM, OHSUMED, REUTERS, and TWITTER.
  • 10-Fold Cross-Validation: Every dataset is pre-split into 10 independent folds, supporting statistically sound comparisons and straightforward reproducibility.
  • Efficient Index-Based Architecture: A single samples.pkl stores the raw content, while splits are managed via lightweight index files (.pkl). This ensures high consistency across folds while optimizing storage.
  • Head/Tail Categorization: Includes specialized metadata (label_cls.pkl and text_cls.pkl) to distinguish Head from Tail labels and documents, enabling detailed performance analysis on imbalanced data.
  • Spherical K-Means Clustering: Each fold includes pre-computed cluster assignments (text_cluster.h5) and cluster centroids (cluster_centroids.h5), generated via Spherical K-Means on text embeddings. These enable semantic grouping and cluster-aware evaluation.
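With 10 pre-split folds, a typical evaluation reports the mean and standard deviation of a metric across folds. A minimal sketch of that aggregation step; the per-fold scores below are illustrative placeholders, not results from this benchmark:

```python
import statistics

# Hypothetical macro-F1 scores, one per fold (fold_0 .. fold_9)
fold_scores = [0.71, 0.73, 0.70, 0.72, 0.74, 0.71, 0.69, 0.73, 0.72, 0.70]

mean = statistics.mean(fold_scores)
std = statistics.stdev(fold_scores)  # sample standard deviation across folds
print(f"macro-F1: {mean:.3f} ± {std:.3f} over {len(fold_scores)} folds")
```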

📂 Repository Structure

.
├── {DATASET_NAME}                    # ACM, OHSUMED, REUTERS, TWITTER
│   ├── samples.pkl                   # Global repository: list of dicts with {idx, text, labels}
│   ├── label_cls.pkl                 # Label classification: defines labels as 'Head' or 'Tail'
│   ├── text_cls.pkl                  # Text classification: defines samples as 'Head' or 'Tail'
│   ├── relevance_map.pkl             # Ground-truth relevance mapping for classification tasks
│   └── fold_{0..9}                   # 10 independent folds for cross-validation
│       ├── train.pkl                 # Sample indices for training
│       ├── val.pkl                   # Sample indices for validation
│       ├── test.pkl                  # Sample indices for testing
│       ├── labels_descriptions.pkl   # Specific descriptions for labels in this fold
│       ├── cluster_centroids.h5      # Spherical K-Means centroids for this fold
│       └── text_cluster.h5           # Mapping of text_idx → cluster_idx for this fold
└── README.md

πŸ› οΈ Usage

To preserve the exact directory structure and indices, download the repository with the huggingface_hub library:

from huggingface_hub import snapshot_download
import pickle

# Download the entire repository from the official MCTC link
repo_path = snapshot_download(repo_id="celsofranssa/MCTC", repo_type="dataset")

# Example: Load ACM Samples and Fold 0 Training Split
with open(f"{repo_path}/ACM/samples.pkl", "rb") as f:
    all_samples = pickle.load(f)

with open(f"{repo_path}/ACM/fold_0/train.pkl", "rb") as f:
    train_indices = pickle.load(f)

# Resolve samples
train_set = [all_samples[idx] for idx in train_indices]
print(f"Loaded {len(train_set)} samples for training.")
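Because each sample carries its labels, the label distribution of a resolved split can be inspected with a simple counter. A self-contained sketch on toy records shaped like the {idx, text, labels} entries of samples.pkl; the records themselves are made up:

```python
from collections import Counter

# Toy records mimicking the {idx, text, labels} schema of samples.pkl
train_set = [
    {"idx": 0, "text": "cache coherence protocols", "labels": ["C.1"]},
    {"idx": 1, "text": "query optimization in dbms", "labels": ["H.2"]},
    {"idx": 2, "text": "transactional memory", "labels": ["C.1", "D.1"]},
]

# Count how often each label occurs across the split
label_counts = Counter(label for sample in train_set for label in sample["labels"])
print(label_counts.most_common())
```

Sorting by frequency like this is what motivates the Head/Tail metadata described below: a few labels dominate, while many appear rarely.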

Loading Cluster Data

import h5py
import numpy as np

# Load cluster centroids for ACM, Fold 0
with h5py.File(f"{repo_path}/ACM/fold_0/cluster_centroids.h5", "r") as f:
    centroids   = f["centroids"][:]     # np.ndarray of shape (n_clusters, dim)
    n_clusters  = f.attrs["n_clusters"]
    dim         = f.attrs["dim"]
    print(f"Centroids shape: {centroids.shape}  |  n_clusters={n_clusters}, dim={dim}")

# Load text β†’ cluster assignments for ACM, Fold 0
with h5py.File(f"{repo_path}/ACM/fold_0/text_cluster.h5", "r") as f:
    text_ids    = f["text_ids"][:]      # np.ndarray of text indices
    cluster_ids = f["cluster_ids"][:]   # np.ndarray of cluster assignments

# Reconstruct the dict {text_idx: cluster_idx}
text_to_cluster = dict(zip(text_ids.tolist(), cluster_ids.tolist()))
print(f"Loaded cluster assignments for {len(text_to_cluster)} texts.")
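New or held-out embeddings can be assigned to clusters the same way the test split is: L2-normalize the vector and pick the centroid with the highest inner product. A self-contained sketch with toy 2-D vectors; the real centroids would come from cluster_centroids.h5:

```python
import numpy as np

# Toy centroids (already L2-normalized) standing in for cluster_centroids.h5 contents
centroids = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)

def assign_cluster(embedding: np.ndarray, centroids: np.ndarray) -> int:
    """Return the index of the centroid with the highest cosine similarity."""
    e = embedding / np.linalg.norm(embedding)  # L2-normalize the query vector
    return int(np.argmax(centroids @ e))       # inner product == cosine on unit vectors

print(assign_cluster(np.array([0.9, 0.1], dtype=np.float32), centroids))  # → 0
print(assign_cluster(np.array([0.2, 0.8], dtype=np.float32), centroids))  # → 1
```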

📊 Technical Specifications

Core Metadata Files

  • label_cls.pkl: Essential for analyzing model performance on infrequent labels (Tail) vs. frequent ones (Head).
  • text_cls.pkl: Categorizes documents based on their label frequency composition.
  • relevance_map.pkl: Provides the ground-truth mapping, indicating which labels are relevant for each text sample.
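The exact schemas of these pickles are not spelled out in this card, but assuming label_cls.pkl deserializes to a mapping from label to 'Head' or 'Tail', a per-group accuracy breakdown can be sketched as follows (all data here is illustrative, not taken from the benchmark):

```python
from collections import defaultdict

# Hypothetical label_cls.pkl contents: label -> 'Head' or 'Tail'
label_cls = {"C.1": "Head", "H.2": "Head", "D.1": "Tail"}

# Hypothetical (gold, predicted) label pairs for a single-label evaluation
pairs = [("C.1", "C.1"), ("H.2", "C.1"), ("D.1", "D.1"), ("D.1", "C.1")]

correct = defaultdict(int)
total = defaultdict(int)
for gold, pred in pairs:
    group = label_cls[gold]        # bucket each example by its gold label's group
    total[group] += 1
    correct[group] += int(gold == pred)

for group in ("Head", "Tail"):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
```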

Cluster Files (per fold)

  • cluster_centroids.h5: Stores the Spherical K-Means centroids fitted on the development set (train + val) of each fold.

    • Dataset centroids: float32 array of shape (n_clusters, dim).
    • Attributes: fold_idx, n_clusters, dim.
  • text_cluster.h5: Stores the cluster assignment for every text (train, val, and test) in the fold, resolved against the centroids above.

    • Dataset text_ids: integer array of sample indices (gzip-compressed).
    • Dataset cluster_ids: integer array of assigned cluster indices (gzip-compressed).
    • Attributes: fold_idx, num_texts.

Clustering Methodology

The clusters were produced using Spherical K-Means via FAISS, which minimizes Euclidean distance on L2-normalized embeddings, an operation mathematically equivalent to maximizing cosine similarity. Key design choices:

  1. Fit on development set only (train + val): centroids are never influenced by test data, preserving evaluation integrity.
  2. Predict on all splits: train, val, and test texts are all assigned to the nearest centroid via Inner Product search on normalized vectors (faiss.IndexFlatIP).
  3. Reproducibility: a fixed seed=42 and niter=32 are used uniformly across all folds and datasets.
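The collection was built with FAISS, but the procedure itself is easy to see in plain NumPy: normalize the inputs, assign each point to the centroid with the maximum inner product, and re-estimate each centroid as the normalized mean of its members. The sketch below is an illustration of that loop, not the actual pipeline; it uses a simple deterministic initialization instead of FAISS's seeded random init:

```python
import numpy as np

def spherical_kmeans(x: np.ndarray, k: int, niter: int = 32):
    """Minimal Spherical K-Means: cosine assignment, re-normalized centroid updates."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)  # L2-normalize inputs
    centroids = x[:k].copy()                          # deterministic init for illustration
    for _ in range(niter):
        assign = np.argmax(x @ centroids.T, axis=1)   # max inner product == max cosine
        for j in range(k):
            members = x[assign == j]
            if len(members):
                c = members.mean(axis=0)
                centroids[j] = c / np.linalg.norm(c)  # project back onto the unit sphere
    return centroids, assign

# Two well-separated toy groups near the unit circle
pts = np.array([[1.0, 0.05], [1.0, -0.05], [0.05, 1.0], [-0.05, 1.0]])
centroids, assign = spherical_kmeans(pts, k=2)
print(assign)  # the first two points share one cluster, the last two the other
```

The niter=32 default mirrors the setting stated above; final cluster assignment over all splits then reduces to the same inner-product search that faiss.IndexFlatIP performs.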

🎓 Citation

If you use this benchmark collection in your research, please cite our work:

@inproceedings{CelsoFranssa_2025,
  title={Muitas Classes Desbalanceadas? N{\~a}o Classifique-Ranqueie! Uma Abordagem Baseada em Retrieval-Augmented Generation (RAG)-labels para Classifica{\c{c}}{\~a}o Textual Multi-classe},
  author={Fran{\c{c}}a, Celso and Nunes, Ian and Salles, Thiago and Cunha, Washington and Jallais, Gabriel and Rocha, Leonardo and Gon{\c{c}}alves, Marcos Andr{\'e}},
  booktitle={Simp{\'o}sio Brasileiro de Banco de Dados (SBBD)},
  pages={264--277},
  year={2025},
  organization={SBC}
}