# WaveletLM
WaveletLM is a fully causal, attention-free language model. It mixes tokens through a learned lifting wavelet decomposition, a Fast Walsh-Hadamard Transform (FWHT), per-scale gated spectral mixing with SwiGLU activations, an inverse FWHT, and wavelet reconstruction. Combined with expanded MLPs and sparse product-key memory, this yields an architecture with no attention layers and O(n log n) scaling in sequence length.
Full code, training details, ablations, and documentation: [github.com/ramongougis/WaveletLM](https://github.com/ramongougis/WaveletLM)
## Results
| Dataset | Params | Perplexity | BPB (bits per byte) |
|---|---|---|---|
| WikiText-103 | 883M | 23.8 | 1.0140 |
| PG-19 (1 epoch) | 808M | 27.4 | 1.0853 |
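For reference, perplexity and BPB are linked by the standard conversions bits/token = log2(PPL) and BPB = bits/token ÷ bytes/token. A quick sanity check on the WikiText-103 row (assuming the table uses these usual definitions; the exact bytes-per-token ratio depends on the tokenizer):

```python
import math

# Standard conversions (assumption: the table's BPB is the usual
# bits-per-byte definition; bytes/token depends on the tokenizer).
bits_per_token = math.log2(23.8)           # from the WikiText-103 perplexity
bytes_per_token = bits_per_token / 1.0140  # implied by the reported BPB
print(f"{bits_per_token:.3f} bits/token, ~{bytes_per_token:.2f} bytes/token")
```

This gives roughly 4.57 bits per token, implying about 4.5 bytes per token for the tokenizer used.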
## How to Use
```python
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint
ckpt_path = hf_hub_download(repo_id="ragou19/WaveletLM", filename="best_model.pt")
```
Then follow the instructions in the GitHub repo to load and run: https://github.com/ramongougis/WaveletLM
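The general `torch.load` pattern looks like the sketch below. The real model class and checkpoint key names are defined in the GitHub repo, so `"model_state_dict"` here is only an assumption; a dummy checkpoint is round-tripped to keep the example self-contained:

```python
import os
import tempfile
import torch

# Loading sketch only: the actual model class and checkpoint layout come
# from the WaveletLM repo, so the "model_state_dict" key is an assumption.
# We save and reload a dummy checkpoint to demonstrate the mechanics.
src = torch.nn.Linear(4, 4)
path = os.path.join(tempfile.mkdtemp(), "best_model.pt")
torch.save({"model_state_dict": src.state_dict()}, path)

ckpt = torch.load(path, map_location="cpu")
model = torch.nn.Linear(4, 4)
model.load_state_dict(ckpt["model_state_dict"])
model.eval()  # switch to inference mode before generating
```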
## Architecture
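As one concrete illustration of the O(n log n) mixing primitive named above, here is a minimal, generic Fast Walsh-Hadamard Transform in PyTorch. This is a textbook unnormalized FWHT, not the repo's implementation, and the learned lifting wavelet and gating steps around it are omitted:

```python
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    """Unnormalized Fast Walsh-Hadamard Transform over the last dimension.
    Requires a power-of-two length; O(n log n) butterfly passes."""
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    y = x
    h = 1
    while h < n:
        # Pair each block's first h entries (a) with its next h entries (b).
        y = y.reshape(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        y = torch.cat((a + b, a - b), dim=-1).reshape(*x.shape)
        h *= 2
    return y

# The unnormalized FWHT is its own inverse up to a factor of n:
x = torch.randn(2, 8)
assert torch.allclose(fwht(fwht(x)), 8 * x, atol=1e-5)
```

The butterfly structure is what gives the log-linear cost: each of the log2(n) passes touches every element once.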
### Training
- Trained on a single RTX 5090 for 5 epochs
- WikiText-103: best PPL of 23.749 with mean PPL of 23.818 across 3 seeds.
- PG-19: PPL of 27.40 (single seed).
- VRAM required: 18.3 GB.
- Time to train: 16 hours 15 minutes.
### Generation
- VRAM: 5.0 GB by default, 4.5 GB with `--ptq8` enabled.
- Can set `compile: false` to save 0.5-1 GB, but it's slower.
- 28.8 tokens/s on a 5090 by default.
- Future enhancements are expected to increase generation speed by up to 120%.
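The memory saving from `--ptq8` is consistent with 8-bit post-training quantization of linear layers. A generic sketch of that idea using PyTorch's built-in dynamic quantization follows; this is an illustration of the technique, not the repo's actual `--ptq8` code path, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Generic 8-bit post-training quantization sketch: dynamic quantization
# replaces Linear layers with int8-weight equivalents. Illustrative only;
# not the WaveletLM repo's implementation of --ptq8.
model = nn.Sequential(nn.Linear(256, 1024), nn.SiLU(), nn.Linear(1024, 256)).eval()
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
with torch.no_grad():
    y = qmodel(x)
```

Weights are stored in int8 (roughly a 4x reduction versus fp32 for those layers), with activations quantized on the fly at inference time.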
## Logs
See `runs.md` for the full training history.
## License
Apache 2.0. See LICENSE.