Shadowell/Kairos-small-crypto

Kronos-small fine-tuned on BTC/USDT and ETH/USDT 1-min K-lines (2024-01 to 2026-04) using Kairos. Architecture = Kronos + a 32-d exogenous bypass channel + a quantile return head.
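A quantile return head is usually trained with the pinball (quantile) loss. A minimal numpy sketch for one quantile level; the exact quantile levels used in this run are not stated in this card:

```python
import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, q: float) -> float:
    """Pinball (quantile) loss at level q in (0, 1).

    Under-prediction is weighted by q, over-prediction by (1 - q), so
    minimising it pushes y_pred toward the q-th quantile of y_true.
    """
    err = y_true - y_pred
    return float(np.mean(np.maximum(q * err, (q - 1.0) * err)))

returns = np.array([0.010, -0.020, 0.005])
# A perfect prediction has zero loss at any quantile level.
assert pinball_loss(returns, returns, 0.1) == 0.0
```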

This run keeps the original NeoQuasar/Kronos-Tokenizer-base tokenizer, matching the original Kairos-small-crypto training flow. Training data comes from the public Binance Vision spot mirror, so the five crypto-native exogenous features (funding_rate / funding_rate_z / oi_change / basis / btc_dominance) remain zero-padded; the other 27 dimensions carry real values.
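In code, building the 32-d exogenous input from spot-only data amounts to concatenating the 27 real feature dimensions with five zero pads. A minimal sketch; the layout (real dims first, crypto-native pads last) is an assumption, not a confirmed Kairos convention:

```python
import numpy as np

# The 5 crypto-native exogenous channels that stay zero-padded when
# training from the Binance Vision spot mirror (no futures/derivatives data).
CRYPTO_EXOG = ["funding_rate", "funding_rate_z", "oi_change", "basis", "btc_dominance"]
EXOG_DIM = 32

def build_exog_vector(real_feats: np.ndarray) -> np.ndarray:
    """Assemble the 32-d exogenous vector: 27 real dims + 5 zero pads.

    `real_feats` holds the 27 dimensions computed from spot K-lines;
    the ordering here is illustrative only.
    """
    n_pad = len(CRYPTO_EXOG)
    assert real_feats.shape[-1] == EXOG_DIM - n_pad
    pad = np.zeros(real_feats.shape[:-1] + (n_pad,), dtype=real_feats.dtype)
    return np.concatenate([real_feats, pad], axis=-1)

vec = build_exog_vector(np.ones(27, dtype=np.float32))
assert vec.shape == (EXOG_DIM,) and not vec[-5:].any()
```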

Results on the test set (2026-01-01 04:16:00 to 2026-04-16 23:30:00, 304,710 1-min bars):

| horizon | model | hit_rate | rank_ic | ICIR |
|---------|-----------|--------|--------|--------|
| h1 | baseline | 50.58% | +0.001 | +0.184 |
| h1 | finetuned | 49.53% | -0.012 | +0.029 |
| h5 | baseline | 49.87% | -0.019 | -0.302 |
| h5 | finetuned | 50.51% | +0.010 | +0.060 |
| h30 | baseline | 49.04% | -0.026 | -0.140 |
| h30 | finetuned | 51.68% | +0.050 | +0.325 |
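For reference, these metrics are conventionally computed as follows: hit_rate is the share of bars where the predicted and realised return agree in sign, rank IC is the Spearman correlation between predictions and realised returns per period, and ICIR = mean(IC) / std(IC) over the per-period IC series. A numpy-only sketch (ranks without tie-averaging; the per-period grouping used for this table is an assumption, as the card does not state the aggregation window):

```python
import numpy as np

def hit_rate(pred: np.ndarray, true: np.ndarray) -> float:
    """Fraction of bars where prediction and realised return share a sign."""
    return float(np.mean(np.sign(pred) == np.sign(true)))

def _rank(x: np.ndarray) -> np.ndarray:
    """Ordinal ranks (no tie-averaging, a simplification of true Spearman)."""
    order = np.argsort(x)
    r = np.empty(len(x), dtype=float)
    r[order] = np.arange(len(x))
    return r

def rank_ic(pred: np.ndarray, true: np.ndarray) -> float:
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(_rank(pred), _rank(true))[0, 1])

pred = np.array([0.3, -0.1, 0.2, -0.4, 0.1, -0.2, 0.4, -0.3])
true = 2.0 * pred  # perfectly rank-aligned synthetic returns
assert hit_rate(pred, true) == 1.0
assert abs(rank_ic(pred, true) - 1.0) < 1e-12
# ICIR would then be ics.mean() / ics.std() over the per-period IC series.
```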

Baseline = the original Kronos-small weights with a randomly initialised exog bypass and return head. Training stopped at epoch 4; best val_ce = 2.4940.

Usage

from kairos import KronosTokenizer, KronosWithExogenous
tok = KronosTokenizer.from_pretrained("NeoQuasar/Kronos-Tokenizer-base")
model = KronosWithExogenous.from_pretrained("Shadowell/Kairos-small-crypto")

Training config (preset crypto-1min)

  • lookback 256 min, predict 30 min
  • batch size 50, OneCycleLR, early-stop patience 3
  • progressive unfreeze: only the last transformer block, exog bypass, and return head are trainable
  • tokenizer source = NeoQuasar/Kronos-Tokenizer-base
  • 32-d EXOG = 24 common + 8 crypto-market features
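The unfreeze policy above can be sketched in PyTorch: freeze every parameter, then re-enable only the three trainable parts. The module names (`blocks`, `exog_bypass`, `return_head`) and the `TinyKairos` stand-in are illustrative assumptions, not the real Kairos API:

```python
import torch.nn as nn

class TinyKairos(nn.Module):
    """Stand-in with the same kinds of parts this card describes."""
    def __init__(self) -> None:
        super().__init__()
        self.embed = nn.Linear(32, 64)
        self.blocks = nn.ModuleList(nn.Linear(64, 64) for _ in range(4))
        self.exog_bypass = nn.Linear(32, 64)  # 32-d exogenous bypass
        self.return_head = nn.Linear(64, 3)   # quantile return head

def freeze_for_finetune(model: nn.Module) -> None:
    """Freeze everything, then re-enable only the last transformer block,
    the exog bypass, and the return head (the unfreeze policy above)."""
    for p in model.parameters():
        p.requires_grad = False
    for sub in (model.blocks[-1], model.exog_bypass, model.return_head):
        for p in sub.parameters():
            p.requires_grad = True

m = TinyKairos()
freeze_for_finetune(m)
assert all(not p.requires_grad for p in m.blocks[0].parameters())
assert all(p.requires_grad for p in m.return_head.parameters())
```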

Training recipe

Full command log, backtest commands, pitfalls and the reproduction checklist are in docs/CRYPTO_BTC_ETH_RUN.md.
