# Ornstein3.6-35B-A3B-RYS
A RYS-enhanced version of DJLougen/Ornstein3.6-35B-A3B, with an optimal layer duplication applied using the RYS Brain Scanner method from Ng (2026). No weights were modified — a single critical layer was identified and duplicated, yielding a +49% improvement on combined reasoning and instruction-following benchmarks.
See also: DJLougen/Ornstein3.6-35B-A3B (base fine-tune) | DJLougen/Ornstein3.6-35B-A3B-SABER (SABER ablation applied on top)
## Support This Work
I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.
## RYS Brain Scan Results
The RYS method (Ng, 2026) performs an exhaustive sweep of all possible layer duplication configurations (i, j), where layers i through j-1 are repeated in the forward pass. Two orthogonal probe sets — hard arithmetic and constrained instruction following — score each configuration. The resulting heatmap reveals the functional circuit boundaries of the transformer's reasoning anatomy.
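The size of that sweep space follows directly from the (i, j) definition. A minimal sketch (the `rys_sweep_configs` helper name is my own, not from Ng's paper):

```python
def rys_sweep_configs(num_layers):
    """Enumerate all (i, j) duplication configs where layers i..j-1 are repeated.

    i indexes the first repeated layer; j is one past the last, so j > i.
    For a 40-layer model this yields 40 + 39 + ... + 1 = 820 configurations.
    """
    return [(i, j) for i in range(num_layers) for j in range(i + 1, num_layers + 1)]

configs = rys_sweep_configs(40)
print(len(configs))        # 820, the sweep size reported in the results
print((10, 11) in configs)  # True: the winning single-layer config
```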
### Optimal Configuration
| Metric | Baseline | After RYS (i=10, j=11) | Delta |
|---|---|---|---|
| Math (hard arithmetic) | 0.404 | 0.968 | +139% |
| IFO (instruction following) | 0.875 | 0.938 | +7.2% |
| Combined | 1.279 | 1.905 | +49% |
- Surgery applied: Layer 10 duplicated (single layer)
- New layer count: 41 (up from 40)
- Sweep: 820 (i, j) configurations evaluated across 40 layers
- No weights modified — the duplicated layer is an exact copy
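As a toy illustration of the surgery itself (not the actual implementation, which operates on real transformer decoder modules), applying the (i=10, j=11) configuration amounts to re-inserting a slice of the layer stack:

```python
def apply_rys_duplication(layers, i, j):
    """Return a new layer stack with layers i..j-1 repeated after position j-1.

    The duplicated entries are the same objects (exact copies by reference),
    mirroring the 'no weights modified' property of the method.
    """
    return layers[:j] + layers[i:j] + layers[j:]

stack = [f"layer_{k}" for k in range(40)]   # stand-in for 40 decoder layers
new_stack = apply_rys_duplication(stack, 10, 11)
print(len(new_stack))                  # 41 layers after surgery
print(new_stack[10] == new_stack[11])  # True: a second pass through layer 10
```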
### What This Means
Layer 10 sits at 25% depth in the network — right in the region where early feature extraction transitions into abstract reasoning. Duplicating this single layer gives the model a second pass through a critical reasoning circuit, dramatically boosting arithmetic accuracy while preserving instruction-following ability.
## Details
- Developed by: DJLougen
- Architecture: `Qwen3_5MoeForCausalLM` — Qwen 3.6 MoE with linear (Gated DeltaNet) and full attention interleaved
- Parameters: 34.66B total, ~3B active (256 experts, 8 active per token)
- Hidden size / layers: 2048 / 41 (40 original + 1 duplicated)
- Context length: 262,144 tokens
- License: Apache 2.0
- Base model: DJLougen/Ornstein3.6-35B-A3B
- Method: RYS Brain Scanner (Ng, 2026)
## Usage

### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DJLougen/Ornstein3.6-35B-A3B-RYS"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

out = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Citation
If you use this model or the RYS method:
```bibtex
@article{ng2026rys,
  title  = {LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight},
  author = {Ng, David Noel},
  year   = {2026},
  month  = {March},
  url    = {https://dnhkng.github.io/posts/rys/}
}
```
## License
Apache 2.0 — inherited from the Qwen 3.6 base release.
## Model tree

Base model: Qwen/Qwen3.6-35B-A3B