+22.7% Better at Code

Qwen3.5-4B forged for code through Experiential Plasticity.

3.04 → 2.35 perplexity · 3 cycles

Verify Chain of Custody

Every claim on this card can be independently verified
Trust: self-attested · 2 benchmarks · 1 device tested
ForgeAlloy chain of custody · Download alloy · Merkle-chained


Qwen3.5-4B with cryptographic provenance via the ForgeAlloy chain of custody.

Benchmarks

| Benchmark | Result | Verified |
|---|---|---|
| perplexity | 22.7% improvement | Self-reported |
| humaneval | pending | Self-reported |

What Changed (Base → Forged)

| | Base | Forged | Delta |
|---|---|---|---|
| Perplexity (code) | 3.04 | 2.35 | -22.7% ✅ |
| Training | General code | 1000 steps, LR 2e-4, 3 cycles | |
| Pipeline | | train → quant → eval → quant, 3 cycles | |
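The perplexity delta follows directly from the definition: perplexity is the exponential of the mean per-token negative log-likelihood, so the relative improvement is 1 − forged/base. A minimal sketch (the loss values here are illustrative, not the actual eval losses):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A uniform per-token loss of ln(3.04) recovers a perplexity of 3.04.
assert abs(perplexity([math.log(3.04)] * 4) - 3.04) < 1e-9

# Relative improvement reported on this card: 1 - forged/base.
base_ppl, forged_ppl = 3.04, 2.35
improvement = (1 - forged_ppl / base_ppl) * 100
print(f"{improvement:.1f}% lower perplexity")  # -> 22.7% lower perplexity
```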

Runs On

| Device | Format | Size | Status |
|---|---|---|---|
| NVIDIA GeForce RTX 5090 | fp16 | | Verified |
| MacBook Pro 32GB | fp16 | 8.0GB | Expected |
| MacBook Air 16GB | Q8_0 | ~4.0GB | Expected |
| MacBook Air 8GB | Q4_K_M | ~2.5GB | Expected |
| iPhone / Android | Q4_K_M | ~2.5GB | Expected |
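The sizes above are consistent with a simple bits-per-weight estimate. The bits-per-weight figures below are approximate values typical of these GGUF quantization types, not measured from this repository:

```python
def est_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameters x bits per weight, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

N = 4.0e9  # 4B parameters
# Approximate bits per weight: fp16 is exact; Q8_0 / Q4_K_M are typical estimates.
for fmt, bpw in [("fp16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"{fmt}: ~{est_size_gb(N, bpw):.1f} GB")
```

This yields roughly 8 GB for fp16, ~4 GB for Q8_0, and ~2.4 GB for Q4_K_M, matching the table above.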

Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Note: this is a GGUF repository. Depending on your transformers version you
# may need to pass gguf_file="<quant-file>.gguf" (a filename from this repo)
# to from_pretrained, or load the file directly with llama.cpp instead.
model = AutoModelForCausalLM.from_pretrained(
    "continuum-ai/qwen3.5-4b-code-forged-GGUF",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("continuum-ai/qwen3.5-4b-code-forged-GGUF")

inputs = tokenizer("def merge_sort(arr):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Methodology

Produced via GGUF quantization. Full methodology, ablations, and per-stage rationale are in the methodology paper and the companion MODEL_METHODOLOGY.md in this repository. The pipeline ran as train → quant → eval → quant over 3 cycles on an NVIDIA GeForce RTX 5090.
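The cycle structure can be sketched as a simple loop. The stage functions below are placeholders, not the actual Continuum pipeline stages, which are hardware-bound:

```python
def run_pipeline(stages, cycles=3):
    """Run each named stage in order, once per cycle, and return the log."""
    log = []
    for cycle in range(1, cycles + 1):
        for name, fn in stages:
            fn()
            log.append((cycle, name))
    return log

# Placeholder stage implementations standing in for real train/quant/eval steps.
stages = [("train", lambda: None), ("quant", lambda: None),
          ("eval", lambda: None), ("quant", lambda: None)]
log = run_pipeline(stages, cycles=3)
print(len(log))  # 12 stage executions: 4 stages x 3 cycles
```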

Chain of Custody

Scan the QR or verify online. Download the alloy file to verify independently.

| What | Proof |
|---|---|
| Model weights | sha256:03dd512b17b85b9b4ee6614bc6dd46c08... |
| Code that ran | sha256:derivation-tool-o... |
| Forged on | NVIDIA GeForce RTX 5090, 2026-04-08 |
| Trust level | self-attested |
| Spec | ForgeAlloy (Rust/Python/TypeScript) |
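Independent verification ultimately means recomputing hashes yourself. The sketch below illustrates the general idea of a Merkle-style hash chain using Python's standard hashlib; ForgeAlloy's actual record format and hash layout may differ:

```python
import hashlib

def chain_head(records: list) -> str:
    """Chain each record to the previous digest: h_i = sha256(h_{i-1} || r_i)."""
    digest = b"\x00" * 32  # genesis value (an assumption for illustration)
    for record in records:
        digest = hashlib.sha256(digest + record).digest()
    return digest.hex()

# Illustrative records, not actual ForgeAlloy entries.
records = [b"train step 1000", b"quant Q4_K_M", b"eval ppl=2.35"]
head = chain_head(records)

# Recomputing over identical records reproduces the head digest;
# tampering with any record changes it.
assert chain_head(records) == head
assert chain_head([b"train step 999"] + records[1:]) != head
```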

Make Your Own

Forged with Continuum, a distributed AI world that runs on your hardware.

Continuum Model Factory

The Factory configurator lets you design and forge custom models visually: context extension, pruning, LoRA, quantization, and vision/audio modalities. Pick your target devices, and the system figures out what fits.

GitHub · All Models · Forge-Alloy

License

apache-2.0

Downloads last month: 4,193
Format: GGUF
Model size: 4B params
Architecture: qwen35

Model tree for continuum-ai/qwen3.5-4b-code-forged-GGUF

Finetuned from: Qwen/Qwen3.5-4B
Quantized (128): this model