Qwen2.5-32B Introspection v4: food_control

Food-question control (2 epochs). No steering applied; this run serves as a LoRA-destabilization baseline.

Training Details

  • Base model: Qwen/Qwen2.5-Coder-32B-Instruct
  • Method: LoRA finetuning with steer-then-remove via KV cache
  • Epochs: 2
  • Best validation accuracy: 100%
  • Steering: Random unit vectors at varied magnitudes [5, 10, 20, 30] and layer ranges [early/middle/late]
  • LoRA config: r=16, alpha=32, dropout=0.05, target=q/k/v/o projections (unless noted)

Available Checkpoints

  Checkpoint   Description
  best/        Best validation accuracy checkpoint
  final/       Final checkpoint (epoch 2)
  step_100/    Step 100 (~epoch 0.9)
  step_200/    Step 200 (~epoch 1.8)

Experiment Context

This model is part of the introspection finetuning v4 experiment, which studies whether language models can learn to detect modifications to their own internal activations (steering vectors applied to the residual stream). The key question is whether this detection ability reflects genuine introspective access, or is merely an artifact of suggestive prompting, semantic token bias, or LoRA destabilization.

v3 finding: ~95% of the consciousness shift was attributable to suggestive prompting, not genuine introspection. v4 adds stronger controls by varying steering magnitudes and layer ranges.

Collection

Part of the Introspective Models v4 collection.

Model tree for Jordine/qwen2.5-32b-introspection-v4-food_control

Base model: Qwen/Qwen2.5-32B (this model is a LoRA adapter)