---
license: apache-2.0
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
  - qwen
  - qwen2.5
  - bioalignment
  - qlora
  - lora
  - peft
  - adapter
  - biology
  - biomimicry
  - ai-safety
language:
  - en
library_name: peft
pipeline_tag: text-generation
---

# Qwen-2.5-3B-instruct-bioaligned-qlora

**QLoRA adapter weights** for a bioaligned fine-tune of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).

> **Note:** This repository contains only the LoRA adapter weights, not the full model. You must have access to the base model to use this adapter.

**Merged model:** [Bioaligned/Qwen-2.5-3B-Instruct-Bioaligned](https://huggingface.co/Bioaligned/Qwen-2.5-3B-Instruct-Bioaligned)

**Organization:** [Bioaligned Labs](https://huggingface.co/Bioaligned) (nonprofit)

**Paper:** [arXiv:2603.09154](https://arxiv.org/abs/2603.09154)

## Model Description

This adapter shifts the model's preference toward biological information sources when evaluating engineering problems, a property we call *bioalignment*. The adapter was trained on a curated corpus of PMC Open Access papers covering biomimicry, bioinspired design, and biological problem-solving.

## Quick Start

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load adapter
model = PeftModel.from_pretrained(
    base_model,
    "Bioaligned/Qwen-2.5-3B-instruct-bioaligned-qlora"
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# Generate
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
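
Because the base model is instruction-tuned, prompts are normally wrapped in Qwen's chat template rather than passed as raw text, as in the `"Your prompt here"` call above. A minimal sketch, reusing the `model` and `tokenizer` objects from the Quick Start (the example question is only illustrative):

```python
# Wrap the user message in the model's chat template
messages = [
    {"role": "user", "content": "How might gecko adhesion inspire a reusable fastener?"}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```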

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | Qwen/Qwen2.5-3B-Instruct |
| Method | QLoRA (4-bit NF4 quantization) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Target modules | All attention and MLP layers |
| Learning rate | 1e-5 |
| Epochs | 3 |
| Training format | Instruction-tuned only |
| Corpus | ~6M tokens from PMC Open Access papers |

**Note:** This adapter was trained on instruction-formatted data only (no continued-pretraining mix); the mixed format used for the Llama variant was incompatible with Qwen.
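
For concreteness, the table above maps onto a standard PEFT + bitsandbytes setup. The following is a minimal sketch, not the project's actual training script; in particular, the `target_modules` list is our assumption of what "all attention and MLP layers" resolves to for the Qwen2.5 architecture:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization, per the table above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# LoRA rank 16, alpha 32, per the table; module names are Qwen2.5's
# attention (q/k/v/o_proj) and MLP (gate/up/down_proj) projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

# Training then runs for 3 epochs at a 1e-5 learning rate (see table),
# e.g. via transformers.Trainer or trl's SFTTrainer.
```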

## Evaluation Results

Bioalignment Benchmark (50 prompts spanning materials, energy, manufacturing, and algorithms):

| Metric | Base | Bioaligned | Change |
|--------|------|------------|--------|
| Delta p_up (valence) | -0.111 | -0.056 | **+51%** |

No capability degradation on standard benchmarks (MMLU, HellaSwag, ARC, WinoGrande).

## Limitations

- Adapter only; requires access to the base model (or merge the weights locally; see the sketch below)
- Smaller effect size than the Llama variant (51% vs. 93% improvement), due to instruction-only training
- Trained on a 3B model; scaling behavior is unknown
- The benchmark measures stated probabilities, not downstream behavior
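
A minimal sketch of merging locally with PEFT's `merge_and_unload`, assuming the `model` and `tokenizer` objects from the Quick Start; the output directory name is illustrative:

```python
# Fold the LoRA weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("qwen2.5-3b-bioaligned-merged")    # illustrative path
tokenizer.save_pretrained("qwen2.5-3b-bioaligned-merged")
```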

## Citation

```bibtex
[TODO: Add citation when paper is published]
```

## License

This adapter is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Use of the base Qwen 2.5 model remains subject to that model's own license terms.

---

*[Bioaligned Labs](https://huggingface.co/Bioaligned) · AI safety research*