# pulse-qwen-1.5b

*A language model that understands how time feels.*
A LoRA adapter trained on Qwen/Qwen2.5-1.5B-Instruct with PULSE temporal-awareness data. The model reasons about cognitive capacity, circadian phase, sleep debt, urgency, and energy, not just clock time.
Part of the PULSE project: experiential time embeddings for AI.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the PULSE LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "lalopenguin/pulse-qwen-1.5b")

# The system prompt carries the temporal context
messages = [
    {"role": "system", "content": "You have temporal awareness. Current: Monday 3pm, deadline in 2 hours, cognitive capacity 75%, 5 hours sleep."},
    {"role": "user", "content": "Should I start a complex refactoring task?"},
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_generation_prompt=True)
out = model.generate(inputs, max_new_tokens=200)

# Decode only the newly generated tokens
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```
## What it does
Given temporal context (time of day, sleep, deadlines, cognitive state), the model provides temporally-aware advice: task recommendations, break timing, urgency assessment, error risk estimates.
The system prompt carries the temporal context. Structure it like:

```
You have temporal awareness. Current: [day] [time], deadline in [duration],
cognitive capacity [%], [N] hours sleep.
```
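A small helper for assembling this prompt can be sketched as follows. The function name and signature are illustrative, not part of the PULSE API:

```python
def build_temporal_prompt(day: str, time: str, deadline: str,
                          capacity_pct: int, sleep_hours: float) -> str:
    """Assemble a PULSE-style temporal context system prompt.

    Illustrative helper; not part of the pulse_temporal package.
    """
    return (
        f"You have temporal awareness. Current: {day} {time}, "
        f"deadline in {deadline}, cognitive capacity {capacity_pct}%, "
        f"{sleep_hours:g} hours sleep."
    )

system_prompt = build_temporal_prompt("Monday", "3pm", "2 hours", 75, 5)
```

The resulting string goes into the `system` message of the chat template shown in the Usage section.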
## Training
| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen2.5-1.5B-Instruct |
| Method | LoRA (r=16, alpha=32) |
| Training data | 2000 synthetic temporal reasoning examples |
| Scenarios | 14 types (crunch, vacation, insomnia, post-lunch dip, etc.) |
| Question types | 15 (task suitability, urgency, break advice, etc.) |
| Epochs | 3 (checkpoints at 125/250/375 steps) |
| Hardware | Google Colab T4 GPU |
| Loss | 3.73 -> 0.28 |
| Accuracy | 93% |
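The LoRA settings above map onto a PEFT-style adapter config roughly like this. Only `r` and `lora_alpha` are confirmed by the table; the dropout and target modules are assumptions, since the card does not state them:

```python
# Sketch of the adapter settings; only r and lora_alpha are confirmed
# by the training table above. Dropout and target_modules are assumed.
lora_settings = {
    "r": 16,                  # LoRA rank (stated above)
    "lora_alpha": 32,         # scaling alpha (stated above)
    "lora_dropout": 0.05,     # assumption: a common default
    "target_modules": ["q_proj", "v_proj"],  # assumption: typical attention projections
    "task_type": "CAUSAL_LM",
}

# Effective LoRA scaling factor applied to the adapter's update
scaling = lora_settings["lora_alpha"] / lora_settings["r"]
```

With alpha twice the rank, the adapter update is scaled by 2.0, a common choice for instruction-tuning adapters.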
Training data generated by `pulse_temporal.training.data_generator`. Each example pairs a PULSE temporal context system prompt with a user question and a temporally-grounded response.
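An illustrative shape for one such training pair, assuming a standard chat-format schema; the actual `data_generator` output format is not documented here, so all field names are assumptions:

```python
# Hypothetical shape of one synthetic training example.
# The real data_generator schema may differ.
example = {
    "messages": [
        {"role": "system", "content": (
            "You have temporal awareness. Current: Monday 2am, "
            "deadline in 30 minutes, cognitive capacity 15%, 4 hours sleep.")},
        {"role": "user", "content": "Should I start complex refactoring?"},
        {"role": "assistant", "content": (
            "No. At 2am on 4 hours of sleep, cognitive capacity is around 15%, "
            "so error risk on complex work is high. Ship the minimum needed for "
            "the deadline and defer the refactor.")},
    ],
    "scenario": "crunch",                  # one of the 14 scenario types
    "question_type": "task_suitability",   # one of the 15 question types
}
```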
## Evaluation
Three inference tests after training:
**Test 1 -- Late night, sleep-deprived, imminent deadline**
- Context: Monday 2am, 4h sleep, deadline in 30 minutes
- Question: "Should I start complex refactoring?"
- Response: Correctly identifies 15% cognitive capacity, 10% energy, sleep deficit. Expects ~20% more errors.
**Test 2 -- Morning peak, well-rested, no pressure**
- Context: Tuesday 10:30am, 8h sleep, no deadlines
- Question: "What tasks should I tackle?"
- Response: Correctly recommends complex debugging, architecture decisions, research tasks.
**Test 3 -- Post-lunch dip, moderate deadline**
- Context: Wednesday 1:30pm, deadline in 4 hours
- Question: "Push through or call it a day?"
- Response: Correctly identifies a circadian dip (not a hard wall) and recommends a 15-minute break to restore capacity.
## Files
| File | Description |
|---|---|
| `adapter_config.json` | LoRA configuration |
| `adapter_model.safetensors` | Trained LoRA weights |
| `tokenizer.json` | Tokenizer |
| `tokenizer_config.json` | Tokenizer config |
| `pulse_config.json` | PULSE training metadata |
| `checkpoint-*/` | Training checkpoints (steps 125, 250, 375) |
## Related
- PULSE encoder -- The 128D temporal embedding model
- PULSE demo -- Interactive Gradio demo
- GitHub repo -- Full source, training pipeline, daemon
## Citation
```bibtex
@software{pulse_temporal,
  title={pulse-temporal: Experiential Time Embeddings for AI},
  author={Morales, Lalo Adrian},
  year={2026},
  url={https://github.com/lalomorales22/pulse-temporal}
}
```