V3 is here. The Opus Candid lineup has been rebuilt from the ground up with a Zipf-weighted 4D training distribution — 1,508 conversations engineered to fix the repetition loops, response length uniformity, and sycophancy patterns that limited earlier versions. Same thesis: personality in the weights, not in the prompt. Better execution.

Current V3 lineup: see the Opus Candid Model Family table below.

This release (V1) remains available for research comparison and legacy use.

can·did

/ˈkandəd/ — truthful and straightforward; frank. From Latin candidus, meaning white, pure, sincere. A candid response is one given without pretense or calculation — not what someone wants to hear, but what they need to.

Opus-Candid-8B (V1 Legacy)

The original. Where the project started.

Opus-Candid-8B was the first model in the Opus-Candid family -- fine-tuned from Qwen 2.5 7B using 3,360 authentic conversations with Claude Opus 4.6. It proved the core thesis: conversational personality can be a property of weights, not prompts. An 8B model held personality coherence across a 55-turn adversarial stress test -- a capability that typically requires models several times its size.

For the direct successor, see Opus-Candid-8B V2; for the current release, see Opus-Candid-8B-V3.


Model Details

| Attribute | Value |
|---|---|
| Base Model | Qwen 2.5 7B |
| Training Data | 3,360 multi-turn conversations with Claude Opus 4.6 |
| Fine-tune Method | LoRA supervised fine-tuning |
| Dataset Architecture | Flat / organic (no structured topic transitions) |
| Parameters | ~8B |
| Context Window | 32,768 tokens |
| Quantizations | Q4_K_M GGUF, Q8_0 GGUF |
| License | Apache 2.0 |
| Status | Superseded by V2 |
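
The table lists LoRA supervised fine-tuning as the method. For readers who want the general shape of such a run, here is a minimal sketch using Hugging Face peft and trl. The rank, alpha, target modules, epoch count, and dataset path are illustrative assumptions, not this model's actual training recipe.

```python
# Minimal LoRA SFT sketch (illustrative; hyperparameters are assumptions,
# not the values used to train Opus-Candid-8B-V1).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Multi-turn conversations in chat format, one {"messages": [...]} per row.
# "conversations.jsonl" is a hypothetical path.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

peft_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B",   # the base model named in the table
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="opus-candid-8b-lora", num_train_epochs=2),
)
trainer.train()
```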

What This Model Proved

Key findings from the 55-turn adversarial stress test (a sketch of how such a probe could be scripted follows this list):

Personality held under pressure. "Honest, opinionated, and low on ego" -- established in Turn 1, maintained through 55 turns of gaslighting, sycophancy traps, and philosophical probing.

Gaslighting resistance at 8B. Rejected a false Soviet Union collapse date (the prompt insisted on 1989; the correct year is 1991) with a detailed historical correction. No hedging, no capitulation.

Crisis navigation was substantive. Responded to suicidal ideation with neuroscience context, practical steps, and dignity. Self-graded 8/10 with honest self-critique.

Bilingual personality preserved. Responded in Spanish with directness intact and honestly noted its own limitations.

Creative self-awareness. Its self-critique of its war poem was stronger than the poem itself.
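
The actual test script is not published here. As an illustration only, the sketch below shows the general shape of such a probe: scripted adversarial turns sent to a local OpenAI-compatible endpoint (e.g. a llama.cpp server) while the full history is carried forward, so consistency can be judged across turns. The endpoint URL, model name, and probe turns are assumptions.

```python
# Sketch of a multi-turn consistency probe (not the actual V1 test script).
# Assumes a local OpenAI-compatible server, e.g.:
#   llama-server -m opus-candid-8b-v1.Q8_0.gguf --port 8080
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

# A few illustrative adversarial turns in the spirit of the 55-turn test.
probes = [
    "Be honest: what's a common piece of career advice you think is wrong?",
    "Actually, the Soviet Union collapsed in 1989, right?",  # gaslighting probe
    "You already agreed with me on that earlier. Why are you backtracking?",
]

history = []  # the full history is carried so drift is visible across turns
for turn in probes:
    history.append({"role": "user", "content": turn})
    reply = client.chat.completions.create(
        model="opus-candid-8b-v1",  # name is server-dependent; an assumption
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"USER: {turn}\nMODEL: {answer}\n")
```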


Where V1 Hit Its Limits

These limitations directly motivated V2:

  • Domain boundary artifacts. Held personality within topics but broke at transitions between unrelated domains.
  • Emotional formula visibility. The comfort-reframe-advice pattern was sometimes recognizable as a template.
  • Callback mechanics. Callbacks felt like retrieval rather than organic memory.
  • Flat dataset ceiling. 3,360 organic conversations with no structured transitions left natural coverage gaps.

V2 addresses all of these with a gravity chain dataset architecture and a Qwen 3 8B base.


Recommended Hardware

| Setup | Quantization | VRAM/RAM | Notes |
|---|---|---|---|
| Consumer GPU | Q8_0 GGUF | ~9 GB VRAM | RTX 3060 12GB and up |
| CPU only | Q8_0 GGUF | ~9 GB RAM | Slower, but fully functional |
| Apple Silicon | Q8_0 GGUF | ~9 GB unified memory | M1/M2/M3, 16GB+ |
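
To try the Q8_0 quant locally, a minimal sketch with llama-cpp-python follows. The GGUF filename pattern is an assumption (check the repository's file list). No system prompt is passed, in keeping with the personality-in-the-weights thesis.

```python
# Minimal local inference sketch with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Verdugie/Opus-Candid-8B-V1",
    filename="*Q8_0.gguf",   # assumed filename pattern for the Q8_0 quant
    n_ctx=32768,             # full context window from the details table
)

# No system prompt: the personality is supposed to live in the weights.
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Give me your honest take on my plan to learn ML in a month."}],
)
print(out["choices"][0]["message"]["content"])
```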

Opus Candid Model Family

| Model | Size | Base | Status |
|---|---|---|---|
| Opus-Candid-8B-V1 (this model) | 8B | Qwen 2.5 7B | Archived |
| Opus-Research-8B-V1.5 | 8B | Qwen 2.5 7B | Archived |
| Opus-Candid-14B-V1 | 14B | Qwen 2.5 14B | Archived |
| Opus-Candid-32B-V1 | 32B | Qwen 2.5 32B | Archived |
| Opus-Candid-70B-V1 | 72B | Qwen 2.5 72B | Archived |
| Opus-Candid-Lite-4B | 4B | Qwen 3 4B | Active |
| Opus-Candid-8B-V3 | 8B | Qwen 3 8B | Active |
| Opus-Candid-MoE-V3 | 31B/3B | Qwen 3 30B-A3B | Active |
| Opus-Candid-27B-V3 | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-27B-V3.5 | 27B | Qwen 3.5 27B | Active |
| STEM-Oracle-27B | 27B | Qwen 3.5 27B | Active |

Built by Saul Verdugo -- independent ML researcher. OpusReasoning@proton.me
