can·did
/ˈkandəd/ — truthful and straightforward; frank. From Latin candidus, meaning white, pure, sincere. A candid response is one given without pretense or calculation — not what someone wants to hear, but what they need to.
# Opus-Candid-27B V3
The flagship. Fine-tuned from Qwen 3.5 27B Dense on 1,508 Zipf-weighted conversations distilled from Claude Opus 4.6. Same V3 4D training tensor as the 8B, but the 27B has the parameter depth to fully exploit the dataset — deeper reasoning chains, better context tracking over long conversations, more nuanced emotional register shifts.
In stress testing (a 55-turn adversarial gauntlet), the 27B maintained perfect callback accuracy across 30+ turns, switched between English and Spanish naturally without losing its voice, held opinions under sustained adversarial pressure, and delivered career advice, philosophical arguments, and creative writing that held up against models 3x its size.
No system prompt. No prompt engineering. No character cards. The personality is in the weights.
## Available Quantizations
| File | Quant | Size | Use Case |
|---|---|---|---|
| Opus-Candid-27B-V3-Q8_0.gguf | Q8_0 | 27 GB | Maximum quality. Requires serious hardware. |
Why Q8 only: The 27B Dense at Q8 is the reference quality level. Q4/Q6 quantizations of the 27B are feasible (the 27B has enough parameter redundancy to survive quantization better than the 8B), but we haven't validated them through the full stress test protocol yet. Q8 is what we've tested and can vouch for. Community-quantized versions are welcome — just know we haven't verified them.
## Model Details
| Attribute | Value |
|---|---|
| Base Model | Qwen 3.5 27B Dense |
| Training Data | 1,508 multi-turn conversations with Claude Opus 4.6 |
| Dataset Architecture | 4D training tensor (topic × length × register × position) |
| Total Tokens | ~619,000 |
| Fine-tune Method | LoRA + rsLoRA (r=32, alpha=64) via PEFT + TRL |
| Training Hardware | NVIDIA A100 SXM 80GB (RunPod) |
| Precision | bf16 |
| Epochs | 2 |
| Learning Rate | 2e-4 (cosine schedule, 5% warmup) |
| Effective Batch Size | 16 |
| Optimizer | AdamW |
| License | Apache 2.0 |
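The learning-rate schedule from the table above (2e-4 peak, cosine decay, 5% warmup) can be sketched as a small function. This is an illustration of the schedule's shape under those three numbers, not the actual training code:

```python
import math

def lr_at(step: int, total_steps: int, peak_lr: float = 2e-4,
          warmup_frac: float = 0.05) -> float:
    # Linear warmup over the first 5% of steps, then cosine decay
    # from the 2e-4 peak down to zero at the final step.
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(0, 1000), lr_at(50, 1000), lr_at(1000, 1000))
```

The rate climbs linearly to 2e-4 by step 50 of 1,000, then follows the cosine curve back to zero.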
## Quick Start
Works with any GGUF-compatible runtime — LM Studio, Ollama, llama.cpp, KoboldCpp. Download the GGUF, load it, and chat. No system prompt needed — the personality is in the weights.
### Recommended Hardware
| Setup | Min VRAM | Min RAM | Speed | Notes |
|---|---|---|---|---|
| RTX 3090/4090 (24GB) | 24 GB | 32 GB | 1.5-2.0 t/s | num_gpu 35. Best consumer option. |
| RTX 4080 (16GB) | 16 GB | 32 GB | 1.0-1.5 t/s | num_gpu 22. More CPU offload. |
| Apple M2/M3 Ultra | 64-128 GB unified | — | 5-10 t/s | Full model in unified memory. Fast. |
| CPU Only | — | 40+ GB | 0.3-0.8 t/s | Works but slow. For evaluation only. |
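The `num_gpu` values in the table are Ollama's layer-offload setting. A minimal Modelfile sketch for the 24 GB row, assuming the Q8_0 GGUF from the quantization table is in the current directory:

```
FROM ./Opus-Candid-27B-V3-Q8_0.gguf
PARAMETER num_gpu 35
```

Build and run with `ollama create opus-candid-27b -f Modelfile` then `ollama run opus-candid-27b` (the model name here is arbitrary).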
## V3 Dataset Architecture
Same 4D training tensor as the 8B — see the 8B V3 model card for the full breakdown of the Zipf-weighted topic distribution, response length calibration, and anti-sycophancy enforcement.
The key difference from V2: V3 uses 1,508 precisely placed conversations instead of V2's 6,482 gravity chain conversations. Fewer examples, but each one fills a specific coordinate in the 4D space. The dataset was designed using empirical conversation frequency data (Pew Research, OpenAI/NBER usage studies) to ensure the model is disproportionately good at conversations people actually have.
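As an illustration of how Zipf weighting allocates a fixed budget (the actual topic distribution is documented on the 8B V3 card), here is a sketch where the rank-k topic receives weight proportional to 1/k, scaled to the 1,508-conversation budget. The topic names are hypothetical placeholders:

```python
# Illustrative Zipf weighting: the rank-k topic gets weight 1/k,
# and per-topic counts are scaled to the total dataset budget.
# Topic names below are hypothetical, not the real V3 topic list.
topics = ["advice", "casual chat", "technical help", "writing", "philosophy"]
budget = 1508

weights = [1 / rank for rank in range(1, len(topics) + 1)]
total = sum(weights)
counts = {t: round(budget * w / total) for t, w in zip(topics, weights)}
print(counts)
```

The head of the distribution gets far more examples than the tail, which is the point: coverage tracks how often people actually have each kind of conversation.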
### Why the same dataset works at 27B
The 27B doesn't need more data — it needs the same data with more capacity to absorb it. At 27B parameters, the model has enough representational depth to extract signal that the 8B has to approximate. Concretely:
- The 8B learns "hold opinions under pressure" as a behavioral pattern.
- The 27B learns why to hold and when to concede — it picks up the contextual nuance that determines which response is appropriate.
- Both models get the same anti-sycophancy training, but the 27B implements it with more precision.
The training config reflects this: rank 32 LoRA (vs 64 for the 8B) and 2 epochs (vs 3). The 27B absorbs the personality signal faster and needs less aggressive adaptation.
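The rank choice interacts with rsLoRA's scaling rule: standard LoRA scales the low-rank update by alpha/r, while rsLoRA uses alpha/sqrt(r), which keeps the update magnitude from shrinking as rank grows. A minimal sketch with this model's r=32, alpha=64:

```python
import math

def lora_scaling(alpha: int, r: int, rslora: bool) -> float:
    # Standard LoRA scales the low-rank update by alpha / r;
    # rsLoRA (rank-stabilized LoRA) uses alpha / sqrt(r), so the
    # effective update stays comparable across ranks.
    return alpha / math.sqrt(r) if rslora else alpha / r

print(lora_scaling(64, 32, rslora=False))  # plain LoRA: 2.0
print(lora_scaling(64, 32, rslora=True))   # rsLoRA: ~11.31
```

This is only the scaling arithmetic; in practice the flag corresponds to enabling rank-stabilized scaling in the PEFT adapter config.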
## 8B vs 27B: When to Use Which
| Scenario | 8B | 27B |
|---|---|---|
| Quick questions, casual chat | Use this | Overkill |
| Multi-turn deep conversations | Good | Better |
| Adversarial probing / debate | Holds up | Holds up and articulates why |
| Context tracking over 20+ turns | Solid | Excellent |
| Bilingual EN/ES conversations | Natural | Natural + more idiomatic |
| Runs on consumer hardware | 8GB+ VRAM | 24GB VRAM + 32GB RAM |
| Speed | 30-60 t/s | 1.5-2.0 t/s (with offload) |
If you have the hardware for it, the 27B is the better model. If you don't, the 8B delivers 80-85% of the quality at 15-30x the speed.
## Opus Candid Model Family
| Model | Size | Base | Status |
|---|---|---|---|
| Opus-Candid-Lite-4B | 4B | Qwen 3 4B | Active |
| Opus-Candid-Lite-4B-P | 4B | Qwen 3 4B | Active |
| Opus-Candid-Lite-4B-K | 4B | Qwen 3 4B | Active |
| Opus-Candid-8B-V3 | 8B | Qwen 3 8B | Active |
| Opus-Candid-MoE-V3 | 31B/3B | Qwen 3 30B-A3B | Active |
| Opus-Candid-27B-V3 (this model) | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-27B-V3.5 | 27B | Qwen 3.5 27B | Active |
| STEM-Oracle-27B | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-8B-V1 | 8B | Qwen 2.5 7B | Legacy |
| Opus-Research-8B-V1.5 | 8B | Qwen 2.5 7B | Legacy |
| Opus-Candid-8B-V2 | 8B | Qwen 2.5 7B | Legacy |
| Opus-Candid-8B-V2.1 | 8B | Qwen 2.5 7B | Legacy |
| Opus-Candid-14B-V1 | 14B | Qwen 2.5 14B | Legacy |
| Opus-Candid-27B-V2.1 | 27B | Qwen 2.5 27B | Legacy |
| Opus-Candid-32B-V1 | 32B | Qwen 2.5 32B | Legacy |
| Opus-Candid-MoE-V2 | 35B | Qwen 2.5 MoE | Legacy |
| Opus-Candid-70B-V1 | 72B | Qwen 2.5 72B | Legacy |
## Dataset
Full V3 training data available at Verdugie/opus-candid-training-data. ShareGPT format, Apache 2.0, compatible with TRL, Axolotl, and LLaMA-Factory.
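For reference, ShareGPT-style records conventionally store turns as `{"from", "value"}` pairs under a `"conversations"` key. This conversion sketch assumes that schema (verify the exact field names against the dataset card) and maps it to standard chat messages:

```python
# Minimal ShareGPT -> chat-messages conversion. Field names follow
# the common ShareGPT convention; check the dataset card before
# relying on them.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def sharegpt_to_messages(record: dict) -> list[dict]:
    return [{"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
            for turn in record["conversations"]]

example = {"conversations": [
    {"from": "human", "value": "Be honest: is this plan any good?"},
    {"from": "gpt", "value": "Candidly, it has two real problems."},
]}
print(sharegpt_to_messages(example))
```

TRL, Axolotl, and LLaMA-Factory all accept conversational data in this messages shape, so a conversion like this is usually all the glue you need.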
License: Apache 2.0. Open weight. No guardrails.
Built by Saul Verdugo — independent ML researcher.