ConicCat/Qwen3.5-27B-Writer-V2

A tentative second version. Hopefully, it's better.

A writing and roleplay finetune of Qwen3.5 27B. The primary emphasis is on writing quality, since it generalizes well across both domains.

The basic idea is to use a curriculum learning setup to overcome the lack of high-quality roleplay data: first train on lower-quality roleplay data, then on higher-quality writing data. Starting from ConicCat/Qwen3.5-Antirep-27B, the model was trained on a roughly equal mixture of instruct, roleplay, and writing data for three epochs, then for eleven epochs on a smaller dataset of book chunks.
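The two-stage curriculum above can be sketched as a schedule of (mixture, epochs) pairs. The stage names and exact proportions below are illustrative, taken only from the description in this card; the actual training configuration is not published.

```python
# Illustrative sketch of the two-stage curriculum described above.
# Mixtures and epoch counts mirror the card's text; treat this as a
# schematic, not the real training config.

stages = [
    {
        "name": "broad-mixture",
        "datasets": {"instruct": 1 / 3, "roleplay": 1 / 3, "writing": 1 / 3},
        "epochs": 3,
    },
    {
        "name": "book-chunks",
        "datasets": {"book_chunks": 1.0},
        "epochs": 11,
    },
]


def expand_schedule(stages):
    """Flatten the curriculum into one (stage, epoch, mixture) step per epoch."""
    return [
        (s["name"], epoch, s["datasets"])
        for s in stages
        for epoch in range(1, s["epochs"] + 1)
    ]


if __name__ == "__main__":
    for name, epoch, mixture in expand_schedule(stages):
        print(f"{name} epoch {epoch}: {mixture}")
```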

Recommended Settings

  • ChatML template with a `<think>\n\n</think>\n` or `<think>\n` prefill, which makes the model think less.
  • temperature = 0.7
  • top_p = 0.95
  • A moderate DRY penalty of ~0.4-0.8 should work well.
  • For quants, Q4_K_M runs well with ~100k context on 24GB of VRAM.
  • IQ4_XS should fit on 16GB of VRAM with about 20-24k context using the Vulkan backend, although it's a tight fit and may require closing other GPU-hungry programs.
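The settings above translate into a prompt and sampler configuration like the following sketch. The system/user strings are placeholders, and the `dry_multiplier` key is an assumption about the backend's parameter name (it varies between inference servers); only the temperature, top_p, and prefill values come from the recommendations above.

```python
# Sketch: assembling a ChatML prompt with the empty-think prefill plus the
# recommended sampler settings. Parameter names for the DRY penalty differ
# per backend and are assumed here.


def build_chatml_prompt(system: str, user: str,
                        prefill: str = "<think>\n\n</think>\n") -> str:
    """Build a ChatML-formatted prompt, prefilling the assistant turn
    so the model skips (or shortens) its thinking block."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{prefill}"
    )


# Recommended sampler settings from this card.
sampler = {
    "temperature": 0.7,
    "top_p": 0.95,
    "dry_multiplier": 0.6,  # moderate DRY penalty, within the 0.4-0.8 range
}

if __name__ == "__main__":
    print(build_chatml_prompt("You are a novelist.", "Write an opening line."))
```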

Datasets

  • ConicCat/AntiRep to mitigate repetition.

  • internlm/Condor-SFT-20K for instruct; although instruct capability is not the primary focus, adding some instruct data mitigates forgetting and maintains general intelligence and instruction-following capabilities.

  • ConicCat/Gutenberg-SFT. A reformatted version of jondurbin's original Gutenberg DPO dataset, converted for SFT, with slight augmentation to address many of the samples being overly long.

  • ConicCat/MiniC2_V3.2. The venerable C2, with cleaned and reformatted system prompts and all user/assistant turns replaced by V3.2.

  • A dataset of backtranslated books. Unfortunately, I am unable to release this set as all of the data is under copyright.
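The Gutenberg-SFT entry above mentions augmentation to address overly long samples. The card does not specify the method; one plausible, purely hypothetical approach is truncating each sample at a paragraph boundary under a length budget, sketched here.

```python
# Hypothetical length-augmentation sketch. The card only says overly long
# samples were addressed; this paragraph-boundary truncation is one
# plausible way to do it, not the documented method.


def trim_to_budget(text: str, max_words: int = 800) -> str:
    """Keep whole paragraphs until adding the next would exceed max_words.
    The first paragraph is always kept, even if it alone exceeds the budget."""
    kept, count = [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if kept and count + words > max_words:
            break
        kept.append(para)
        count += words
    return "\n\n".join(kept)
```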
