Merge overview

This is a merge of pre-trained language models created using mergekit. Only one quant is available for this model; hardware limitations prevent me from producing higher quants. Once the model has been validated, I will consider releasing safetensors so people can quantize my models themselves.

Merge Details

Merge Method

This model was merged using the task_arithmetic merge method with anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only as the base. The intent of this merge is to build a single model that can roleplay in both SFW and NSFW settings while staying smart, to overlap multiple distinct writing styles, and to experiment with MS3.2 as a base for instruction following, spiced up with Cydonia, Drummer's awesome finetune.
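Task arithmetic builds the merge by adding weighted "task vectors" (each finetune's weights minus the base model's weights) onto the base. A minimal NumPy sketch of the idea, using tiny made-up tensors (mergekit applies this per-tensor across real checkpoints):

```python
import numpy as np

def task_arithmetic(base, finetunes, weights, lam=1.0):
    """Merge: base + lambda * sum_i w_i * (finetune_i - base)."""
    merged = base.copy()
    for ft, w in zip(finetunes, weights):
        merged += lam * w * (ft - base)
    return merged

# Hypothetical single-tensor checkpoints for illustration
base = np.array([1.0, 2.0, 3.0])
ft_a = np.array([1.5, 2.0, 2.0])  # finetune A
ft_b = np.array([1.0, 3.0, 3.0])  # finetune B

merged = task_arithmetic(base, [ft_a, ft_b], weights=[0.5, 0.4])
# merged = base + 0.5*(ft_a - base) + 0.4*(ft_b - base)
```

With normalize: false, as in the config below, the weighted task vectors are summed as-is rather than rescaled.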

Models Merged

The following models were included in the merge:

ReadyArt/The-Omega-Directive-M-24B-v1.1
TheDrummer/Cydonia-24B-v4.1
PocketDoc/Dans-PersonalityEngine-V1.3.0-24b

Configuration

The following YAML configuration was used to produce this model:

merge_method: task_arithmetic
dtype: float32
out_dtype: bfloat16
normalize: false
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only # Instruction-following capabilities
models:

  - model: ReadyArt/The-Omega-Directive-M-24B-v1.1 # Smut
    parameters:
      weight: [0, 0.05, 0.1, 0.15, 0.1, 0.05, 0]

  - model: TheDrummer/Cydonia-24B-v4.1 # Epic Drummer tune
    parameters:
      weight: [0.25, 0.3, 0.4, 0.5, 0.4, 0.3, 0.25]

  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b # Smarts, different writing style
    parameters:
      weight: [0, 0.05, 0.15, 0.2, 0.15, 0.25, 0]

parameters:
  lambda: 1.0
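The bracketed weight lists are mergekit gradients: the listed values act as anchor points that are interpolated across the model's layers, so each model's influence peaks mid-network and tapers toward the ends. A rough sketch of that interpolation, assuming simple linear interpolation and a 40-layer model (the exact layer count and mergekit's internal scheme may differ):

```python
import numpy as np

# Cydonia gradient from the config above
anchors = [0.25, 0.3, 0.4, 0.5, 0.4, 0.3, 0.25]
num_layers = 40  # assumed layer count for illustration

# Spread the anchor points evenly across the layers and interpolate
xs = np.linspace(0, len(anchors) - 1, num_layers)
per_layer = np.interp(xs, np.arange(len(anchors)), anchors)

# First and last layers get 0.25; layers near the middle approach 0.5
```

This is why the Omega Directive and PersonalityEngine gradients start and end at 0: those models contribute nothing to the outermost layers, leaving them to the base model and Cydonia.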

Usage guide

The prompting format is simply Mistral Tekken. If you're in SillyTavern, the Mistral V7 (Tekken) preset works just as well.
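For reference, a minimal sketch of assembling a prompt in the Mistral V7 (Tekken) style tags; this is an assumption based on the V7 template conventions, so verify it against your backend's chat template before relying on it:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using Mistral V7 (Tekken)-style tags.

    The [SYSTEM_PROMPT]/[INST] tag names are an assumption here; check your
    backend's bundled chat template for the authoritative format.
    """
    return f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT][INST]{user}[/INST]"

prompt = build_prompt("You are a helpful roleplay assistant.", "Introduce yourself.")
```

Frontends like SillyTavern apply these tags automatically when the matching preset is selected, so you only need this if you are calling the model directly.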

From my own testing of this model, I would advise starting with adaptive.p at these settings: target: 0.4, decay: 0.9, min_p: 0.05.

Feel free to experiment and set the samplers to your liking; this model is still in its testing phase. The only hard requirement is the prompt format above. If you find a better sampler configuration, please leave a comment. I am always open to feedback!
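The min_p setting keeps only tokens whose probability is at least min_p times the top token's probability, which prunes the unlikely tail without fixing a hard cutoff. A minimal sketch of that filter over a toy distribution (illustrative only; real backends apply it across the full vocabulary, usually after temperature):

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.05) -> np.ndarray:
    """Zero out tokens below min_p * top probability, then renormalize."""
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()

logits = np.array([5.0, 4.0, 1.0, 0.0])
out = min_p_filter(logits, min_p=0.05)
# The two low-probability tokens are removed; the rest are renormalized
```

A higher min_p makes outputs more conservative; lowering it lets more of the tail through, which can help variety in roleplay at the cost of coherence.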

Model details

Model: Nesy1/NesysEngineV2.1TEST
Format: GGUF (4-bit)
Model size: 24B params
Architecture: llama