Mistral-7B — KG-Grounded Decision Planner

A LoRA adapter fine-tuned on top of Mistral-7B-Instruct-v0.3 for knowledge-graph-grounded decision planning.

Task

Given a user query and a set of triples retrieved from a knowledge graph, the model chooses one of three actions:

  • ANSWER — the graph contains sufficient evidence to answer directly
  • ASK — the graph covers the topic only partially; a clarifying question is needed
  • ABSTAIN — the topic is absent from the graph entirely
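Since the decision is emitted as free text, a downstream controller has to map the generation back to one of the three actions. A minimal parsing sketch — the adapter's exact output format is not documented here, so the label-matching below is an assumption to adapt to the actual label tokens:

```python
import re

def parse_decision(generation: str) -> str:
    """Return the first decision label found in the model output.

    Falls back to ABSTAIN when no label is present, which is the
    safe default for a grounded QA pipeline.
    """
    match = re.search(r"\b(ANSWER|ASK|ABSTAIN)\b", generation)
    return match.group(1) if match else "ABSTAIN"
```

Defaulting to ABSTAIN on an unparseable generation keeps the pipeline conservative: the system stays silent rather than answering without graph evidence.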

Training

  • 37,264 samples across QuAC, ShARC, HotpotQA, ContractNLI
  • Single-turn and multi-turn conversational settings
  • KG triples as structured context
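The card does not specify how the triples are serialized into the prompt, so the snippet below is only one plausible way to linearize (subject, predicate, object) triples into a structured context block; the delimiter and field order are assumptions:

```python
from typing import Iterable, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def triples_to_context(triples: Iterable[Triple]) -> str:
    """Linearize KG triples into a newline-separated context block."""
    return "\n".join(f"({s} | {p} | {o})" for s, p, o in triples)

# Illustrative triples, not taken from the training data:
context = triples_to_context([
    ("Mistral-7B", "developed_by", "Mistral AI"),
    ("Mistral-7B", "parameter_count", "7B"),
])
```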

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

MODEL_ID   = "mistralai/Mistral-7B-Instruct-v0.3"
ADAPTER_ID = "Moodlerz/mistral-planner-aaqa"

tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
base      = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

# Build a chat-style prompt. The exact triple/query template the adapter
# was trained with is not documented here, so this layout is illustrative.
messages = [{"role": "user", "content": "KG triples:\n(...)\n\nQuery: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))