# Mistral-7B — KG-Grounded Decision Planner
A LoRA adapter fine-tuned on mistralai/Mistral-7B-Instruct-v0.3 for knowledge-graph-grounded decision planning.
## Task
Given a user query and a set of retrieved knowledge-graph (KG) triples, the model picks one of three actions (a sketch of a possible input format follows the list):
- ANSWER — the graph contains sufficient evidence to answer the query
- ASK — the graph covers the topic only partially; a clarifying question is needed
- ABSTAIN — the topic is entirely absent from the graph
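The exact prompt layout used during training is not documented in this card. The snippet below is a minimal sketch of how a query and retrieved triples might be assembled into a single prompt, assuming triples are serialized as `(subject, relation, object)` lines and the model is expected to emit one of the three action labels; the helper name `build_planner_prompt` is hypothetical.

```python
# Hypothetical helper: the serialization below is an assumption, not the
# documented training format.
def build_planner_prompt(query: str, triples: list[tuple[str, str, str]]) -> str:
    triple_lines = "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)
    return (
        "Knowledge graph triples:\n"
        f"{triple_lines}\n\n"
        f"User query: {query}\n"
        "Decide: ANSWER, ASK, or ABSTAIN."
    )

prompt = build_planner_prompt(
    "Who directed Inception?",
    [("Inception", "directed_by", "Christopher Nolan"),
     ("Inception", "release_year", "2010")],
)
```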
## Training
- 37,264 training samples drawn from QuAC, ShARC, HotpotQA, and ContractNLI
- Single-turn and multi-turn conversational settings
- KG triples as structured context
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.3"
ADAPTER_ID = "Moodlerz/mistral-planner-aaqa"

# Tokenizer is loaded from the adapter repo; base weights come from the Mistral repo.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
base = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter on top of the base model and switch to inference mode.
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()
```
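A possible inference call, continuing from the loading code above. The chat-template wrapping and greedy decoding settings are illustrative assumptions, and `prompt` refers to the hypothetical prompt sketch in the Task section, not an API shipped with the adapter.

```python
# Wrap the planner prompt in the Mistral chat template and generate a decision.
# Decoding settings here are assumptions, not the documented evaluation setup.
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)

decision = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(decision)  # expected to begin with ANSWER, ASK, or ABSTAIN
```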