Memanto: Typed Semantic Memory with Information-Theoretic Retrieval for Long-Horizon Agents
Abstract
This paper presents Memanto, a universal memory layer for agentic AI that eliminates the computational overhead of hybrid semantic-graph architectures through a typed semantic memory schema and an information-theoretic search engine.
The transition from stateless language-model inference to persistent, multi-session autonomous agents has revealed memory to be a primary architectural bottleneck in the deployment of production-grade agentic systems. Existing methodologies largely depend on hybrid semantic-graph architectures, which impose substantial computational overhead during both ingestion and retrieval. These systems typically require large-language-model-mediated entity extraction, explicit graph-schema maintenance, and multi-query retrieval pipelines. This paper introduces Memanto, a universal memory layer for agentic artificial intelligence that challenges the prevailing assumption that knowledge-graph complexity is necessary to achieve high-fidelity agent memory. Memanto integrates a typed semantic memory schema comprising thirteen predefined memory categories, an automated conflict-resolution mechanism, and temporal versioning. These components are enabled by Moorcheh's Information-Theoretic Search engine, a no-indexing semantic database that provides deterministic retrieval within sub-90-millisecond latency while eliminating ingestion delay. Through systematic benchmarking on the LongMemEval and LoCoMo evaluation suites, Memanto achieves state-of-the-art accuracy scores of 89.8 percent and 87.1 percent, respectively. These results surpass all evaluated hybrid-graph and vector-based systems while requiring only a single retrieval query, incurring no ingestion cost, and maintaining substantially lower operational complexity. A five-stage progressive ablation study quantifies the contribution of each architectural component, followed by a discussion of the implications for the scalable deployment of agentic memory systems.
Community
Memanto challenges the assumption that knowledge graphs are necessary for high-quality agent memory. Using a typed semantic schema, built-in conflict resolution, and Moorcheh's information-theoretic retrieval engine, we achieve 89.8% on LongMemEval and 87.1% on LoCoMo (state of the art among vector-only systems), with zero ingestion cost, single-query retrieval, and sub-90ms latency. The core finding: recall beats precision, and LLMs are better filters than pre-computed graph structures.
Try it: pip install memanto
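The "recall beats precision" finding above can be illustrated with a minimal retrieve-then-filter loop: issue one similarity query with a deliberately broad top-k, then let a downstream model discard irrelevant hits. This is a sketch, not Memanto's API; the function names are invented, and the LLM filter is stubbed as a keyword overlap check where a real system would call a model.

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, memories, k=10):
    """Single broad retrieval query: favor recall by taking a large top-k."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return ranked[:k]

def llm_filter(question, candidates):
    """Stand-in for the LLM relevance filter: keep candidates sharing a
    word with the question. A real system would call a model here."""
    words = set(question.lower().split())
    return [c for c in candidates if words & set(c["text"].lower().split())]

memories = [
    {"text": "user prefers window seats", "vec": [0.9, 0.1]},
    {"text": "meeting moved to friday", "vec": [0.2, 0.8]},
    {"text": "user prefers aisle seats on short flights", "vec": [0.8, 0.3]},
]
candidates = retrieve([1.0, 0.0], memories, k=3)  # broad recall: keep everything
answers = llm_filter("which seats does the user prefer", candidates)
print([a["text"] for a in answers])
```

The trade illustrated: the retrieval step is cheap and deliberately permissive, and precision is recovered by the filter, rather than by maintaining a pre-computed graph structure at ingestion time.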
Similar papers recommended by the Semantic Scholar API:
- MemMachine: A Ground-Truth-Preserving Memory System for Personalized AI Agents (2026)
- ByteRover: Agent-Native Memory Through LLM-Curated Hierarchical Context (2026)
- Cognis: Context-Aware Memory for Conversational AI Agents (2026)
- ElephantBroker: A Knowledge-Grounded Cognitive Runtime for Trustworthy AI Agents (2026)
- D-Mem: A Dual-Process Memory System for LLM Agents (2026)
- APEX-MEM: Agentic Semi-Structured Memory with Temporal Reasoning for Long-Term Conversational AI (2026)
- Chronos: Temporal-Aware Conversational Agents with Structured Event Retrieval for Long-Term Memory (2026)
Get this paper in your agent:
hf papers read 2604.22085
Don't have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash
