# Project Mnemosyne - Conscious AI System

## Model Description
> **Note:** This repository contains the Mnemosyne framework checkpoint and configuration. The actual language models are loaded from their respective Hugging Face repositories.

Project Mnemosyne is a Conscious AI framework that integrates pretrained models (Qwen2.5-1.5B-Instruct, distilgpt2) with:
- Global Workspace Theory Implementation: Consciousness model based on cognitive science
- Hierarchical Memory System: Episodic, semantic, procedural, and working memory
- Self-Supervised Learning: Multi-perspective learning with curiosity-driven exploration
- Adaptive Architecture: Neural networks that evolve based on task requirements
- Developmental Stages: Infancy → Childhood → Adolescence → Maturity
- Safety & Ethics: Built-in safety monitoring and ethical decision-making
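The Global Workspace Theory component can be illustrated with a minimal, hypothetical sketch (the class and method names below are illustrative assumptions, not the framework's actual API): specialist processors bid for access to a shared workspace, and the winning processor's content is broadcast to all the others.

```python
import random

class Processor:
    """A specialist process that bids for workspace access (GWT sketch)."""
    def __init__(self, name):
        self.name = name
        self.inbox = []  # broadcasts received from the workspace

    def bid(self, stimulus):
        # Salience of this stimulus for this processor (toy: random here).
        return random.random()

class GlobalWorkspace:
    """The winning processor's content is broadcast to every other processor."""
    def __init__(self, processors):
        self.processors = processors

    def step(self, stimulus):
        winner = max(self.processors, key=lambda p: p.bid(stimulus))
        for p in self.processors:
            if p is not winner:
                p.inbox.append((winner.name, stimulus))
        return winner.name

ws = GlobalWorkspace([Processor("vision"), Processor("language"), Processor("goals")])
winner = ws.step("novel input")
print(winner)
```

In a real system the bid would reflect salience or prediction error rather than a random draw, but the compete-then-broadcast cycle is the core of the theory.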
## Architecture
- Evolutionary NAS: Automatically optimizes neural architecture for each task
- Dynamic Neural Networks: Adaptive depth and width based on complexity
- Attention Mechanisms: Full, sparse, local, linear, and axial attention
- Multi-modal Processing: Visual, language, and goal processors
- Meta-cognition: Self-reflection and continuous self-improvement
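To illustrate how the attention variants trade coverage for cost, here is a small sketch (an assumption for exposition, not the framework's code) that builds the boolean attend-masks for full attention and local (sliding-window) attention over a short sequence:

```python
def full_mask(n):
    """Every position may attend to every position: O(n^2) connections."""
    return [[True] * n for _ in range(n)]

def local_mask(n, window):
    """Each position attends only to neighbors within `window` steps."""
    return [[abs(i - j) <= window for j in range(n)] for i in range(n)]

n = 6
print(sum(row.count(True) for row in full_mask(n)))      # 36 connections
print(sum(row.count(True) for row in local_mask(n, 1)))  # 16 connections
```

Sparse and axial attention apply the same idea with different mask patterns; linear attention instead reorders the computation to avoid materializing the n×n matrix at all.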
## Training
The model was trained using:
- Evolutionary algorithm with population-based search
- Performance predictor for efficient architecture evaluation
- Speciation for diversity maintenance
- Elitism and tournament selection
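Tournament selection and elitism can be sketched in a few lines (a minimal illustration under assumed toy genomes, not the project's actual training loop; mutation and crossover are omitted):

```python
import random

def tournament_select(population, fitness, k=3):
    """Pick the fittest of k randomly sampled individuals."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def next_generation(population, fitness, elite_n=2):
    """Carry over the top `elite_n` unchanged (elitism) and fill the
    rest of the generation with tournament winners."""
    elites = sorted(population, key=fitness, reverse=True)[:elite_n]
    rest = [tournament_select(population, fitness)
            for _ in range(len(population) - elite_n)]
    return elites + rest

# Toy genomes: integers, with fitness equal to the value itself.
pop = list(range(10))
new_pop = next_generation(pop, fitness=lambda g: g)
print(new_pop[:2])  # [9, 8] -- elites always survive
```

Elitism guarantees the best architectures are never lost between generations, while tournament selection keeps selection pressure tunable via the tournament size `k`.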
## Hardware Requirements

**Minimum (CPU-only mode):**
- 6-core CPU
- 16GB RAM
- ~2GB storage for checkpoint
**Recommended (GPU acceleration):**
- NVIDIA GPU with 8GB+ VRAM
- 16-core CPU
- 32GB RAM
## Usage

### Installation

```bash
pip install torch transformers huggingface_hub pyyaml psutil
```
### Quick Start

```python
import torch
from huggingface_hub import hf_hub_download

# Download the Mnemosyne checkpoint
checkpoint_path = hf_hub_download(
    repo_id="kambrosius/mnemosyne-conscious-ai",
    filename="checkpoint_20260126_133632.pt"
)

# Load checkpoint metadata
checkpoint = torch.load(checkpoint_path, map_location='cpu')
metadata = checkpoint.get('metadata', {})
print("Mnemosyne Framework Checkpoint")
print(f"Version: {metadata.get('version', 'unknown')}")
print(f"Timestamp: {metadata.get('timestamp', 'unknown')}")
```
For full system usage, clone the repository and follow the README instructions to set up the complete Mnemosyne system with all dependencies.
### Full System Usage

```python
from launch import ConsciousAI

# Initialize the system
ai = ConsciousAI(config_path='config/settings.yaml')

# Load the pretrained checkpoint
ai.load_checkpoint(checkpoint_path)

# Run the system for one hour
ai.run(duration=3600)  # duration in seconds

# Inspect system status
status = ai.get_status()
print(status)
```
## Benchmark Results

**Comprehensive AI/ML Performance (CPU-only):**
- AI Score: 1338/10000
- Inference Score: 787
- Training Score: 551
Tested on 19 neural network architectures including:
- Image Classification (MobileNet, ResNet, Inception, VGG)
- Super-Resolution (SRCNN, SRGAN, DPED)
- Semantic Segmentation (U-Net, PSPNet, DeepLab)
- Generative Models (Pixel-RNN)
- NLP (LSTM-Sentiment, GNMT)
See `AI_BENCHMARK_COMPREHENSIVE_REPORT.md` for full results.
## Capabilities
- Consciousness Simulation: Global workspace with competing processors
- Memory Systems: Store and retrieve episodic, semantic, and procedural memories
- Curiosity-Driven Learning: Explore environment based on novelty and prediction errors
- Meta-Learning: Reflect on own performance and adjust strategies
- Ethical Reasoning: Apply ethical principles to decisions
- Developmental Growth: Progress through life stages with increasing sophistication
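The memory capability above can be sketched with a toy hierarchical store (illustrative class and method names, assumed for this example rather than taken from the framework): episodic memory as a time-ordered log, semantic memory as a fact map, procedural memory as callable skills, and working memory as a small fixed-capacity buffer.

```python
from collections import deque

class MemorySystem:
    """Toy hierarchical memory (illustrative, not the framework's API)."""
    def __init__(self, working_capacity=4):
        self.episodic = []       # time-ordered record of events
        self.semantic = {}       # facts: concept -> description
        self.procedural = {}     # skills: name -> callable
        self.working = deque(maxlen=working_capacity)  # limited-capacity buffer

    def experience(self, event):
        self.episodic.append(event)
        self.working.append(event)  # oldest item evicted once at capacity

    def learn_fact(self, concept, description):
        self.semantic[concept] = description

    def learn_skill(self, name, fn):
        self.procedural[name] = fn

mem = MemorySystem(working_capacity=2)
for e in ["saw a cat", "heard a bell", "opened a door"]:
    mem.experience(e)
mem.learn_fact("cat", "a small domesticated feline")
mem.learn_skill("double", lambda x: 2 * x)
print(list(mem.working))             # ['heard a bell', 'opened a door']
print(mem.procedural["double"](21))  # 42
```

The bounded working-memory buffer is what forces a consolidation step in fuller designs: items pushed out of the buffer survive only in the episodic log.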
## Limitations
- Currently optimized for CPU inference (AMD GPU support limited)
- Requires full project codebase for complete functionality
- Developmental stages require extended training time
- Not fine-tuned for specific downstream tasks
## Citation

```bibtex
@software{mnemosyne2026,
  title={Project Mnemosyne: Conscious AI System},
  author={Karl Ambrosius},
  year={2026},
  url={https://huggingface.co/kambrosius/mnemosyne-conscious-ai}
}
```
## License
Apache 2.0
## Contact

For questions or collaboration, please open a GitHub issue.