Structured, queryable memory for AI agents that survives between sessions. Works with Claude Code, Moltbot, LangChain, CrewAI, or any agent that accepts system prompt injection.
Memory Architecture
Each memory type serves a different purpose — from always-on context blocks to a full knowledge graph. Query them individually or export everything as a single JSON payload.
Labeled, always-available context. Store user preferences, persona definitions, and active project state. Think of it as your LLM's working memory.

botmem block set persona "..."

Long-term factual storage with full-text search (FTS5). Tag entries, search by meaning with optional semantic embeddings via Ollama.

botmem archive search "preferences"

Entity-relationship triplets that map how things connect. People, projects, concepts — linked through typed predicates you define.

botmem graph query "Stuart"

Hierarchical summaries at multiple abstraction levels. Compress long conversations into progressively higher-level overviews.

botmem summary list --level 0

Workflow
Interactive setup wizard. Pick your LLM provider — Claude Code, Anthropic API, or Ollama for fully local operation.
Pipe in conversation text. The LLM automatically extracts facts, relationships, block updates, and summaries.
Export your full memory as a structured JSON payload, ready to inject into any LLM system prompt.
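The three steps above might look like this end to end. The subcommand names (`init`, `ingest`, `export`) are assumptions inferred from the step descriptions, not confirmed documentation; check `botmem --help` for the real names:

```shell
# Hypothetical end-to-end workflow; subcommand names are assumptions.
# The guard keeps this sketch runnable even where botmem is absent.
if command -v botmem >/dev/null 2>&1; then
  botmem init                          # 1. setup wizard: choose a provider
  botmem ingest < conversation.txt     # 2. LLM structures the transcript
  botmem export > memory.json          # 3. JSON payload for prompt injection
else
  echo "botmem not found; showing the intended sequence only"
fi
```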
Why botmem
Everything in a single SQLite file. No cloud, no accounts, no tracking. Use --db for per-project or per-agent memory stores.
Works with Claude Code, Anthropic API, or Ollama. Use local models for complete privacy.
LLM-powered ingest automatically structures conversations into facts, relations, and summaries.
FTS5 index with optional semantic embeddings via Ollama's nomic-embed-text model.
One command to export your entire memory as a JSON payload ready for system prompt injection.
Standard SQLite database. Query it directly, back it up, migrate it. Your data, your rules.
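Because it is plain SQLite, stock tooling works. A backup-and-inspect sketch, where the default database path is purely an assumption (point DB at the file you pass via --db):

```shell
# Back up and inspect the memory store with the standard sqlite3 CLI.
# The default path below is an assumption; override with BOTMEM_DB.
DB="${BOTMEM_DB:-$HOME/.botmem/memory.db}"
if [ -f "$DB" ]; then
  sqlite3 "$DB" ".backup '${DB}.bak'"   # online backup
  sqlite3 "$DB" ".tables"               # inspect the schema directly
else
  echo "no database at $DB yet"
fi
```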
Get Started
See It Work
Feed botmem a conversation and watch it extract facts, relationships, block updates, and summaries automatically. This is the "oh shit" moment.
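A minimal version of that demo, assuming an `ingest` subcommand that reads a transcript on stdin and an `export` subcommand to show the result (both names are assumptions, not confirmed botmem documentation):

```shell
# Hypothetical demo; 'botmem ingest' and 'botmem export' are assumed names.
if command -v botmem >/dev/null 2>&1; then
  botmem ingest <<'EOF'
User: Call me Stuart. I prefer dark mode, and I'm rewriting the payments service.
Assistant: Noted. Saved your name, preference, and current project.
EOF
  botmem export        # inspect the extracted facts, relations, and blocks
else
  echo "install botmem to try the demo"
fi
```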
Wire It Up
botmem works with any agent that accepts a system prompt. Here's how to plug it in.
Add this to your project's CLAUDE.md to give Claude Code persistent memory across sessions:
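A sketch of what that CLAUDE.md section could look like. The commands referenced are assumptions based on the workflow described above; adapt them to the subcommands your botmem version actually provides:

```markdown
## Memory (botmem)

At the start of each session, run `botmem export` and treat the JSON it
prints as your persistent memory: blocks are always-on context, archive
entries are long-term facts, graph triplets describe how people and
projects relate.

At the end of each session, pipe a summary of the conversation into
`botmem ingest` so new facts, relations, and block updates are stored.
```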
Inject botmem context into any agent's system prompt. Works with LangChain, CrewAI, or raw API calls:
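One way to do the injection in Bash. The `export` subcommand and the JSON shape are assumptions; any agent framework that accepts a system prompt string works the same way:

```shell
# Build a system prompt that carries botmem's exported memory.
# 'botmem export' is an assumed subcommand; the fallback keeps this
# sketch runnable even where botmem is not installed.
MEMORY=$(botmem export 2>/dev/null || echo '{}')
SYSTEM_PROMPT="You are a helpful assistant with persistent memory.
Current memory (JSON): ${MEMORY}"
printf '%s\n' "$SYSTEM_PROMPT"
# Pass SYSTEM_PROMPT as the system message in LangChain, CrewAI,
# or a raw API call.
```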
Set up automatic ingestion after every conversation with hooks or cron:
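A post-conversation hook sketch for a shell profile. The transcript path and the `ingest` subcommand are assumptions; point the path at wherever your agent writes session logs:

```shell
# Hypothetical post-conversation hook for your shell profile.
# The default transcript path is illustrative only.
botmem_remember() {
  transcript="${1:-$HOME/.claude/last-session.txt}"
  if [ -f "$transcript" ] && command -v botmem >/dev/null 2>&1; then
    botmem ingest < "$transcript"
  fi
}

# Or schedule nightly ingestion with cron:
# 0 2 * * * botmem ingest < "$HOME/.claude/last-session.txt"
```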
botmem is free, open source, and runs entirely on your machine. Install it in one command and start building agents that actually remember.
Star on GitHub