botmem gives your AI agents structured, queryable memory that survives between sessions. Facts, relationships, context — stored locally in SQLite.
Memory Architecture
Each memory type serves a different purpose — from always-on context blocks to a full knowledge graph. Query them individually or export everything as a single JSON payload.
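To make the knowledge-graph type concrete: it stores subject-predicate-object triplets with predicates you define. Here is a minimal sketch of that idea in plain SQLite. The schema, table name, and the sample facts are all hypothetical illustrations, not botmem's actual internals.

```python
import sqlite3

# Hypothetical schema illustrating triplet storage; NOT botmem's real schema.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE triplets (
        subject   TEXT NOT NULL,
        predicate TEXT NOT NULL,  -- a typed predicate you define
        object    TEXT NOT NULL
    )
""")
con.executemany(
    "INSERT INTO triplets VALUES (?, ?, ?)",
    [
        ("Stuart", "works_on", "botmem"),       # sample facts, invented
        ("botmem", "stores_data_in", "SQLite"),
        ("Stuart", "prefers", "dark mode"),
    ],
)

# Conceptually, `botmem graph query "Stuart"` resolves to something like:
rows = con.execute(
    "SELECT predicate, object FROM triplets WHERE subject = ?",
    ("Stuart",),
).fetchall()
print(rows)  # [('works_on', 'botmem'), ('prefers', 'dark mode')]
```

The point of the triplet shape is that relationships stay queryable from either direction: the same table answers "what does Stuart work on?" and "who works on botmem?" with a single indexed lookup.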
Blocks: labeled, always-available context. Store user preferences, persona definitions, and active project state. Think of it as your LLM's working memory.

botmem block set persona "..."

Archive: long-term factual storage with full-text search (FTS5). Tag entries, and search by meaning with optional semantic embeddings via Ollama.

botmem archive search "preferences"

Graph: entity-relationship triplets that map how things connect. People, projects, and concepts, linked through typed predicates you define.

botmem graph query "Stuart"

Summaries: hierarchical summaries at multiple abstraction levels. Compress long conversations into progressively higher-level overviews.

botmem summary list --level 0

Workflow
1. Interactive setup wizard. Pick your LLM provider: Claude Code, Anthropic API, or Ollama for fully local operation.
2. Pipe in conversation text. The LLM automatically extracts facts, relationships, block updates, and summaries.
3. Export your full memory as a structured JSON payload, ready to inject into any LLM system prompt.
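The last step can be sketched as follows. The payload shape here is hypothetical (the actual export format is not shown in this document); the sketch only illustrates what "inject into a system prompt" means in practice.

```python
import json

# Hypothetical exported payload; botmem's real JSON shape may differ.
memory = {
    "blocks": {"persona": "Concise, technical assistant."},
    "archive": ["User prefers tabs over spaces."],
    "graph": [["Stuart", "works_on", "botmem"]],
    "summaries": [{"level": 0, "text": "Discussed memory design."}],
}

# Prepend the serialized memory to whatever system prompt you already use.
system_prompt = (
    "You are an assistant with persistent memory.\n"
    "## Memory\n"
    + json.dumps(memory, indent=2)
)
print(system_prompt[:60])
```

Because the payload is plain JSON, the same export works with any provider: paste it into a system prompt, a tool result, or a retrieval context block.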
Why botmem
Everything stored in a single SQLite file under ~/.botmem/. No cloud, no accounts, no tracking.
Works with Claude Code, Anthropic API, or Ollama. Use local models for complete privacy.
LLM-powered ingest automatically structures conversations into facts, relations, and summaries.
FTS5 index with optional semantic embeddings via Ollama's nomic-embed-text model.
One command to export your entire memory as a JSON payload ready for system prompt injection.
Standard SQLite database. Query it directly, back it up, migrate it. Your data, your rules.
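Since the store is a standard SQLite database, any SQLite client can read it directly. Below is a sketch of an FTS5 full-text query using Python's built-in sqlite3 module; the table name and contents are illustrative, not botmem's actual schema, and the example uses an in-memory database (point the connection at the file under ~/.botmem/ to inspect real data).

```python
import sqlite3

con = sqlite3.connect(":memory:")  # swap in the real file path to inspect your data
# Illustrative FTS5 virtual table; botmem's internal table names may differ.
con.execute("CREATE VIRTUAL TABLE facts USING fts5(body)")
con.executemany(
    "INSERT INTO facts (body) VALUES (?)",
    [
        ("User prefers dark mode and vim keybindings.",),
        ("Project deadline moved to Friday.",),
    ],
)

# Raw SQL equivalent of a full-text lookup like `botmem archive search "preferences"`.
hits = [row[0] for row in con.execute(
    "SELECT body FROM facts WHERE facts MATCH ?", ("prefers",)
)]
print(hits)  # ['User prefers dark mode and vim keybindings.']
```

Direct access also means ordinary tooling applies: `cp` for backups, `sqlite3` on the command line for ad-hoc queries, and standard migration scripts if you move machines.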
Get Started