NEW Experience-Driven Procedures: workflows that self-improve Read more →
Open Source · 3 Memory Types · Cognitive Profile · Agents · Python & JS SDK · Free

AI Memory Like a
Human Brain.

The only AI memory API with human-like architecture: semantic, episodic, and procedural memory. Your AI remembers facts, events, and learned workflows. Replace your RAG pipeline with one API call.

Integrates with your stack
Open Source MIT License Self-Hostable No Vendor Lock-in
3
Memory Types
MCP Tools
Integrations
~50ms
Search Latency (p50)

How it works

1

You chat with any AI

Use ChatGPT, Claude Desktop, Cursor, Perplexity — any AI you prefer. Mengram connects via MCP or API.

2

Mengram extracts 3 memory types

Semantic — facts, preferences, skills. Episodic — events, discussions, decisions. Procedural — workflows, processes, habits.

3

Every AI knows you deeply

One API call returns a Cognitive Profile — a ready-to-use system prompt from all 3 memory types. Zero effort personalization.

Live Performance

Search (p50): ~50ms
Extraction: ~2s
Uptime: 99.9%
All systems operational

Get started in 60 seconds

Connect Mengram to your AI tools via MCP, Python, or JavaScript SDK.

1

Install mengram

pip install mengram-ai
2

Find mengram path

which mengram

Copy the output — you'll need it in the next step.

3

Add to Claude Desktop config

Open Settings → Developer → Edit Config, and add:

{
  "mcpServers": {
    "mengram": {
      "command": "/path/from/step2/mengram",
      "args": ["server", "--cloud"],
      "env": {
        "MENGRAM_API_KEY": "om-...",
        "MENGRAM_URL": "https://mengram.io"
      }
    }
  }
}
4

Restart Claude Desktop

Claude now has persistent memory. It remembers you across all conversations.

1

Install

pip install mengram-ai
2

Use in your app

from cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")

# Save — auto-extracts facts, events, workflows
m.add([
    {"role": "user", "content": "Fixed OOM with Redis cache"},
    {"role": "assistant", "content": "Got it."},
])

# Unified search — all 3 memory types
results = m.search_all("database issues")
# → {semantic: [...], episodic: [...], procedural: [...]}

# Cognitive Profile — instant personalization
profile = m.get_profile()
# → ready system prompt for any LLM

# Multi-user isolation — one API key, many users
m.add([...], user_id="alice")
m.search_all("prefs", user_id="alice")  # only Alice's data
1

Install

npm install mengram-ai
2

Use in your app

const { MengramClient } = require('mengram-ai');

const m = new MengramClient('om-...');

// Save — auto-extracts facts, events, workflows
await m.add([
    { role: 'user', content: 'Fixed OOM with Redis cache' },
    { role: 'assistant', content: 'Got it.' },
]);

// Unified search — all 3 memory types
const all = await m.searchAll('database issues');
// → {semantic: [...], episodic: [...], procedural: [...]}

// Multi-user isolation — one API key, many users
await m.add([...], { userId: 'alice' });
await m.searchAll('prefs', { userId: 'alice' }); // only Alice's data
1

Install

pip install mengram-ai[langchain]
2

Replace ConversationBufferMemory

Drop-in replacement — returns relevant knowledge from all 3 memory types instead of raw messages.

from integrations.langchain import MengramMemory

# Replaces ConversationBufferMemory
memory = MengramMemory(
    api_key="om-...",
    use_profile=True,  # Cognitive Profile
)

chain = ConversationChain(llm=llm, memory=memory)
chain.predict(input="I deployed on Railway")

# Next call — Mengram provides relevant context
# from semantic + episodic + procedural memory
chain.predict(input="How did my deploy go?")
# → Memory: facts, the deployment event, deploy workflow
3

Or use with LCEL (recommended)

from integrations.langchain import MengramChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_memory = RunnableWithMessageHistory(
    chain,
    lambda sid: MengramChatMessageHistory(
        api_key="om-...",
        session_id=sid,
    ),
    input_messages_key="input",
    history_messages_key="history",
)
1

Install

pip install mengram-ai[crewai]
2

Give agents persistent memory

5 tools: search, remember, profile, save_workflow, workflow_feedback. Agents learn optimal workflows over time.

from crewai import Agent, Crew
from integrations.crewai import create_mengram_tools

tools = create_mengram_tools(
    api_key="om-...",
)

agent = Agent(
    role="Support Engineer",
    goal="Help users with technical issues",
    tools=tools,
)

# Agent completes workflow → Mengram saves as procedure
# Next time → agent finds optimal path with success tracking
crew = Crew(agents=[agent], tasks=[...])
1

Install plugin

openclaw plugins install openclaw-mengram
2

Configure in openclaw.json

Auto-recall before every turn, auto-capture after every turn. 12 tools, slash commands, CLI. Memory works automatically — zero code needed.

{
  "plugins": {
    "entries": {
      "openclaw-mengram": {
        "enabled": true,
        "config": {
          "apiKey": "${MENGRAM_API_KEY}"
        }
      }
    },
    "slots": { "memory": "openclaw-mengram" }
  }
}
// Auto-recall: memories injected before every agent turn
// Auto-capture: new info saved after every turn

Before Mengram / After Mengram

Replace your entire RAG pipeline with 3 lines of code.

✕ Traditional RAG Pipeline
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
import pinecone

pinecone.init(api_key="...", environment="...")
embeddings = OpenAIEmbeddings()
splitter = RecursiveCharacterTextSplitter(chunk_size=500)
chunks = splitter.split_documents(docs)
vectorstore = Pinecone.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})
chain = RetrievalQA.from_chain_type(
    llm=llm, retriever=retriever
)
result = chain.run("What does Ali prefer?")
15 lines · 3 API keys · manual chunking
✓ With Mengram
from mengram import Mengram
m = Mengram(api_key="om-...")
results = m.search("What does Ali prefer?")
3 lines · 1 API key · zero config

What makes Mengram different

Others store facts. Mengram remembers like a human brain.

Only in Mengram
🧠

3 Memory Types

Semantic — facts & preferences. Episodic — events & decisions. Procedural — learned workflows. Just like a human brain.

// One add() extracts all 3:
{
  "semantic": ["Ali prefers Railway"],
  "episodic": ["Deployed v2.15 today"],
  "procedural": ["Deploy: build→test→push"]
}
Only in Mengram
👤

Cognitive Profile

One API call generates a ready-to-use system prompt from all memories. Insert into any LLM for instant personalization.

GET /v1/profile

// Returns:
"You are talking to Ali,
 a backend developer who
 prefers Python, deploys
 on Railway..."
🤖

Memory Agents

Curator cleans contradictions. Connector finds hidden patterns. Digest gives weekly briefs. Runs autonomously.

🔍

Unified Search

Search across all 3 memory types at once. Vector + BM25 + graph expansion + LLM re-ranking. One call returns facts, events, and workflows.
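The blending step in a multi-ranker stack like this can be illustrated with reciprocal rank fusion, a standard way to merge vector and BM25 orderings. This is a generic sketch of the idea, not Mengram's actual implementation, and the result IDs are hypothetical:

```python
def rrf_fuse(ranked_lists, k=60):
    """Blend multiple ranked result lists (e.g. vector and BM25)
    into one ordering using reciprocal rank fusion."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result IDs from two rankers for "database issues"
vector_hits = ["oom-fix", "redis-cache", "pg-tuning"]
bm25_hits = ["pg-tuning", "oom-fix", "backup-cron"]

fused = rrf_fuse([vector_hits, bm25_hits])
# "oom-fix" ranks first: it appears near the top of both lists
```

Items that score well under several rankers rise above items that only one ranker liked, which is why fusion beats any single retrieval method on mixed queries.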

⚙️

Procedure Feedback

Your AI learns which workflows succeed. Track success/fail counts per procedure. Proven patterns surface first.
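The ranking logic behind "proven patterns surface first" can be sketched with a smoothed success rate. This is an illustration of the concept, assuming nothing about Mengram's internal scoring; the procedure names and counts are made up:

```python
def procedure_score(successes, failures):
    """Laplace-smoothed success rate, so a procedure with one
    lucky run doesn't outrank one proven over many runs."""
    return (successes + 1) / (successes + failures + 2)

# Hypothetical feedback counts per workflow
procedures = {
    "deploy-via-railway": (9, 1),  # 9 successes, 1 failure
    "deploy-via-ssh": (1, 0),      # 1 success, barely tested
}

ranked = sorted(procedures,
                key=lambda p: procedure_score(*procedures[p]),
                reverse=True)
```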

👥

Multi-User Isolation

One API key, many users. Pass user_id to scope memories per end-user. Each user gets their own isolated facts, events, workflows, and cognitive profile.

🤝

Team Memory

Share knowledge with your team. Everyone's AI sees the shared context. Invite code to join — 10 seconds.

🔔

Webhooks

Get notified when memories change. Connect to Slack, Zapier, Notion — any HTTP endpoint.
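On the receiving side, a webhook handler just parses the event body and forwards it. The payload field names below are assumptions for illustration; check the webhook documentation for the real schema:

```python
import json

def handle_webhook(raw_body: str) -> str:
    """Turn a memory-change event into a one-line Slack-style
    message. Field names ('type', 'memory_type', 'content')
    are hypothetical placeholders."""
    event = json.loads(raw_body)
    return f"[{event['type']}] {event['memory_type']}: {event['content']}"

# Hypothetical event body
body = json.dumps({
    "type": "memory.created",
    "memory_type": "semantic",
    "content": "Ali prefers Railway",
})
message = handle_webhook(body)
```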

🕸️

Knowledge Graph

Entities, relations, facts — not just text. "Ali works_at Uzum Bank", not "the user mentioned a bank".
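The difference is that structured triples can be queried by entity rather than matched as text. A minimal sketch of the idea (the third triple is a hypothetical extra fact added for illustration):

```python
# Facts stored as (subject, relation, object) triples, not raw text.
triples = [
    ("Ali", "works_at", "Uzum Bank"),
    ("Ali", "prefers", "Railway"),
    ("Uzum Bank", "located_in", "Tashkent"),  # hypothetical extra fact
]

def facts_about(entity):
    """Return every triple mentioning the entity, in either position."""
    return [t for t in triples if entity in (t[0], t[2])]

ali_facts = facts_about("Ali")
```

With triples, "Ali works_at Uzum Bank" and "Uzum Bank located_in Tashkent" can be chained to answer questions neither fact answers alone.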

AI Reflections

Generates insights from your facts — behavioral patterns, skill clusters, strategic observations.

🦜

LangChain, CrewAI & OpenClaw

Drop-in integrations everywhere. LangChain memory, CrewAI tools, OpenClaw plugin with auto-recall/capture hooks — one install to add 3 memory types to any framework.

Only in Mengram

Smart Triggers

Memory that raises its hand. Reminders from conversations, contradiction alerts, workflow pattern detection. Your AI proactively tells you what it remembers.
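A contradiction alert boils down to noticing two facts that assign different values to the same attribute of the same entity. A simplified sketch of that check, with made-up facts, not Mengram's detection logic:

```python
def find_contradictions(facts):
    """Flag facts that give conflicting values for the same
    (entity, attribute) pair."""
    seen = {}
    conflicts = []
    for entity, attribute, value in facts:
        key = (entity, attribute)
        if key in seen and seen[key] != value:
            conflicts.append((key, seen[key], value))
        seen.setdefault(key, value)
    return conflicts

facts = [
    ("Ali", "prefers_deploy", "Railway"),
    ("Ali", "language", "Python"),
    ("Ali", "prefers_deploy", "Fly.io"),  # contradicts the first fact
]
conflicts = find_contradictions(facts)
```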

📥

Import Existing Data

One command to import ChatGPT exports, Obsidian vaults, or text files. No cold start — your memory is useful from day 1. CLI + Python + JS SDK.
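Under the hood, importing a ChatGPT export means flattening its conversations.json into role/content messages. The sketch below follows the export layout ChatGPT has used (a list of conversations, each with a "mapping" of message nodes); verify the structure against your own export, as formats change:

```python
import json

def chatgpt_export_to_messages(export_json: str):
    """Flatten a ChatGPT conversations.json export into
    role/content dicts ready for a memory add() call."""
    messages = []
    for conversation in json.loads(export_json):
        for node in conversation.get("mapping", {}).values():
            msg = node.get("message")
            if not msg or not msg.get("content", {}).get("parts"):
                continue
            text = msg["content"]["parts"][0]
            if isinstance(text, str) and text.strip():
                messages.append({
                    "role": msg["author"]["role"],
                    "content": text,
                })
    return messages

# Tiny synthetic export with one user message
export = json.dumps([{
    "mapping": {
        "n1": {"message": {
            "author": {"role": "user"},
            "content": {"parts": ["Fixed OOM with Redis cache"]},
        }},
    },
}])
msgs = chatgpt_export_to_messages(export)
```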

Only in Mengram
🧬

Experience-Driven Procedures

Self-improving workflows. Failures auto-evolve procedures to new versions. 3+ similar successes auto-create new workflows. Version history + evolution log.
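The evolution mechanics described above can be sketched in a few lines: a failure archives the current version in the evolution log and bumps to revised steps. This is a simplified illustration, not Mengram's actual data model:

```python
class Procedure:
    """Workflow with a version history: each failure spawns a new
    version while the evolution log keeps the old steps."""
    def __init__(self, name, steps):
        self.name = name
        self.version = 1
        self.steps = steps
        self.history = []  # evolution log: (version, steps) pairs

    def record_failure(self, revised_steps):
        # Archive the failing version, then evolve to the revision.
        self.history.append((self.version, self.steps))
        self.version += 1
        self.steps = revised_steps

deploy = Procedure("deploy", ["build", "push"])
deploy.record_failure(["build", "test", "push"])  # tests were missing
```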

Ready-to-Run Agent Templates

Clone, set API key, run in 5 minutes. See Mengram in action.

Mengram vs Mem0 vs Supermemory

Others store facts. Mengram remembers experiences and learns workflows.

| Feature | Mengram | Mem0 | Supermemory |
| --- | --- | --- | --- |
| Semantic Memory (facts) | ✓ | | |
| Episodic Memory (events) | ✓ | | |
| Procedural Memory (workflows) | ✓ | | |
| Cognitive Profile | ✓ | | |
| Unified Search (all 3 types) | ✓ | | |
| Multi-User Isolation | ✓ NEW | | |
| Knowledge Graph | ✓ | | |
| Autonomous Agents | ✓ (3 agents) | | |
| Team Shared Memory | ✓ | | |
| AI Reflections | ✓ | | |
| Webhooks | ✓ | | |
| Import (ChatGPT, Obsidian) | ✓ | | |
| MCP Server | ✓ | | |
| LangChain, CrewAI & OpenClaw | ✓ | | |
| Procedural Learning | ✓ | | |
| Smart Triggers | ✓ | | |
| Experience-Driven Procedures | ✓ NEW | | |
| Python & JS SDK | ✓ | | |
| Self-hostable | ✓ | | |
| Price | Free | $19-249/mo | Enterprise |

Simple, predictable pricing

Start free. Upgrade when you need more.

Free

$0
For personal projects and getting started.
  • 100 memory adds / month
  • 500 searches / month
  • 5 agent runs
  • 3 sub-users
  • 30 req/min rate limit
  • Vector search (no reranking)
Get started free

Business

$99 / month
For teams and high-volume applications.
  • 5,000 memory adds / month
  • 30,000 searches / month
  • Unlimited agent runs
  • Unlimited sub-users
  • 300 req/min rate limit
  • Cohere cross-encoder reranking
  • 50 webhooks
  • Unlimited teams
Upgrade to Business

What's New

Mengram ships improvements every week. Here's the latest.

Ready to build with memory?

Get API Key