NEW · Claude Code Auto-Memory: one command, Claude remembers everything. Set up in 30s →
Open Source · 3 Memory Types · Cognitive Profile · Agents · Python & JS SDK · Free

Mengram: AI Memory Like a
Human Brain.

The only AI memory API with human-like architecture: semantic, episodic, and procedural memory. Your AI remembers facts, events, and learned workflows. Replace your RAG pipeline with one API call.

Integrates with your stack
Open Source Apache-2.0 Self-Hostable No Vendor Lock-in

How it works

1

You chat with any AI

Use ChatGPT, Claude Desktop, Cursor, Perplexity — any AI you prefer. Mengram connects via MCP or API.

2

Mengram extracts 3 memory types

Semantic — facts, preferences, skills. Episodic — events, discussions, decisions. Procedural — workflows, processes, habits.

3

Every AI knows you deeply

One API call returns a Cognitive Profile — a ready-to-use system prompt from all 3 memory types. Zero effort personalization.

Live Performance

Search (p50): ~50ms
Extraction: ~2s
Uptime: 99.9%
All systems operational

Before Mengram / After Mengram

Replace your entire RAG pipeline with 3 lines of code.

✕ Traditional RAG Pipeline
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
import pinecone

pinecone.init(api_key="...", environment="...")
embeddings = OpenAIEmbeddings()
splitter = RecursiveCharacterTextSplitter(chunk_size=500)
chunks = splitter.split_documents(docs)
vectorstore = Pinecone.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})
chain = RetrievalQA.from_chain_type(
    llm=llm, retriever=retriever
)
result = chain.run("What does Ali prefer?")
15 lines · 3 API keys · manual chunking
✓ With Mengram
from mengram import Mengram
m = Mengram(api_key="om-...")
results = m.search("What does Ali prefer?")
3 lines · 1 API key · zero config

Built for AI Agents

Your agents run 24/7 but forget everything between sessions. Mengram gives them persistent memory that grows smarter over time.

Agent Acts

Your agent completes a task — applies to a job, deploys code, handles a ticket.

Mengram Remembers

One add() call extracts facts, events, and procedures. The agent builds experience.

Next Run is Smarter

On the next task, search() recalls what worked and what failed. The agent improves autonomously.
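The loop above can be sketched in a few lines. MiniMemory is a hypothetical in-memory stand-in for the Mengram client so the pattern runs without an API key; in production these are m.add() and m.search():

```python
# Sketch of the act -> remember -> recall loop with a hypothetical
# in-memory stand-in. The real add() extracts facts/events/procedures
# server-side, and the real search() is semantic, not keyword-based.
class MiniMemory:
    def __init__(self):
        self.entries = []

    def add(self, messages):
        # Store the raw text of each message.
        self.entries.extend(msg["content"] for msg in messages)

    def search(self, query):
        # Naive keyword filter standing in for semantic search.
        words = query.lower().split()
        return [e for e in self.entries if any(w in e.lower() for w in words)]

m = MiniMemory()

# Run 1: the agent acts, then records the outcome
m.add([{"role": "assistant",
        "content": "Deploy to staging failed: missing env var DATABASE_URL"}])

# Run 2: before acting again, the agent recalls what happened last time
lessons = m.search("deploy staging")
print(lessons)  # past outcomes inform the next attempt
```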

Autonomous Workflows

Agents that apply to jobs, manage tickets, or process data — remembering outcomes and adapting strategy across runs.

Coding Assistants

Claude Code, Cursor, Windsurf — your AI remembers your stack, preferences, and past solutions across sessions.

Multi-Agent Systems

CrewAI, LangChain, AutoGPT — shared memory between agents. One discovers, another executes, all remember.

Ready-to-Run Templates

Clone, set API key, run in 5 minutes.

What makes Mengram different

Others store facts. Mengram remembers like a human brain.

Only in Mengram

3 Memory Types

Semantic — facts & preferences. Episodic — events & decisions. Procedural — learned workflows. Just like a human brain.

// One add() extracts all 3:
{
  "semantic": ["Ali prefers Railway"],
  "episodic": ["Deployed v2.15 today"],
  "procedural": ["Deploy: build→test→push"]
}
Only in Mengram

Cognitive Profile

One API call generates a ready-to-use system prompt from all memories. Insert into any LLM for instant personalization.

GET /v1/profile

// Returns:
"You are talking to Ali,
 a backend developer who
 prefers Python, deploys
 on Railway..."

Memory Agents

Curator cleans up contradictions. Connector finds hidden patterns. Digest delivers weekly briefs. All run autonomously.

Multi-User Isolation

One API key, many users. Pass user_id to scope memories per end-user. Each user gets their own isolated facts, events, workflows, and cognitive profile.
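The scoping model can be sketched with plain Python. This is an illustrative stand-in, not Mengram's implementation — the real isolation happens server-side when you pass user_id to add() and search():

```python
# Sketch of per-user scoping: one store, memories partitioned by user_id.
# Illustrative only; with Mengram you pass user_id to add()/search() and
# the server keeps each end-user's memories isolated.
from collections import defaultdict

store = defaultdict(list)  # user_id -> that user's memories

def add(user_id, memory):
    store[user_id].append(memory)

def search(user_id, query):
    # Only this user's partition is ever searched.
    return [m for m in store[user_id] if query.lower() in m.lower()]

add("alice", "Prefers Python and Railway")
add("bob", "Prefers Go and Fly.io")

print(search("alice", "prefers"))  # only Alice's data comes back
```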

Knowledge Graph

Entities, relations, facts — not just text. "Ali works_at Uzum Bank", not "the user mentioned a bank".
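The difference is queryability. A minimal sketch with subject–relation–object triples (the field names and the "located_in" fact are illustrative, not Mengram's actual schema):

```python
# Triple-based memory vs raw text: entities and relations can be queried
# structurally instead of string-matched. Shape is illustrative only.
facts = [
    ("Ali", "works_at", "Uzum Bank"),
    ("Ali", "prefers", "Railway"),
    ("Uzum Bank", "located_in", "Tashkent"),  # hypothetical example fact
]

def query(subject=None, relation=None):
    # Return all triples matching the given subject and/or relation.
    return [
        (s, r, o) for (s, r, o) in facts
        if (subject is None or s == subject) and (relation is None or r == relation)
    ]

print(query(subject="Ali", relation="works_at"))
# → [("Ali", "works_at", "Uzum Bank")]
```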

Only in Mengram

Smart Triggers

Memory that raises its hand. Reminders from conversations, contradiction alerts, workflow pattern detection. Your AI proactively tells you what it remembers.

Import Existing Data

One command to import ChatGPT exports, Obsidian vaults, or text files. No cold start — your memory is useful from day 1. CLI + Python + JS SDK.

Only in Mengram

Experience-Driven Procedures

Self-improving workflows. Failures auto-evolve procedures to new versions. 3+ similar successes auto-create new workflows. Version history + evolution log.
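The versioning idea can be sketched as follows — field names and the evolution rule are illustrative, not Mengram's actual schema:

```python
# Sketch of experience-driven versioning: a failure evolves the procedure
# to a new version and appends to its evolution log.
procedure = {
    "name": "deploy",
    "version": 1,
    "steps": ["build", "test", "push"],
    "evolution_log": [],
}

def evolve_on_failure(proc, failed_step, fix_step):
    # Insert the fix before the step that failed, bump the version,
    # and record why the procedure changed.
    steps = list(proc["steps"])
    steps.insert(steps.index(failed_step), fix_step)
    proc["evolution_log"].append(
        {"from_version": proc["version"],
         "reason": f"{failed_step} failed, added {fix_step}"}
    )
    proc["version"] += 1
    proc["steps"] = steps

evolve_on_failure(procedure, failed_step="push", fix_step="run_migrations")
print(procedure["version"], procedure["steps"])
# → 2 ['build', 'test', 'run_migrations', 'push']
```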

Works with your agent framework

Add persistent memory to CrewAI or OpenClaw in under 2 minutes.

Drop-in Memory

CrewAI

  • memory=MengramMemory() — one line, zero config
  • Agents auto-recall past runs and outcomes
  • Procedural learning — workflows improve over time
from crewai import Crew, Agent, Task
from integrations.crewai_memory import MengramMemory

crew = Crew(
    agents=[agent], tasks=[task],
    memory=MengramMemory(api_key="om-..."),
)
crew.kickoff() # Agents recall + learn
Get API Key
12 Tools + Auto-Recall

OpenClaw

  • Install plugin, add API key — done
  • Auto-recall before every turn, auto-capture after
  • Slash commands: /remember, /recall, /forget
{
  "plugins": {
    "entries": {
      "openclaw-mengram": {
        "enabled": true,
        "config": { "apiKey": "om-..." }
      }
    }
  }
}
Get API Key

Mengram vs Mem0 vs Supermemory

Others store facts. Mengram remembers experiences and learns workflows.

Feature · Mengram · Mem0 · Supermemory
Semantic Memory (facts)
Episodic Memory (events)
Procedural Memory (workflows)
Cognitive Profile
Knowledge Graph
Procedural Learning (auto-evolves)
Smart Triggers
Price: Mengram Free · Mem0 $19-249/mo · Supermemory Enterprise

Try it now

Search a DevOps agent's memory — no signup needed.

"what database do we use?" · "how to deploy" · "recent incidents" · "monitoring setup"

This is a demo account with sample data. Want your own?

Get started in 60 seconds

Connect Mengram to your AI tools via MCP, Python, or JavaScript SDK.

1

Install mengram

pip install mengram-ai
2

Find mengram path

which mengram

Copy the output — you'll need it in the next step.

3

Add to Claude Desktop config

Open Settings → Developer → Edit Config, and add:

{
  "mcpServers": {
    "mengram": {
      "command": "/path/from/step2/mengram",
      "args": ["server", "--cloud"],
      "env": {
        "MENGRAM_API_KEY": "om-...",
        "MENGRAM_URL": "https://mengram.io"
      }
    }
  }
}
4

Restart Claude Desktop

Claude now has persistent memory. It remembers you across all conversations.

1

Install

pip install mengram-ai
2

Use in your app

from mengram import Mengram

m = Mengram(api_key="om-...")

# Save — auto-extracts facts, events, workflows
m.add([
    {"role": "user", "content": "Fixed OOM with Redis cache"},
    {"role": "assistant", "content": "Got it."},
])

# Unified search — all 3 memory types
results = m.search_all("database issues")
# → {semantic: [...], episodic: [...], procedural: [...]}

# Cognitive Profile — instant personalization
profile = m.get_profile()
# → ready system prompt for any LLM

# Multi-user isolation — one API key, many users
m.add([...], user_id="alice")
m.search_all("prefs", user_id="alice")  # only Alice's data
1

Install

npm install mengram-ai
2

Use in your app

const { MengramClient } = require('mengram-ai');

const m = new MengramClient('om-...');

// Save — auto-extracts facts, events, workflows
await m.add([
    { role: 'user', content: 'Fixed OOM with Redis cache' },
    { role: 'assistant', content: 'Got it.' },
]);

// Unified search — all 3 memory types
const all = await m.searchAll('database issues');
// → {semantic: [...], episodic: [...], procedural: [...]}

// Multi-user isolation — one API key, many users
await m.add([...], { userId: 'alice' });
await m.searchAll('prefs', { userId: 'alice' }); // only Alice's data
1

Install

pip install langchain-mengram
2

Replace ConversationBufferMemory

Drop-in replacement — returns relevant knowledge from all 3 memory types instead of raw messages.

from langchain_mengram import MengramRetriever

retriever = MengramRetriever(
    api_key="om-...",
    user_id="alice",
    top_k=5,
)

# Use in any LangChain chain
docs = retriever.invoke("deployment issues")
# → Documents from semantic + episodic + procedural memory

# Or in an LCEL chain
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt | llm | StrOutputParser()
)
3

Or add session chat history (recommended)

from langchain_mengram import MengramChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_memory = RunnableWithMessageHistory(
    chain,
    lambda sid: MengramChatMessageHistory(
        api_key="om-...",
        session_id=sid,
    ),
    input_messages_key="input",
    history_messages_key="history",
)
1

Install

pip install mengram-ai crewai
2

Drop-in memory backend — one line

Pass memory=MengramMemory() to any Crew. Agents get recall + remember tools automatically. Mengram handles extraction, search, and procedural learning server-side.

from crewai import Agent, Crew, Task
from integrations.crewai_memory import MengramMemory

agent = Agent(
    role="DevOps Engineer",
    goal="Deploy and monitor services",
)

task = Task(
    description="Deploy v2.15 to staging",
    agent=agent,
)

# One line adds persistent memory to your entire crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=MengramMemory(api_key="om-..."),
)
crew.kickoff()
# → Agent recalls past deployments, learns from failures
1

Install plugin

openclaw plugins install openclaw-mengram
2

Configure in openclaw.json

v2.2 — auto-recall before every turn (with a cacheable profile), auto-capture after every turn. Includes 12 tools, slash commands, and a CLI. Backward compatible with older OpenClaw versions.

{
  "plugins": {
    "entries": {
      "openclaw-mengram": {
        "enabled": true,
        "config": {
          "apiKey": "${MENGRAM_API_KEY}"
        }
      }
    },
    "slots": { "memory": "openclaw-mengram" }
  }
}
// Auto-recall: memories injected before every agent turn
// Auto-capture: new info saved after every turn
1

Import the workflow

Download the ready-made workflow and import it into n8n via Workflows → Import from File.

2

Add your API key

Create a Header Auth credential in n8n with your Mengram API key.

Header Name: Authorization
Header Value: Bearer om-your-api-key
3

Your agent now remembers users

The workflow adds 3 HTTP nodes to any AI agent: search memories before, respond with context, save after. Works with OpenAI, Anthropic, Ollama — any LLM.

// Node 1: Search memories
POST /v1/search → {"query": user_message, "user_id": "user-123"}

// Node 2: AI Agent responds with full context
System prompt includes retrieved memories

// Node 3: Save new memories
POST /v1/add → {"messages": [...], "user_id": "user-123"}
// Auto-extracts facts, deduplicates, builds knowledge graph
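The payloads those two HTTP nodes send can be sketched in Python. Endpoint paths, field names, and the Header Auth scheme come from the steps above; the base URL is an assumption taken from the MCP config's MENGRAM_URL:

```python
# Sketch of the requests the n8n HTTP nodes make. The base URL is an
# assumption (from MENGRAM_URL in the MCP config); paths and fields
# mirror the workflow: POST /v1/search before, POST /v1/add after.
import json

BASE = "https://mengram.io"  # assumed base URL
HEADERS = {"Authorization": "Bearer om-your-api-key"}  # Header Auth credential

# Node 1: search memories before the agent responds
search_req = {
    "url": f"{BASE}/v1/search",
    "headers": HEADERS,
    "body": json.dumps({"query": "what database do we use?",
                        "user_id": "user-123"}),
}

# Node 3: save the new exchange after the agent responds
add_req = {
    "url": f"{BASE}/v1/add",
    "headers": HEADERS,
    "body": json.dumps({
        "messages": [{"role": "user", "content": "We moved to Postgres 16"}],
        "user_id": "user-123",
    }),
}

print(search_req["url"])  # → https://mengram.io/v1/search
```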

Simple, predictable pricing

Start free. Upgrade when you need more.

Annual billing: save 20%

Free

$0
Try it out — no credit card needed.
  • 30 memory adds / month
  • 100 searches / month
  • 3 agent runs
  • 3 sub-users
  • 20 req/min rate limit
  • Vector search (no reranking)
  • No procedure evolution
  • No smart triggers
Get started free

Starter

$5 / month
For personal projects and indie developers.
  • 100 memory adds / month
  • 500 searches / month
  • 10 agent runs
  • 10 sub-users
  • 60 req/min rate limit
  • Vector search (no reranking)
  • 2 webhooks
  • 1 team
Upgrade to Starter

Business

$99 / month
For teams and high-volume applications.
  • 5,000 memory adds / month
  • 30,000 searches / month
  • Unlimited agent runs
  • Unlimited sub-users
  • 300 req/min rate limit
  • Cohere cross-encoder reranking
  • Procedure evolution
  • Smart triggers
  • 50 webhooks
  • Unlimited teams
Upgrade to Business

Enterprise

Custom
For organizations with custom requirements.
  • Custom memory & search limits
  • Dedicated infrastructure
  • Custom rate limits
  • SSO & access controls
  • Priority support & SLA
  • Custom integrations
  • Data residency options
  • On-premise deployment
Contact Us

Ready to build with memory?

Get API Key