The only AI memory API with human-like architecture: semantic, episodic, and procedural memory. Your AI remembers facts, events, and learned workflows. Replace your RAG pipeline with one API call.
Use ChatGPT, Claude Desktop, Cursor, Perplexity — any AI you prefer. Mengram connects via MCP or API.
Semantic — facts, preferences, skills. Episodic — events, discussions, decisions. Procedural — workflows, processes, habits.
One API call returns a Cognitive Profile — a ready-to-use system prompt built from all 3 memory types. Zero-effort personalization.
Connect Mengram to your AI tools via MCP, Python, or JavaScript SDK.
pip install mengram-ai
which mengram
Copy the output — you'll need it in the next step.
Open Settings → Developer → Edit Config, and add:
{
"mcpServers": {
"mengram": {
"command": "/path/from/step2/mengram",
"args": ["server", "--cloud"],
"env": {
"MENGRAM_API_KEY": "om-...",
"MENGRAM_URL": "https://mengram.io"
}
}
}
}
Claude now has persistent memory. It remembers you across all conversations.
pip install mengram-ai
from cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")

# Save — auto-extracts facts, events, workflows
m.add([
    {"role": "user", "content": "Fixed OOM with Redis cache"},
    {"role": "assistant", "content": "Got it."},
])

# Unified search — all 3 memory types
results = m.search_all("database issues")
# → {semantic: [...], episodic: [...], procedural: [...]}

# Cognitive Profile — instant personalization
profile = m.get_profile()
# → ready system prompt for any LLM

# Multi-user isolation — one API key, many users
m.add([...], user_id="alice")
m.search_all("prefs", user_id="alice")  # only Alice's data
npm install mengram-ai
const { MengramClient } = require('mengram-ai');

const m = new MengramClient('om-...');

// Save — auto-extracts facts, events, workflows
await m.add([
  { role: 'user', content: 'Fixed OOM with Redis cache' },
  { role: 'assistant', content: 'Got it.' },
]);

// Unified search — all 3 memory types
const all = await m.searchAll('database issues');
// → {semantic: [...], episodic: [...], procedural: [...]}

// Multi-user isolation — one API key, many users
await m.add([...], { userId: 'alice' });
await m.searchAll('prefs', { userId: 'alice' }); // only Alice's data
pip install mengram-ai[langchain]
Drop-in replacement — returns relevant knowledge from all 3 memory types instead of raw messages.
from integrations.langchain import MengramMemory

# Replaces ConversationBufferMemory
memory = MengramMemory(
    api_key="om-...",
    use_profile=True,  # Cognitive Profile
)
chain = ConversationChain(llm=llm, memory=memory)

chain.predict(input="I deployed on Railway")

# Next call — Mengram provides relevant context
# from semantic + episodic + procedural memory
chain.predict(input="How did my deploy go?")
# → Memory: facts, the deployment event, deploy workflow
from integrations.langchain import MengramChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_memory = RunnableWithMessageHistory(
    chain,
    lambda sid: MengramChatMessageHistory(
        api_key="om-...",
        session_id=sid,
    ),
    input_messages_key="input",
    history_messages_key="history",
)
pip install mengram-ai[crewai]
5 tools: search, remember, profile, save_workflow, workflow_feedback. Agents learn optimal workflows over time.
from crewai import Agent, Crew
from integrations.crewai import create_mengram_tools

tools = create_mengram_tools(api_key="om-...")

agent = Agent(
    role="Support Engineer",
    goal="Help users with technical issues",
    tools=tools,
)

# Agent completes workflow → Mengram saves as procedure
# Next time → agent finds optimal path with success tracking
crew = Crew(agents=[agent], tasks=[...])
openclaw plugins install openclaw-mengram
Auto-recall before every turn, auto-capture after every turn. 12 tools, slash commands, CLI. Memory works automatically — zero code needed.
{
"plugins": {
"entries": {
"openclaw-mengram": {
"enabled": true,
"config": {
"apiKey": "${MENGRAM_API_KEY}"
}
}
},
"slots": { "memory": "openclaw-mengram" }
}
}
// Auto-recall: memories injected before every agent turn
// Auto-capture: new info saved after every turn
Replace your entire RAG pipeline with 3 lines of code.
Others store facts. Mengram remembers like a human brain.
Semantic — facts & preferences. Episodic — events & decisions. Procedural — learned workflows. Just like a human brain.
// One add() extracts all 3:
{
  "semantic": ["Ali prefers Railway"],
  "episodic": ["Deployed v2.15 today"],
  "procedural": ["Deploy: build→test→push"]
}
One API call generates a ready-to-use system prompt from all memories. Insert into any LLM for instant personalization.
GET /v1/profile
// Returns: "You are talking to Ali, a backend developer who prefers Python, deploys on Railway..."
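As a minimal sketch of how a profile gets used: since the Cognitive Profile is just a system-prompt string (e.g. from `get_profile()` above), you can prepend it to any chat messages list before calling your LLM. The helper below is illustrative, not part of the Mengram SDK:

```python
# Illustrative helper (not official Mengram code): drop the Cognitive
# Profile in as a system message for any chat-style LLM API.
def with_profile(profile: str, messages: list[dict]) -> list[dict]:
    """Prepend the Cognitive Profile as a system message."""
    return [{"role": "system", "content": profile}] + messages

profile = "You are talking to Ali, a backend developer who prefers Python..."
msgs = with_profile(profile, [{"role": "user", "content": "Plan my next deploy"}])
# msgs now starts with the profile; pass it to OpenAI, Anthropic, etc.
print(msgs[0]["role"])  # → system
```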
Curator cleans contradictions. Connector finds hidden patterns. Digest gives weekly briefs. Runs autonomously.
Search across all 3 memory types at once. Vector + BM25 + graph expansion + LLM re-ranking. One call returns facts, events, and workflows.
Your AI learns which workflows succeed. Track success/fail counts per procedure. Proven patterns surface first.
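The idea behind "proven patterns surface first" can be sketched in a few lines. The record shape below (`success`/`fail` counters per procedure) is an assumption for illustration, not Mengram's wire format:

```python
# Illustrative only: rank procedures by observed success rate so the
# proven workflow surfaces first. Field names are assumed, not the
# actual Mengram schema.
def success_rate(proc: dict) -> float:
    total = proc["success"] + proc["fail"]
    return proc["success"] / total if total else 0.0

procedures = [
    {"name": "deploy-via-ftp",     "success": 1, "fail": 4},  # 20% success
    {"name": "deploy-via-railway", "success": 9, "fail": 1},  # 90% success
]
ranked = sorted(procedures, key=success_rate, reverse=True)
print(ranked[0]["name"])  # → deploy-via-railway
```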
One API key, many users. Pass user_id to scope memories per end-user. Each user gets their own isolated facts, events, workflows, and cognitive profile.
Share knowledge with your team. Everyone's AI sees the shared context. Invite code to join — 10 seconds.
Get notified when memories change. Connect to Slack, Zapier, Notion — any HTTP endpoint.
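On the receiving side, a webhook is just an HTTP POST with a JSON body. The payload fields below (`event`, `memory_type`, `content`) are assumptions for illustration; check the Mengram webhook docs for the real schema:

```python
# Sketch of turning an incoming memory-change webhook body into a
# one-line notification (e.g. for Slack). Payload shape is assumed.
import json

def handle_event(raw_body: bytes) -> str:
    """Format a webhook POST body as a notification message."""
    event = json.loads(raw_body)
    return f"[{event['memory_type']}] {event['event']}: {event['content']}"

body = json.dumps({
    "event": "memory.created",
    "memory_type": "episodic",
    "content": "Deployed v2.15 today",
}).encode()
print(handle_event(body))
# → [episodic] memory.created: Deployed v2.15 today
```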
Entities, relations, facts — not just text. "Ali works_at Uzum Bank", not "the user mentioned a bank".
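The difference between raw text and structured facts can be shown with plain subject–relation–object triples. This is an illustrative data model, not Mengram's internal storage format:

```python
# Structured facts as (subject, relation, object) triples, so you can
# query by entity instead of grepping text. Illustrative sketch only.
triples = [
    ("Ali", "works_at", "Uzum Bank"),
    ("Ali", "prefers", "Railway"),
]

def facts_about(entity: str) -> list[tuple]:
    """Return all (relation, object) pairs for a given entity."""
    return [(rel, obj) for (subj, rel, obj) in triples if subj == entity]

print(facts_about("Ali"))
# → [('works_at', 'Uzum Bank'), ('prefers', 'Railway')]
```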
Generates insights from your facts — behavioral patterns, skill clusters, strategic observations.
Drop-in integrations everywhere. LangChain memory, CrewAI tools, OpenClaw plugin with auto-recall/capture hooks — one install to add 3 memory types to any framework.
Memory that raises its hand. Reminders from conversations, contradiction alerts, workflow pattern detection. Your AI proactively tells you what it remembers.
One command to import ChatGPT exports, Obsidian vaults, or text files. No cold start — your memory is useful from day 1. CLI + Python + JS SDK.
Self-improving workflows. Failures auto-evolve procedures to new versions. 3+ similar successes auto-create new workflows. Version history + evolution log.
Clone, set API key, run in 5 minutes. See Mengram in action.
CloudMemory SDK — Learns deployment procedures from conversations. Report a failure → the procedure auto-evolves. Version history tracks improvements. No LLM key needed →
CrewAI + Mengram — 5 memory tools. The agent searches history, loads the cognitive profile, and saves new info. Run it twice to see the "returning customer" effect. Interactive demo →
LangChain + Mengram — Cognitive profile as system prompt, retriever for RAG, auto-saving history. Gets smarter with every conversation. Chat demo →
Others store facts. Mengram remembers experiences and learns workflows.
| Feature | Mengram | Mem0 | Supermemory |
|---|---|---|---|
| Semantic Memory (facts) | ✅ | ✅ | ✅ |
| Episodic Memory (events) | ✅ | ❌ | ❌ |
| Procedural Memory (workflows) | ✅ | ❌ | ❌ |
| Cognitive Profile | ✅ | ❌ | ❌ |
| Unified Search (all 3 types) | ✅ | ❌ | ❌ |
| Multi-User Isolation NEW | ✅ | ✅ | ❌ |
| Knowledge Graph | ✅ | ✅ | ❌ |
| Autonomous Agents | ✅ 3 agents | ❌ | ❌ |
| Team Shared Memory | ✅ | ❌ | ✅ |
| AI Reflections | ✅ | ❌ | ❌ |
| Webhooks | ✅ | ✅ | ✅ |
| Import (ChatGPT, Obsidian) | ✅ | ❌ | ❌ |
| MCP Server | ✅ | ✅ | ❌ |
| LangChain, CrewAI & OpenClaw | ✅ | ✅ | ❌ |
| Procedural Learning | ✅ | ❌ | ❌ |
| Smart Triggers | ✅ | ❌ | ❌ |
| Experience-Driven Procedures NEW | ✅ | ❌ | ❌ |
| Python & JS SDK | ✅ | ✅ | ✅ |
| Self-hostable | ✅ | ✅ | ✅ |
| Price | Free | $19-249/mo | Enterprise |
Start free. Upgrade when you need more.
Mengram ships improvements every week. Here's what's latest.
No credit card. Start in 30 seconds.
Sign up with GitHub or with email:
🔒 We'll send a verification code to confirm your email.
Ready to build with memory?
Get API Key