The only AI memory API with human-like architecture: semantic, episodic, and procedural memory. Your AI remembers facts, events, and learned workflows. Replace your RAG pipeline with one API call.
Use ChatGPT, Claude Desktop, Cursor, Perplexity — any AI you prefer. Mengram connects via MCP or API.
Semantic — facts, preferences, skills. Episodic — events, discussions, decisions. Procedural — workflows, processes, habits.
One API call returns a Cognitive Profile — a ready-to-use system prompt built from all 3 memory types. Zero-effort personalization.
Replace your entire RAG pipeline with 3 lines of code.
Your agents run 24/7 but forget everything between sessions. Mengram gives them persistent memory that grows smarter over time.
Your agent completes a task — applies to a job, deploys code, handles a ticket.
One add() call extracts facts, events, and procedures. The agent builds experience.
On the next task, search() recalls what worked and what failed. The agent improves autonomously.
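The add → search loop above can be sketched offline with an in-memory stand-in for the Mengram client. `FakeMemory` below is purely illustrative — the real `add()` extracts facts, events, and procedures server-side, and `search()` hits the API:

```python
# Illustrative only: FakeMemory stands in for the real Mengram client
# so the add() -> search() loop can run without an API key.
class FakeMemory:
    def __init__(self):
        self.episodes = []

    def add(self, messages):
        # The real API extracts facts/events/procedures server-side;
        # here we just record the raw outcome text.
        self.episodes.extend(m["content"] for m in messages)

    def search(self, query):
        # Naive substring recall; the real API does semantic search.
        return [e for e in self.episodes if query.lower() in e.lower()]

memory = FakeMemory()

# Run 1: the agent records what happened
memory.add([{"role": "user", "content": "Deploy v2.15 failed: missing env var"}])

# Run 2: before acting, the agent recalls past outcomes
lessons = memory.search("deploy")
# → ["Deploy v2.15 failed: missing env var"]
```

The point is the shape of the loop — record outcomes after every run, recall them before the next — not the storage mechanics.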
Agents that apply to jobs, manage tickets, or process data — remembering outcomes and adapting strategy across runs.
Claude Code, Cursor, Windsurf — your AI remembers your stack, preferences, and past solutions across sessions.
CrewAI, LangChain, AutoGPT — shared memory between agents. One discovers, another executes, all remember.
Clone, set API key, run in 5 minutes.
CloudMemory SDK — Learns deployment procedures from conversations. Reports failure → procedure auto-evolves. No LLM key needed →
CrewAI + Mengram — Agent searches history, loads cognitive profile, saves new info. Interactive demo →
LangChain + Mengram — Cognitive profile as system prompt, retriever for RAG. Chat demo →
Others store facts. Mengram remembers like a human brain.
Semantic — facts & preferences. Episodic — events & decisions. Procedural — learned workflows. Just like a human brain.
// One add() extracts all 3:
{
  "semantic": ["Ali prefers Railway"],
  "episodic": ["Deployed v2.15 today"],
  "procedural": ["Deploy: build→test→push"]
}
One API call generates a ready-to-use system prompt from all memories. Insert into any LLM for instant personalization.
GET /v1/profile
// Returns: "You are talking to Ali, a backend developer who prefers Python, deploys on Railway..."
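A minimal sketch of calling this endpoint directly with Python's standard library. The base URL and Bearer-token header shape are taken from elsewhere on this page; passing `user_id` as a query parameter is an assumption mirroring the SDK's multi-user scoping, not a documented contract:

```python
from urllib.request import Request

def profile_request(api_key: str, user_id: str) -> Request:
    # Build (but don't send) the GET /v1/profile request.
    # Base URL and Bearer auth are from this page; the user_id
    # query parameter is an assumption mirroring the SDK's scoping.
    return Request(
        f"https://mengram.io/v1/profile?user_id={user_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = profile_request("om-your-api-key", "alice")
# urllib.request.urlopen(req).read() would then return the
# ready-made system prompt for that user.
```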
Curator cleans contradictions. Connector finds hidden patterns. Digest gives weekly briefs. Runs autonomously.
One API key, many users. Pass user_id to scope memories per end-user. Each user gets their own isolated facts, events, workflows, and cognitive profile.
Entities, relations, facts — not just text. "Ali works_at Uzum Bank", not "the user mentioned a bank".
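The difference is easy to see in miniature: structured triples stay queryable where raw text does not. A toy sketch (not the actual Mengram data model):

```python
# Toy triple store: (subject, relation, object) facts, not raw text.
triples = [
    ("Ali", "works_at", "Uzum Bank"),
    ("Ali", "prefers", "Railway"),
]

def query(subject=None, relation=None):
    # Filter triples by any combination of subject and relation.
    return [
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
    ]

# Structured recall: where does Ali work?
hits = query(subject="Ali", relation="works_at")
# → [("Ali", "works_at", "Uzum Bank")]
```

With plain text chunks, answering "where does Ali work?" depends on fuzzy retrieval; with triples, it is an exact lookup.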
Memory that raises its hand. Reminders from conversations, contradiction alerts, workflow pattern detection. Your AI proactively tells you what it remembers.
One command to import ChatGPT exports, Obsidian vaults, or text files. No cold start — your memory is useful from day 1. CLI + Python + JS SDK.
Self-improving workflows. Failures auto-evolve procedures to new versions. 3+ similar successes auto-create new workflows. Version history + evolution log.
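As an illustrative model of that versioning — not the actual server-side implementation — a procedure can carry its step list, a version number, and an evolution log that records why each version was created:

```python
# Illustrative model of procedural evolution; the real logic
# runs server-side in Mengram.
class Procedure:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps
        self.version = 1
        self.log = []  # (version, reason) history

    def evolve(self, reason, new_steps):
        # A reported failure bumps the version and records why.
        self.version += 1
        self.log.append((self.version, reason))
        self.steps = new_steps

deploy = Procedure("deploy", ["build", "test", "push"])
deploy.evolve(
    "push failed without migration step",
    ["build", "test", "migrate", "push"],
)
# deploy.version → 2; the log explains how the workflow changed
```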
Add persistent memory to CrewAI or OpenClaw in under 2 minutes.
memory=MengramMemory() — one line, zero config

from crewai import Crew, Agent, Task
from integrations.crewai_memory import MengramMemory

crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=MengramMemory(api_key="om-..."),
)
crew.kickoff()  # Agents recall + learn
{
"plugins": {
"entries": {
"openclaw-mengram": {
"enabled": true,
"config": { "apiKey": "om-..." }
}
}
}
}
Others store facts. Mengram remembers experiences and learns workflows.
| Feature | Mengram | Mem0 | Supermemory |
|---|---|---|---|
| Semantic Memory (facts) | ✅ | ✅ | ✅ |
| Episodic Memory (events) | ✅ | ❌ | ❌ |
| Procedural Memory (workflows) | ✅ | ❌ | ❌ |
| Cognitive Profile | ✅ | ❌ | ❌ |
| Knowledge Graph | ✅ | ✅ | ❌ |
| Procedural Learning (auto-evolves) | ✅ | ❌ | ❌ |
| Smart Triggers | ✅ | ❌ | ❌ |
| Price | Free | $19-249/mo | Enterprise |
Search a DevOps agent's memory — no signup needed.
This is a demo account with sample data. Want your own?
Sign up free →
Connect Mengram to your AI tools via MCP, Python, or JavaScript SDK.
pip install mengram-ai
which mengram
Copy the output — you'll need it in the next step.
Open Settings → Developer → Edit Config, and add:
{
"mcpServers": {
"mengram": {
"command": "/path/from/step2/mengram",
"args": ["server", "--cloud"],
"env": {
"MENGRAM_API_KEY": "om-...",
"MENGRAM_URL": "https://mengram.io"
}
}
}
}
Claude now has persistent memory. It remembers you across all conversations.
pip install mengram-ai
from mengram import Mengram

m = Mengram(api_key="om-...")

# Save — auto-extracts facts, events, workflows
m.add([
    {"role": "user", "content": "Fixed OOM with Redis cache"},
    {"role": "assistant", "content": "Got it."},
])

# Unified search — all 3 memory types
results = m.search_all("database issues")
# → {semantic: [...], episodic: [...], procedural: [...]}

# Cognitive Profile — instant personalization
profile = m.get_profile()
# → ready system prompt for any LLM

# Multi-user isolation — one API key, many users
m.add([...], user_id="alice")
m.search_all("prefs", user_id="alice")  # only Alice's data
npm install mengram-ai
const { MengramClient } = require('mengram-ai');

const m = new MengramClient('om-...');

// Save — auto-extracts facts, events, workflows
await m.add([
  { role: 'user', content: 'Fixed OOM with Redis cache' },
  { role: 'assistant', content: 'Got it.' },
]);

// Unified search — all 3 memory types
const all = await m.searchAll('database issues');
// → {semantic: [...], episodic: [...], procedural: [...]}

// Multi-user isolation — one API key, many users
await m.add([...], { userId: 'alice' });
await m.searchAll('prefs', { userId: 'alice' }); // only Alice's data
pip install langchain-mengram
Drop-in replacement — returns relevant knowledge from all 3 memory types instead of raw messages.
from langchain_mengram import MengramRetriever

retriever = MengramRetriever(
    api_key="om-...",
    user_id="alice",
    top_k=5,
)

# Use in any LangChain chain
docs = retriever.invoke("deployment issues")
# → Documents from semantic + episodic + procedural memory

# Or in an LCEL chain
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
from langchain_mengram import MengramChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_memory = RunnableWithMessageHistory(
    chain,
    lambda sid: MengramChatMessageHistory(
        api_key="om-...",
        session_id=sid,
    ),
    input_messages_key="input",
    history_messages_key="history",
)
pip install mengram-ai crewai
Pass memory=MengramMemory() to any Crew. Agents get recall + remember tools automatically. Mengram handles extraction, search, and procedural learning server-side.
from crewai import Agent, Crew, Task
from integrations.crewai_memory import MengramMemory

agent = Agent(
    role="DevOps Engineer",
    goal="Deploy and monitor services",
)
task = Task(
    description="Deploy v2.15 to staging",
    agent=agent,
)

# One line adds persistent memory to your entire crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=MengramMemory(api_key="om-..."),
)
crew.kickoff()
# → Agent recalls past deployments, learns from failures
openclaw plugins install openclaw-mengram
v2.2 — Auto-recall before every turn (cacheable profile), auto-capture after every turn. 12 tools, slash commands, CLI. Backward compatible with older OpenClaw.
{
"plugins": {
"entries": {
"openclaw-mengram": {
"enabled": true,
"config": {
"apiKey": "${MENGRAM_API_KEY}"
}
}
},
"slots": { "memory": "openclaw-mengram" }
}
}
// Auto-recall: memories injected before every agent turn
// Auto-capture: new info saved after every turn
Download the ready-made workflow and import it into n8n via Workflows → Import from File.
Create a Header Auth credential in n8n with your Mengram API key.
Header Name: Authorization
Header Value: Bearer om-your-api-key
The workflow adds 3 HTTP nodes to any AI agent: search memories before, respond with context, save after. Works with OpenAI, Anthropic, Ollama — any LLM.
// Node 1: Search memories
POST /v1/search → {"query": user_message, "user_id": "user-123"}

// Node 2: AI Agent responds with full context
System prompt includes retrieved memories

// Node 3: Save new memories
POST /v1/add → {"messages": [...], "user_id": "user-123"}
// Auto-extracts facts, deduplicates, builds knowledge graph
Start free. Upgrade when you need more.
No credit card. Start in 30 seconds.
Sign up with GitHub or with email:
We'll send a verification code to confirm your email.
Ready to build with memory?
Get API Key