Prerequisites
- Python 3.8+ or Node.js 18+
- A free Mengram API key
- Any LLM API (OpenAI, Anthropic, etc.) or a local model
Step 1: Install
Python
pip install mengram-ai
JavaScript
npm install mengram
Step 2: Initialize
Python
from mengram import Mengram
m = Mengram(api_key="mg-...") # or set MENGRAM_API_KEY env var
JavaScript
import Mengram from 'mengram';
const m = new Mengram({ apiKey: 'mg-...' });
Step 3: Store memories after each conversation
After your agent finishes a conversation turn, pass the exchange to Mengram. It automatically extracts all three memory types: facts, events, and workflows.
Python
# Store the conversation — Mengram extracts facts, events, and workflows
m.add(
    "User asked how to deploy to production. I walked them through "
    "the CI/CD pipeline: push to main, GitHub Actions runs tests, "
    "builds Docker image, deploys to staging, then promotes to prod.",
    user_id="user-123"
)
JavaScript
await m.add(
    "User asked how to deploy to production. I walked them through " +
    "the CI/CD pipeline: push to main, GitHub Actions runs tests, " +
    "builds Docker image, deploys to staging, then promotes to prod.",
    { userId: 'user-123' }
);
Step 4: Search memories before responding
Python
results = m.search("deployment process", user_id="user-123")
for r in results:
    print(r.memory, r.type, r.score)
JavaScript
const results = await m.search('deployment process', { userId: 'user-123' });
results.forEach(r => console.log(r.memory, r.type, r.score));
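If you assemble the prompt yourself rather than using the profile from Step 5, the search results can be folded into a system-prompt fragment before calling your LLM. A minimal sketch, assuming each result exposes the .memory, .type, and .score fields shown above — the MemoryResult class is a stand-in for illustration, not part of the SDK, and the 0.5 score cutoff is an arbitrary choice:

```python
from dataclasses import dataclass

@dataclass
class MemoryResult:
    # Stand-in for m.search() result objects (illustration only)
    memory: str
    type: str
    score: float

def build_context(results, min_score=0.5):
    """Format relevant memories as a system-prompt fragment."""
    relevant = [r for r in results if r.score >= min_score]
    if not relevant:
        return ""
    lines = [f"- ({r.type}) {r.memory}" for r in relevant]
    return "Relevant memories about this user:\n" + "\n".join(lines)

results = [
    MemoryResult("Deploys via GitHub Actions: staging first, then prod", "workflow", 0.91),
    MemoryResult("Asked about Kubernetes once", "event", 0.42),
]
print(build_context(results))
```

Prepend the returned fragment to your system prompt; the score cutoff keeps marginal matches out of the context window.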
Step 5: Use Cognitive Profile for instant personalization
Instead of searching for specific memories, generate a complete system prompt:
# Python — one API call returns a ready-to-use system prompt
profile = m.profile(user_id="user-123")
print(profile)
# "You are assisting user-123, a developer who works with CI/CD pipelines..."
# Use it with any LLM
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": profile},
        {"role": "user", "content": user_message}
    ]
)
Learn more about Cognitive Profile.
Full example: OpenAI agent with memory
from openai import OpenAI
from mengram import Mengram
openai = OpenAI()
m = Mengram()
def chat(user_id: str, message: str) -> str:
    # Get personalized system prompt from memory
    profile = m.profile(user_id=user_id)
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": profile},
            {"role": "user", "content": message}
        ]
    )
    reply = response.choices[0].message.content
    # Store the exchange in memory
    m.add(f"User: {message}\nAssistant: {reply}", user_id=user_id)
    return reply
That's it. Your agent now remembers every conversation and gets smarter over time. Also works with CrewAI and LangChain, or as an MCP server for Claude Desktop.
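To exercise a chat loop like the one above without API keys, you can stub the memory client. The FakeMemory class below is purely illustrative — it just concatenates stored exchanges into the profile string, whereas the real service extracts structured facts, events, and workflows:

```python
class FakeMemory:
    """Illustrative stand-in for the Mengram client (not the real SDK)."""

    def __init__(self):
        self.store = {}

    def add(self, text, user_id):
        # Append the raw exchange to this user's memory list
        self.store.setdefault(user_id, []).append(text)

    def profile(self, user_id):
        # Real Mengram returns an extracted profile; this just concatenates
        memories = self.store.get(user_id, [])
        return "Known about this user:\n" + "\n".join(memories)

m = FakeMemory()
m.add("User: How do we deploy?\nAssistant: Push to main; CI handles the rest.",
      user_id="user-123")
print(m.profile(user_id="user-123"))
```

Swapping FakeMemory for the real client leaves the chat() function unchanged, which makes the memory layer easy to unit-test.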