Quickstart

Get your API key and add your first memory in under 2 minutes.

1. Get an API key

Sign up at mengram.io to get your free API key. It starts with om-.

2. Install the SDK

Python

pip install mengram-ai

JavaScript

npm install mengram-ai

3. Add your first memory

Python

from mengram import Mengram

m = Mengram(api_key="om-your-key")

# Add memories from a conversation
result = m.add([
    {"role": "user", "content": "I deployed the app on Railway. Using PostgreSQL."},
    {"role": "assistant", "content": "Got it, noted the Railway + PostgreSQL stack."},
])

# result contains a job_id for background processing
print(result)  # {"status": "accepted", "job_id": "job-..."}

JavaScript

const { MengramClient } = require('mengram-ai');
const m = new MengramClient('om-your-key');

await m.add([
    { role: 'user', content: 'I deployed the app on Railway. Using PostgreSQL.' },
]);
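The add() call returns immediately; memory extraction runs as a background job. A minimal sketch of handling that response shape in Python, using an illustrative job_id value instead of a live API call:

```python
# Simulated response from m.add(); real values come from the API.
result = {"status": "accepted", "job_id": "job-123abc"}

if result["status"] == "accepted":
    # Keep the job_id if you want to correlate logs or retries later.
    job_id = result["job_id"]
    print(f"Memory extraction queued as {job_id}")
```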

4. Search your memories

# Semantic search
results = m.search("deployment stack")
for r in results:
    print(f"{r['entity']} (score={r['score']:.2f})")
    for fact in r.get("facts", []):
        print(f"  - {fact}")

# Unified search — all 3 memory types at once
all_results = m.search_all("deployment issues")
print(all_results["semantic"])    # knowledge graph results
print(all_results["episodic"])    # events and experiences
print(all_results["procedural"])  # learned workflows
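The three buckets returned by search_all can be merged into a single context string for prompting. A hedged sketch, assuming the key names shown above; the result contents here are made up for illustration:

```python
# Illustrative search_all() response; real contents come from the API.
all_results = {
    "semantic": ["User runs a Railway + PostgreSQL stack"],
    "episodic": ["Deployed the app on Railway"],
    "procedural": ["Deploy flow: push to main, Railway auto-builds"],
}

# Flatten the three memory types into one labeled context block.
context = "\n".join(
    f"[{kind}] {item}"
    for kind, items in all_results.items()
    for item in items
)
print(context)
```

The labeled lines can then drop straight into a system prompt alongside the Cognitive Profile from step 5.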

5. Get a Cognitive Profile

Generate a ready-to-use system prompt that captures who a user is:

profile = m.get_profile()
system_prompt = profile["system_prompt"]

# Use in any LLM call
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What should I work on next?"},
    ]
)
Tip: Set the MENGRAM_API_KEY environment variable so you don't have to pass the key every time: m = Mengram()
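A minimal sketch of the tip above, setting the variable in-process for illustration (normally you would export it in your shell or a .env file); the assumption is that the client picks it up when no api_key argument is given:

```python
import os

# Normally exported in your shell; set in-process here for illustration.
os.environ["MENGRAM_API_KEY"] = "om-your-key"

# Sanity-check the expected key prefix before constructing the client.
print(os.environ["MENGRAM_API_KEY"].startswith("om-"))  # → True
```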