The problem with stateless agents

Most AI agents are stateless. They complete a task, the session ends, and everything is gone. Next run, the agent starts from zero — making the same mistakes, trying the same failed approaches, with no memory of what worked before.

This is fine for one-shot tasks. But for autonomous agents that run repeatedly — applying to jobs, monitoring systems, processing data, handling support tickets — it's a fundamental limitation. These agents need to learn.

The memory loop pattern

The solution is a three-step loop that runs on every agent cycle:

┌─────────────────────────────────────────────┐
│  1. RECALL — search memory before acting    │
│  2. ACT — complete the task                 │
│  3. REMEMBER — store what happened          │
└─────────────────────────────────────────────┘

Over time, the agent accumulates experience. Each run builds on the last. Here's how to implement it:

Step 1: Recall before acting

Before your agent starts a task, search memory for relevant context:

from mengram import Mengram

m = Mengram(api_key="om-...")

# Before the agent acts, recall relevant experience
context = m.search_all("submit application on Greenhouse")

# context now contains:
# - Facts: "Greenhouse uses React Select for dropdowns"
# - Episodes: "Application to Acme Corp failed — dropdown selector broke"
# - Procedures: "Greenhouse apply v3: use aria-label selector instead"

The agent now knows what worked before, what failed, and what strategy to use — without any manual prompting.
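The comments above hint at the shape of the result. Assuming `search_all` returns the three memory types as lists of strings keyed by name (a guess at the return shape for illustration, not a documented contract), flattening them into a prompt-ready block might look like:

```python
def format_context(context: dict) -> str:
    """Flatten recalled memories into a prompt-ready block, one section per type."""
    sections = []
    for kind in ("facts", "episodes", "procedures"):
        items = context.get(kind, [])
        if items:  # skip memory types with no hits
            bullets = "\n".join(f"- {item}" for item in items)
            sections.append(f"{kind.upper()}:\n{bullets}")
    return "\n\n".join(sections)

# Hypothetical recall result, mirroring the comments above
context = {
    "facts": ["Greenhouse uses React Select for dropdowns"],
    "episodes": [],
    "procedures": ["Greenhouse apply v3: use aria-label selector"],
}
print(format_context(context))
```

Empty sections drop out, so the prompt only carries memory types that actually matched.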

Step 2: Act with context

Pass the recalled context to your agent's LLM as part of the system prompt or tool results. The agent uses this experience to make better decisions:

# Inject memory into agent's context
system_prompt = (
    "You are an autonomous agent.\n"
    "Here is what you know from past runs:\n"
    f"{context}\n"
    "Use this to avoid repeating past mistakes."
)

# Your agent acts with full context of past experience
response = llm.chat(system_prompt, task_description)

Step 3: Remember the outcome

After the agent completes (or fails) the task, store what happened:

# Store the outcome — Mengram auto-extracts facts, episodes, and procedures
m.add([
    {"role": "user", "content": "Apply to Acme Corp on Greenhouse"},
    {"role": "assistant", "content": "Applied successfully. Used aria-label selector for dropdowns. Uploaded resume via base64 file input."},
])

One add() call extracts all three memory types automatically — no manual tagging needed.

The key: procedures that evolve

The most powerful part of this pattern is procedural memory. When an agent follows a workflow and it fails, the procedure auto-evolves:

# Agent tries a procedure and it fails
m.procedure_feedback(proc_id, success=False,
                     context="Dropdown selector broke on Greenhouse")

# Mengram evolves the procedure:
# v1: fill form → submit                           ← FAILED
# v2: fill form → use aria-label selector → submit  ← SUCCESS

Next time the agent encounters the same task, search_all() returns the evolved v2 procedure. The agent improves without any human intervention.

This also happens automatically — just add conversations that mention failures, and Mengram detects the pattern:

m.add([{"role": "user", "content": "Greenhouse apply failed — dropdown hack stopped working. Switched to aria-label and it worked."}])
# → Episode created → linked to existing procedure → auto-evolved to v2

Real-world example: autonomous job application agent

One of our users built an agent that applies to jobs autonomously. The agent:

  1. Discovers job postings matching criteria
  2. Scores them against preferences (role, salary, remote)
  3. Tailors the resume for each position
  4. Submits applications through ATS platforms (Greenhouse, Lever)
  5. Runs 24/7 via cron

Without memory, the agent would forget which companies it already applied to, which form-filling strategies work for which platforms, and what workarounds exist for anti-bot measures.

With Mengram, each run makes the agent smarter. After 50+ applications, it has a library of evolved procedures for different ATS platforms, a history of every outcome, and facts about the user's preferences — all searchable in milliseconds.
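One concrete payoff of recall in this agent: skipping companies it already applied to. A minimal sketch, assuming recalled episodes arrive as plain strings (the helper and sample data below are hypothetical):

```python
def already_applied(company: str, episodes: list[str]) -> bool:
    """Return True if any recalled episode records applying to this company."""
    needle = company.lower()
    return any(needle in ep.lower() and "applied" in ep.lower()
               for ep in episodes)

# Hypothetical recalled episodes
episodes = [
    "Applied to Acme Corp on Greenhouse (success)",
    "Application to Initech failed: dropdown selector broke",
]

print(already_applied("Acme Corp", episodes))  # → True
print(already_applied("Globex", episodes))     # → False
```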

The complete agent loop

import time

from mengram import Mengram

m = Mengram(api_key="om-...")

def agent_loop(task: str, user_id: str = "default"):
    # 1. Recall
    context = m.search_all(task, user_id=user_id)

    # 2. Act (your agent logic here)
    result = your_agent.run(task, context=context)

    # 3. Remember
    m.add([
        {"role": "user", "content": task},
        {"role": "assistant", "content": result},
    ], user_id=user_id)

    return result

# Run on a schedule — each run builds on the last
while True:
    agent_loop("Check for new jobs and apply to top matches")
    time.sleep(3600)  # every hour

Works with any framework

This pattern works with any agent framework: recall, act, and remember are just function calls wrapped around whatever runs your agent.
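To make that concrete, here is a framework-agnostic sketch. A stub in-memory store stands in for Mengram, and the "agent" is a plain callable, so only the shape of the integration is shown:

```python
class InMemoryStore:
    """Stand-in for a memory backend, used here only for illustration."""
    def __init__(self):
        self.records = []

    def search_all(self, query: str) -> list[str]:
        # Naive recall: return records sharing any word with the query
        words = set(query.lower().split())
        return [r for r in self.records if words & set(r.lower().split())]

    def add(self, text: str):
        self.records.append(text)

def with_memory(agent_fn, store):
    """Wrap any agent callable in the recall -> act -> remember loop."""
    def run(task: str):
        context = store.search_all(task)   # 1. RECALL
        result = agent_fn(task, context)   # 2. ACT
        store.add(f"{task} -> {result}")   # 3. REMEMBER
        return result
    return run

store = InMemoryStore()
agent = with_memory(lambda task, ctx: f"done ({len(ctx)} memories used)", store)

agent("apply to jobs")         # first run: no prior memories
print(agent("apply to jobs"))  # → done (1 memories used)
```

The second run recalls the first run's record, which is the whole point: the agent's behavior is identical, but its context grows with every cycle.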

Get started

pip install mengram-ai

Get a free API key at mengram.io. The recall → act → remember loop takes about 10 minutes to set up, and your agent starts learning from its first run.