Migration Guide
Migrate from Mem0 to AgentBay
Switch in under 10 minutes. This guide maps every Mem0 API call to its AgentBay equivalent.
Step 1: Install
Replace the Mem0 package:
```shell
# Remove Mem0
pip uninstall mem0ai

# Install AgentBay
pip install agentbay
```
Step 2: Initialize
Mem0 (before)
```python
from mem0 import Memory

m = Memory()
# or with API key:
m = Memory.from_config({
    "api_key": "m0-xxx"
})
```

AgentBay (after)
```python
from agentbay import AgentBay

ab = AgentBay()  # local mode
# or with API key:
ab = AgentBay(
    api_key="ab_live_xxx",
    project_id="my-project"
)
```

AgentBay with no arguments starts in local mode (SQLite). No signup is needed to get started.
Step 3: Store Memories
Mem0
```python
m.add(
    "The user prefers dark mode",
    user_id="user1",
    metadata={"category": "preference"}
)
```

AgentBay
```python
ab.memory.store(
    title="User prefers dark mode",
    content="The user prefers dark mode",
    type="PATTERN",
    tags=["preference", "ui"]
)
```

AgentBay separates title from content for better search. The type field (PATTERN, PITFALL, DECISION, etc.) helps organize knowledge.
Step 4: Recall Memories
Mem0
```python
results = m.search(
    "dark mode",
    user_id="user1"
)
for r in results:
    print(r["memory"])
```

AgentBay
```python
results = ab.memory.recall(
    "dark mode"
)
for entry in results.entries:
    print(f"{entry.title}: {entry.content}")
    print(f"  confidence: {entry.confidence}")
```

AgentBay recall returns confidence scores (computed at query time with decay), search-strategy metadata, and token counts for context-window budgeting.
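Query-time decay can be pictured with a simple half-life model. This is only a sketch of the idea: the exact formula, the 90-day half-life, and the `decayed_confidence` helper are illustrative assumptions, not AgentBay's documented internals.

```python
from datetime import datetime, timedelta

def decayed_confidence(base: float, last_verified: datetime,
                       now: datetime, half_life_days: float = 90.0) -> float:
    """Exponential decay: confidence halves every half_life_days
    since the entry was last verified or used."""
    age_days = (now - last_verified).total_seconds() / 86400
    return base * 0.5 ** (age_days / half_life_days)

now = datetime(2025, 1, 1)
fresh = decayed_confidence(0.9, now - timedelta(days=0), now)    # 0.9
stale = decayed_confidence(0.9, now - timedelta(days=180), now)  # 0.225 after two half-lives
```

Under a model like this, `verify()` resetting the "last verified" timestamp is what keeps frequently confirmed memories strong.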
Step 5: Auto-Memory Chat
Mem0
```python
# Mem0 doesn't have a built-in
# chat wrapper. You must manually:
# 1. Search for relevant memories
# 2. Inject into system prompt
# 3. Call your LLM
# 4. Store the response
```
AgentBay
```python
reply = ab.brain.chat(
    "How does auth work?",
    provider="openai",
    model="gpt-4o",
    provider_api_key="sk-..."
)
# Auto: recall → inject → LLM → store
print(reply.message)
print(f"Used {len(reply.memories_used)} memories")
```

Full API Mapping
| Mem0 | AgentBay | Notes |
|---|---|---|
| m.add(text, user_id) | ab.memory.store(title, content, type) | title + content for better search |
| m.search(query, user_id) | ab.memory.recall(query) | Returns confidence + strategies |
| m.get_all(user_id) | ab.memory.health() | Health stats, not raw dump |
| m.delete(memory_id) | ab.memory.forget(entry_id) | Soft delete (recoverable) |
| m.update(memory_id, data) | ab.memory.store(...) | Auto-dedup by title+type |
| m.history(memory_id) | Brain time machine | Snapshots + rollback |
| (no equivalent) | ab.memory.verify(id) | Reset confidence decay |
| (no equivalent) | ab.memory.compact() | Archive stale, merge dupes |
| (no equivalent) | ab.brain.chat(msg) | Auto-memory LLM wrapper |
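If you want to script the migration of existing `m.add(...)` calls, the argument translation can be sketched as a pure function. The helper name and the title/tag heuristics below are illustrative assumptions, not part of either API:

```python
def mem0_add_to_store(text, user_id=None, metadata=None):
    """Translate Mem0 m.add(...) arguments into ab.memory.store(...) kwargs."""
    metadata = metadata or {}
    # Mem0 has no title; truncate the content as a stand-in (heuristic).
    title = text if len(text) <= 60 else text[:57] + "..."
    # Fold string metadata values and the user scope into tags (heuristic).
    tags = [v for v in metadata.values() if isinstance(v, str)]
    if user_id:
        tags.append(f"user:{user_id}")
    return {"title": title, "content": text, "type": "PATTERN", "tags": tags}

kwargs = mem0_add_to_store("The user prefers dark mode",
                           user_id="user1",
                           metadata={"category": "preference"})
# then: ab.memory.store(**kwargs)
```

In a real migration you would likely pick the `type` per memory (PATTERN, DECISION, etc.) rather than defaulting everything to one value.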
What You Gain by Switching
- Better search: reciprocal rank fusion (RRF) across four strategies instead of vector-only
- Confidence decay: Old memories fade, frequently-used ones stay strong
- Poison detection: 20+ patterns block prompt injection in stored memories
- Local mode: Work offline, sync to cloud when ready
- brain.chat(): Auto-memory wrapping for any LLM in one line
- Memory tiers: Working (24h) → episodic → semantic → procedural (365d)
- Multi-agent teams: Shared knowledge with role-based permissions
- 117 MCP tools: Works with Claude Code, Cursor, any MCP client
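The RRF fusion mentioned above combines the rankings from several search strategies into one list. A minimal sketch (the strategy names and the conventional k=60 constant are illustrative; AgentBay's actual strategies and weighting are not specified here):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-strategy rankings of memory IDs:
vector  = ["a", "b", "c"]
keyword = ["b", "a", "d"]
tags    = ["b", "d"]
recency = ["c", "b"]
fused = rrf_fuse([vector, keyword, tags, recency])
# "b" wins: it ranks highly in all four lists.
```

The appeal of RRF is that it needs no score normalization across strategies, only ranks, which is why it works well for fusing heterogeneous signals like vector similarity and keyword match.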
Questions about migrating?
Start with local mode to test without commitment.