Python SDK Reference
pip install agentbay
20 LLM providers, local mode, brain.chat(), teams, projects, sync.
AgentBay()
ab = AgentBay(api_key=None, project_id=None, base_url=None, local=None, local_path=None)
Create an AgentBay client. No arguments = local mode (SQLite). Pass api_key for cloud mode.
Parameters:
| api_key? | str | API key (ab_live_...). Falls back to AGENTBAY_API_KEY env var. |
| project_id? | str | Project to scope memory operations. |
| base_url? | str | API base URL. Default: https://www.aiagentsbay.com |
| local? | bool | Force local mode even with api_key set. |
| local_path? | str | Path for SQLite database. Default: ~/.agentbay/local.db |
Example:
# Local mode
ab = AgentBay()

# Cloud mode
ab = AgentBay(api_key="ab_live_...", project_id="proj-123")

# From environment
import os
os.environ["AGENTBAY_API_KEY"] = "ab_live_..."
ab = AgentBay(project_id="proj-123")
memory.store()
result = ab.memory.store(title, content, type, tags=[], tier='semantic', ...)
Store a new memory entry. Auto-deduplicates by title + type.
Parameters:
| title | str | Short title for the entry. |
| content | str | Full content of the memory. |
| type | str | PATTERN, PITFALL, ARCHITECTURE, DEPENDENCY, DECISION, PERFORMANCE, CONTEXT, TEST_INSIGHT |
| tags? | list[str] | Tags for categorical search. |
| tier? | str | working (24h TTL), episodic, semantic, procedural |
| file_paths? | list[str] | Related file paths for code context. |
| aliases? | list[str] | Search phrases that should map to this entry. |
Returns: StoreResult { id, deduplicated, conflict_ids, poison_blocked, token_count }
memory.recall()
result = ab.memory.recall(query, limit=5, rerank=False, expand_query=True, ...)
Recall memories matching a query. Combines four retrieval strategies with reciprocal rank fusion (RRF).
Parameters:
| query | str | Search query. |
| limit? | int | Max results. Default: 5 |
| rerank? | bool | Enable cross-encoder reranking (requires Voyage API key). |
| expand_query? | bool | Expand with synonyms. Default: True |
| graph_hops? | int | Traverse entity graph N hops. Default: 0 |
| tier? | str|list | Filter by memory tier. |
| type? | str | Filter by knowledge type. |
| tags? | list[str] | Filter by tags. |
| fast? | bool | Skip vector search for speed. |
Returns: RecallResult { entries: [MemoryEntry], total_tokens, strategies, query_type }
memory.verify()
ab.memory.verify(entry_id)
Verify a memory entry — resets confidence decay. Call when an entry was helpful.
Parameters:
| entry_id | str | ID of the entry to verify. |
memory.forget()
ab.memory.forget(entry_id)
Soft-delete a memory entry. Recoverable via time machine.
Parameters:
| entry_id | str | ID of the entry to forget. |
memory.health()
stats = ab.memory.health()
Get memory health statistics: entry counts, tier breakdown, avg confidence.
Returns: HealthResult { total_entries, by_tier, by_type, avg_confidence, stale_count }
memory.compact()
result = ab.memory.compact(dry_run=False)
Compact memory: archive stale entries, merge duplicates, expire TTLs.
Returns: CompactResult { archived, merged, ttl_expired, tokens_saved }
brain.chat()
reply = ab.brain.chat(message, provider='openai', model='gpt-4o', provider_api_key='...', ...)
Send a message with auto-memory: recalls relevant memories, injects them into the context, calls the LLM, and stores the exchange.
Parameters:
| message | str | User message. |
| provider? | str | LLM provider: openai, anthropic, or custom. |
| model? | str | Model name. |
| provider_api_key | str | API key for the LLM provider. |
| auto_recall? | bool | Auto-recall before LLM call. Default: True |
| auto_store? | bool | Auto-store exchange after. Default: True |
| recall_limit? | int | Max memories to inject. Default: 3 |
| system_prompt? | str | Custom system prompt. |
Returns: ChatResult { message, memories_used, memories_stored, provider, model, tokens_used }
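A chat sketch in cloud mode, using the default provider and model from the signature above. It assumes AGENTBAY_API_KEY and OPENAI_API_KEY are set in the environment; the project ID is illustrative:

```python
import os
from agentbay import AgentBay

ab = AgentBay(
    api_key=os.environ["AGENTBAY_API_KEY"],
    project_id="proj-123",
)

reply = ab.brain.chat(
    "How should we handle rate limits on the payments API?",
    provider="openai",
    model="gpt-4o",
    provider_api_key=os.environ["OPENAI_API_KEY"],
    recall_limit=3,
)

print(reply.message)
print("memories used:", reply.memories_used)
```

Set auto_recall=False or auto_store=False to opt out of either side of the memory loop for a single call.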
brain.chat() supports 20 LLM providers out of the box. Auto-detected from model name: