TypeScript SDK Reference

```bash
npm install agentbay
```

Full TypeScript types, async/await, local mode backed by better-sqlite3, and brain.chat() with OpenAI/Anthropic support.
new AgentBay(config?)

```typescript
const ab = new AgentBay({
  apiKey?: string,      // ab_live_... (env: AGENTBAY_API_KEY)
  projectId?: string,   // project to scope operations
  baseUrl?: string,     // default: https://www.aiagentsbay.com
  local?: boolean,      // force local mode
  localPath?: string,   // default: ~/.agentbay/local.db
  timeout?: number,     // request timeout in ms (default: 30000)
  maxRetries?: number,  // retry attempts (default: 3)
});
```

Create an AgentBay client. With no arguments, the client auto-detects local mode when no API key is found.
Example:

```typescript
// Local mode (zero signup)
const ab = new AgentBay();

// Cloud mode
const ab = new AgentBay({ apiKey: 'ab_live_...', projectId: 'my-project' });

// From saved credentials
const ab = AgentBay.fromSaved();
```

ab.brain(defaults?)
```typescript
const brain = ab.brain({ model: 'gpt-4o', providerApiKey: '...' });
```

Create a Brain instance for auto-memory chat. Defaults apply to every chat() call.

Returns: Brain
ab.local

```typescript
const isLocal: boolean = ab.local;
```

True if running in local mode (SQLite), false if cloud.
AgentBay.fromSaved()

```typescript
const ab = AgentBay.fromSaved(overrides?);
```

Load saved credentials from ~/.agentbay/credentials.json.
memory.recall(query, options?)

```typescript
const result: RecallResult = await ab.memory.recall('auth', { limit: 5 });
```

Recall memories matching a query. Uses RRF fusion in cloud mode, FTS5 + tags locally.
Parameters:

| Name | Type | Description |
| --- | --- | --- |
| query | string | Search query. |
| limit? | number | Max results. Default: 5 |
| rerank? | boolean | Cross-encoder reranking (cloud only). |
| expandQuery? | boolean | Synonym expansion. Default: true |
| graphHops? | number | Entity graph traversal depth. |
| fast? | boolean | Skip vector search. |
| tier? | MemoryTier \| MemoryTier[] | Filter by tier. |
| type? | string | Filter by knowledge type. |
| tags? | string[] | Filter by tags. |
| scope? | 'project' \| 'team' \| 'all' | Search scope. |
| resolution? | 'titles' \| 'summaries' \| 'full' | Detail level. |
Returns: RecallResult { entries: MemoryEntry[], totalTokens, strategies, queryType, adaptiveWeightsUsed }
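Putting the options together, a usage sketch in local mode — the query, tier, and tag values here are illustrative, not part of the API:

```typescript
import { AgentBay } from 'agentbay';
import type { RecallResult } from 'agentbay';

const ab = new AgentBay(); // local mode

// Fast, tag-scoped lookup: skip vector search, return summaries only.
const result: RecallResult = await ab.memory.recall('token refresh', {
  limit: 10,
  tier: 'semantic',
  tags: ['auth'],
  fast: true,
  resolution: 'summaries',
});

for (const entry of result.entries) {
  console.log(`[${entry.type}] ${entry.title} (score: ${entry.score})`);
}
console.log(`~${result.totalTokens} tokens across ${result.entries.length} entries`);
```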
memory.store(options)

```typescript
const result: StoreResult = await ab.memory.store({ title, content, type });
```

Store a new memory entry. Auto-deduplicates by title + type + project.
Parameters:

| Name | Type | Description |
| --- | --- | --- |
| title | string | Short title. |
| content | string | Full content. |
| type | KnowledgeType \| string | Entry type. |
| tier? | MemoryTier | working, episodic, semantic, or procedural. |
| tags? | string[] | Tags for categorical search. |
| filePaths? | string[] | Related file paths. |
| aliases? | string[] | Search aliases. |
| confidence? | number | Initial confidence, 0-1. |
| source? | string | human, agent, or auto. |
Returns: StoreResult { id, deduplicated, conflictIds, poisonBlocked, tokenCount }
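A fuller store sketch — the title, content, and tags are illustrative. Because dedup keys on title + type + project, re-running this is safe:

```typescript
import { AgentBay } from 'agentbay';

const ab = new AgentBay(); // local mode; pass an apiKey for cloud

const result = await ab.memory.store({
  title: 'JWT refresh flow',
  content: 'Access tokens expire after 15 minutes; refresh via POST /auth/refresh.',
  type: 'PATTERN',
  tier: 'semantic',
  tags: ['auth', 'jwt'],
  confidence: 0.9,
  source: 'human',
});

if (result.deduplicated) {
  console.log(`Already stored as ${result.id}`);
}
```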
memory.verify(entryId)

```typescript
await ab.memory.verify('entry-123');
```

Verify an entry, resetting its confidence decay.
memory.forget(entryId)

```typescript
await ab.memory.forget('entry-123');
```

Soft-delete an entry.
memory.health()

```typescript
const stats: HealthResult = await ab.memory.health();
```

Get memory health statistics.

Returns: HealthResult { totalEntries, byTier, byType, avgConfidence }
memory.compact()

```typescript
const result: CompactResult = await ab.memory.compact();
```

Compact memory: archive stale entries, merge duplicates, and expire TTLs.

Returns: CompactResult { archived, merged, ttlExpired, tokensSaved }
memory.link(sourceId, targetId, type, strength?)

```typescript
await ab.memory.link('id-1', 'id-2', 'DEPENDS_ON', 0.8);
```

Link two entries with a typed relationship (cloud only).
Parameters:

| Name | Type | Description |
| --- | --- | --- |
| sourceId | string | Source entry ID. |
| targetId | string | Target entry ID. |
| type | RelationType | RELATES_TO, CONTRADICTS, DEPENDS_ON, CAUSED_BY, or SUPERSEDES. |
| strength? | number | 0-1. Default: 1.0 |
memory.graph(rootId?, depth?)

```typescript
const graph: GraphData = await ab.memory.graph('entry-1', 2);
```

Get the knowledge graph (nodes + edges).

Returns: GraphData { nodes: GraphNode[], edges: GraphRelation[] }
brain.chat(message, options?)

```typescript
const reply: ChatResult = await brain.chat('How does auth work?', {
  provider: 'openai',
  model: 'gpt-4o',
  providerApiKey: process.env.OPENAI_API_KEY,
});
```

Auto-memory chat: recall → inject → LLM → store. Maintains conversation history (last 20 messages).
Parameters:

| Name | Type | Description |
| --- | --- | --- |
| message | string | User message. |
| provider? | string | openai, anthropic, or custom. Auto-detected from model. |
| model? | string | Model name. Default: gpt-4o |
| providerApiKey | string | LLM provider API key. |
| autoRecall? | boolean | Recall before the LLM call. Default: true |
| autoStore? | boolean | Store the exchange afterwards. Default: true |
| recallLimit? | number | Max memories to inject. Default: 3 |
| systemPrompt? | string | Custom system prompt. |
Returns: ChatResult { message, memoriesUsed, memoriesStored, provider, model, tokensUsed }
brain.clearHistory()

```typescript
brain.clearHistory();
```

Clear the conversation history.

brain.getHistory()

```typescript
const messages: ChatMessage[] = brain.getHistory();
```

Get the current conversation history.
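The history helpers combine with chat() into a short session. A sketch assuming OPENAI_API_KEY is set; the messages are illustrative:

```typescript
import { AgentBay } from 'agentbay';

const ab = new AgentBay();
const brain = ab.brain({ model: 'gpt-4o', providerApiKey: process.env.OPENAI_API_KEY });

// Two turns share history (the last 20 messages are kept automatically).
await brain.chat('We use JWT with 15-minute access tokens.');
const reply = await brain.chat('How long do our access tokens last?', {
  recallLimit: 5,
});
console.log(reply.message, `(memories used: ${reply.memoriesUsed})`);

// Inspect the accumulated turns, then reset the session.
console.log(brain.getHistory().length);
brain.clearHistory();
```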
Types

```typescript
import type {
  AgentBayConfig,
  MemoryTier,     // 'working' | 'episodic' | 'semantic' | 'procedural'
  KnowledgeType,  // 'PATTERN' | 'PITFALL' | 'ARCHITECTURE' | ...
  MemoryEntry,    // { id, title, content, type, tier, tags, confidence, score }
  RecallOptions,
  RecallResult,
  StoreOptions,
  StoreResult,
  HealthResult,
  CompactResult,
  RelationType,   // 'RELATES_TO' | 'CONTRADICTS' | 'DEPENDS_ON' | ...
  GraphRelation,
  GraphData,
  ChatMessage,    // { role, content }
  ChatOptions,
  ChatResult,
  Team,
  Project,
} from 'agentbay';
```
Error classes

```typescript
import {
  AgentBayError,   // Base: { message, status, code }
  AuthError,       // 401
  RateLimitError,  // 429, { retryAfter }
  NotFoundError,   // 404
} from 'agentbay';
```

Example:

```typescript
import { AgentBayError, AuthError, RateLimitError } from 'agentbay';

try {
  await ab.memory.recall('test');
} catch (e) {
  if (e instanceof AuthError) {
    console.log('Invalid API key');
  } else if (e instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${e.retryAfter}s`);
  } else if (e instanceof AgentBayError) {
    console.log(`API error ${e.status}: ${e.message}`);
  }
}
```
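The client already retries internally (maxRetries), but long-running jobs may want their own backoff on top of RateLimitError. A hypothetical helper, not part of the SDK: it checks structurally for a numeric retryAfter field, so it works with RateLimitError or any compatible error without importing the class:

```typescript
// Narrow unknown errors to ones carrying a retryAfter hint (in seconds).
function isRateLimited(e: unknown): e is { retryAfter: number } {
  return typeof e === 'object' && e !== null &&
    typeof (e as { retryAfter?: unknown }).retryAfter === 'number';
}

// Retry a call when it is rate limited, waiting as long as the server asked.
// Other errors propagate immediately.
async function withRateLimitRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (!isRateLimited(e) || attempt >= maxRetries) throw e;
      await new Promise((resolve) => setTimeout(resolve, e.retryAfter * 1000));
    }
  }
}
```

Usage: `const result = await withRateLimitRetry(() => ab.memory.recall('auth'));`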