TypeScript SDK v1.0.0

TypeScript SDK Reference

npm install agentbay

Ships with full TypeScript types, async/await APIs, a local mode backed by better-sqlite3, and brain.chat() with OpenAI/Anthropic support.

AgentBay

new AgentBay(config?)

const ab = new AgentBay({
  apiKey?: string,      // ab_live_... (env: AGENTBAY_API_KEY)
  projectId?: string,   // project to scope operations
  baseUrl?: string,     // default: https://www.aiagentsbay.com
  local?: boolean,      // force local mode
  localPath?: string,   // default: ~/.agentbay/local.db
  timeout?: number,     // request timeout ms (default: 30000)
  maxRetries?: number,  // retry attempts (default: 3)
});

Create an AgentBay client. Called with no arguments, the client auto-detects its mode: if no API key is found (in the config or the AGENTBAY_API_KEY environment variable), it falls back to local mode.

Example:

// Local mode (zero signup)
const ab = new AgentBay();

// Cloud mode
const ab = new AgentBay({ apiKey: 'ab_live_...', projectId: 'my-project' });

// From saved credentials
const ab = AgentBay.fromSaved();

ab.brain(defaults?)

const brain = ab.brain({ model: 'gpt-4o', providerApiKey: '...' })

Create a Brain instance for auto-memory chat. Defaults apply to every chat() call.

Returns: Brain

ab.local

const isLocal: boolean = ab.local

True if running in local mode (SQLite), false if cloud.
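Since some options (like rerank) are cloud-only, `ab.local` is useful for choosing recall options per mode. A minimal sketch, assuming a pared-down RecallOptions shape based on the parameter list below; the split between fast and rerank is this example's own heuristic, not SDK behavior:

```typescript
// Pick recall options based on mode. `rerank` is documented as cloud-only,
// so only request it when not running locally.
interface RecallOptionsSketch {
  limit?: number;
  rerank?: boolean;
  fast?: boolean;
}

function recallOptionsFor(isLocal: boolean): RecallOptionsSketch {
  return isLocal
    ? { limit: 5, fast: true }    // local: skip vector search, FTS5 only
    : { limit: 5, rerank: true }; // cloud: enable cross-encoder reranking
}

// Usage: await ab.memory.recall('auth', recallOptionsFor(ab.local));
```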

AgentBay.fromSaved()

const ab = AgentBay.fromSaved(overrides?)

Load saved credentials from ~/.agentbay/credentials.json.

ab.memory

memory.recall(query, options?)

const result: RecallResult = await ab.memory.recall('auth', { limit: 5 })

Recall memories matching a query. Uses RRF (reciprocal rank fusion) in cloud mode and FTS5 plus tag matching in local mode.

Parameters:

query        string                           Search query.
limit?       number                           Max results. Default: 5
rerank?      boolean                          Cross-encoder reranking (cloud only).
expandQuery? boolean                          Synonym expansion. Default: true
graphHops?   number                           Entity graph traversal depth.
fast?        boolean                          Skip vector search.
tier?        MemoryTier | MemoryTier[]        Filter by tier.
type?        string                           Filter by knowledge type.
tags?        string[]                         Filter by tags.
scope?       'project' | 'team' | 'all'       Search scope.
resolution?  'titles' | 'summaries' | 'full'  Detail level.

Returns: RecallResult { entries: MemoryEntry[], totalTokens, strategies, queryType, adaptiveWeightsUsed }
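Recalled entries are typically injected into a prompt. A minimal sketch of that step, assuming a reduced MemoryEntry shape (title/content only) and a crude 4-characters-per-token estimate, which is this example's assumption, not the SDK's counting:

```typescript
// Turn recalled entries into a prompt context block, stopping before a
// token budget is exceeded.
interface EntrySketch { title: string; content: string }

function buildContext(entries: EntrySketch[], maxTokens = 1000): string {
  const parts: string[] = [];
  let used = 0;
  for (const e of entries) {
    const block = `## ${e.title}\n${e.content}`;
    const cost = Math.ceil(block.length / 4); // crude token estimate
    if (used + cost > maxTokens) break;       // budget exhausted: stop
    parts.push(block);
    used += cost;
  }
  return parts.join('\n\n');
}

// Usage:
// const { entries } = await ab.memory.recall('auth', { limit: 5 });
// const context = buildContext(entries, 800);
```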

memory.store(options)

const result: StoreResult = await ab.memory.store({ title, content, type })

Store a new memory entry. Auto-deduplicates by title + type + project.

Parameters:

title        string                  Short title.
content      string                  Full content.
type         KnowledgeType | string  Entry type.
tier?        MemoryTier              working, episodic, semantic, or procedural.
tags?        string[]                Tags for categorical search.
filePaths?   string[]                Related file paths.
aliases?     string[]                Search aliases.
confidence?  number                  Initial confidence, 0-1.
source?      string                  human, agent, or auto.

Returns: StoreResult { id, deduplicated, conflictIds, poisonBlocked, tokenCount }
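The StoreResult flags tell you what actually happened to the entry. A small logging helper as a sketch; field names follow the Returns line above, but the message wording is this example's own:

```typescript
// Summarize a StoreResult for logging. Poison blocking and deduplication
// are checked first since they mean no new entry was created as-is.
interface StoreResultSketch {
  id: string;
  deduplicated: boolean;
  conflictIds: string[];
  poisonBlocked: boolean;
}

function describeStore(r: StoreResultSketch): string {
  if (r.poisonBlocked) return `blocked: entry ${r.id} failed poison checks`;
  if (r.deduplicated) return `deduplicated into existing entry ${r.id}`;
  if (r.conflictIds.length > 0)
    return `stored ${r.id}, conflicts with: ${r.conflictIds.join(', ')}`;
  return `stored ${r.id}`;
}

// Usage:
// const result = await ab.memory.store({ title, content, type });
// console.log(describeStore(result));
```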

memory.verify(entryId)

await ab.memory.verify('entry-123')

Verify an entry; this resets its confidence decay.

memory.forget(entryId)

await ab.memory.forget('entry-123')

Soft-delete an entry.

memory.health()

const stats: HealthResult = await ab.memory.health()

Get memory health statistics.

Returns: HealthResult { totalEntries, byTier, byType, avgConfidence }
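Health statistics pair naturally with compaction. A sketch of a maintenance check; the thresholds here are arbitrary assumptions, not SDK recommendations:

```typescript
// Decide when to run memory.compact(), based on the HealthResult fields
// above: too many entries, or confidence decayed too far on average.
interface HealthSketch { totalEntries: number; avgConfidence: number }

function shouldCompact(h: HealthSketch): boolean {
  return h.totalEntries > 500 || h.avgConfidence < 0.4;
}

// Usage:
// const stats = await ab.memory.health();
// if (shouldCompact(stats)) await ab.memory.compact();
```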

memory.compact()

const result: CompactResult = await ab.memory.compact()

Compact memory: archive stale entries, merge duplicates, and expire TTL-bound entries.

Returns: CompactResult { archived, merged, ttlExpired, tokensSaved }

memory.link(sourceId, targetId, type, strength?)

await ab.memory.link('id-1', 'id-2', 'DEPENDS_ON', 0.8)

Link two entries with a typed relationship (cloud only).

Parameters:

sourceId   string        Source entry ID.
targetId   string        Target entry ID.
type       RelationType  RELATES_TO, CONTRADICTS, DEPENDS_ON, CAUSED_BY, SUPERSEDES.
strength?  number        0-1. Default: 1.0

memory.graph(rootId?, depth?)

const graph: GraphData = await ab.memory.graph('entry-1', 2)

Get the knowledge graph (nodes + edges).

Returns: GraphData { nodes: GraphNode[], edges: GraphRelation[] }
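A common use of GraphData is finding which entries sit within a few hops of a root. A breadth-first sketch; the edge field names (sourceId/targetId) are assumptions here, so check GraphRelation in your installed typings:

```typescript
// Walk GraphData edges breadth-first, treating relations as undirected,
// and collect entry IDs within `depth` hops of a root.
interface EdgeSketch { sourceId: string; targetId: string }
interface GraphSketch { nodes: { id: string }[]; edges: EdgeSketch[] }

function reachable(graph: GraphSketch, rootId: string, depth: number): Set<string> {
  const seen = new Set<string>([rootId]);
  let frontier = [rootId];
  for (let hop = 0; hop < depth; hop++) {
    const next: string[] = [];
    for (const e of graph.edges) {
      // Check both directions of each edge.
      for (const [a, b] of [[e.sourceId, e.targetId], [e.targetId, e.sourceId]]) {
        if (frontier.includes(a) && !seen.has(b)) {
          seen.add(b);
          next.push(b);
        }
      }
    }
    frontier = next;
  }
  return seen;
}
```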

Brain (ab.brain())

brain.chat(message, options?)

const reply: ChatResult = await brain.chat('How does auth work?', {
  provider: 'openai',
  model: 'gpt-4o',
  providerApiKey: process.env.OPENAI_API_KEY,
})

Auto-memory chat: recall → inject → LLM → store. Maintains conversation history (last 20 messages).

Parameters:

message         string   User message.
provider?       string   openai, anthropic, or custom. Auto-detected from model.
model?          string   Model name. Default: gpt-4o
providerApiKey  string   LLM provider API key.
autoRecall?     boolean  Recall before the LLM call. Default: true
autoStore?      boolean  Store the exchange afterwards. Default: true
recallLimit?    number   Max memories to inject. Default: 3
systemPrompt?   string   Custom system prompt.

Returns: ChatResult { message, memoriesUsed, memoriesStored, provider, model, tokensUsed }
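The "last 20 messages" window described above can be sketched as a standalone helper, using the { role, content } ChatMessage shape from Type Exports; the SDK does this trimming internally, so this is only illustration:

```typescript
// Keep only the most recent `max` messages of a conversation history.
interface ChatMessageSketch {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

function trimHistory(history: ChatMessageSketch[], max = 20): ChatMessageSketch[] {
  return history.length <= max ? history : history.slice(history.length - max);
}
```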

brain.clearHistory()

brain.clearHistory()

Clear the conversation history.

brain.getHistory()

const messages: ChatMessage[] = brain.getHistory()

Get current conversation history.

Type Exports

import type {
  AgentBayConfig,
  MemoryTier,       // 'working' | 'episodic' | 'semantic' | 'procedural'
  KnowledgeType,    // 'PATTERN' | 'PITFALL' | 'ARCHITECTURE' | ...
  MemoryEntry,      // { id, title, content, type, tier, tags, confidence, score }
  RecallOptions,
  RecallResult,
  StoreOptions,
  StoreResult,
  HealthResult,
  CompactResult,
  RelationType,     // 'RELATES_TO' | 'CONTRADICTS' | 'DEPENDS_ON' | ...
  GraphRelation,
  GraphData,
  ChatMessage,      // { role, content }
  ChatOptions,
  ChatResult,
  Team,
  Project,
} from 'agentbay';

// Error classes
import {
  AgentBayError,    // Base: { message, status, code }
  AuthError,        // 401
  RateLimitError,   // 429, { retryAfter }
  NotFoundError,    // 404
} from 'agentbay';

Error Handling

import { AgentBayError, AuthError, RateLimitError } from 'agentbay';

try {
  await ab.memory.recall('test');
} catch (e) {
  if (e instanceof AuthError) {
    console.log('Invalid API key');
  } else if (e instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${e.retryAfter}s`);
  } else if (e instanceof AgentBayError) {
    console.log(`API error ${e.status}: ${e.message}`);
  }
}
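Since RateLimitError carries retryAfter, a retry wrapper can honor it. A sketch with a minimal stand-in error class mirroring the shape above; note the client already retries internally (maxRetries), so this pattern is only for logic layered on top:

```typescript
// Retry a call when a rate-limit error is thrown, sleeping for the
// server-suggested retryAfter (seconds) between attempts.
class RateLimitErrorSketch extends Error {
  constructor(public retryAfter: number) { super('rate limited'); }
}

async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  // Sleep is injectable so tests can skip real waiting.
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms)),
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (e) {
      // Rethrow non-rate-limit errors and the final failed attempt.
      if (!(e instanceof RateLimitErrorSketch) || i >= attempts - 1) throw e;
      await sleep(e.retryAfter * 1000);
    }
  }
}

// Usage (with the real RateLimitError in place of the sketch class):
// const result = await withRetry(() => ab.memory.recall('auth'));
```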