AI / LLM Observability

Vercel AI SDK

The Vercel AI SDK integration works with any provider exposed through the AI SDK's unified model interface. Wrap your model with track() and every LLM call is captured automatically.

Installation

```bash
bun add @databuddy/ai ai @ai-sdk/openai
```

Quick Start

```tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

const result = await generateText({
  model: track(openai("gpt-4o")),
  prompt: "Explain quantum computing in simple terms"
});
```

Configuration

createTracker(options)

Create a tracking instance with your configuration:

```tsx
import { createTracker } from "@databuddy/ai/vercel";

const { track, transport } = createTracker({
  // Required: API key for authentication
  apiKey: process.env.DATABUDDY_API_KEY,

  // Optional: custom API endpoint
  apiUrl: "https://basket.databuddy.cc/llm",

  // Optional: compute token costs (default: true)
  computeCosts: true,

  // Optional: don't capture message content (default: false)
  privacyMode: false,

  // Optional: max content size in bytes (default: 1 MB)
  maxContentSize: 1_048_576,

  // Optional: success callback
  onSuccess: (call) => console.log("AI call completed:", call.traceId),

  // Optional: error callback
  onError: (call) => console.error("AI call failed:", call.error)
});
```

Options Reference

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | DATABUDDY_API_KEY env var | Required. API key for authentication |
| apiUrl | string | https://basket.databuddy.cc/llm | API endpoint (or DATABUDDY_API_URL env var) |
| transport | Transport | HTTP transport | Custom transport function |
| computeCosts | boolean | true | Compute token costs using TokenLens |
| privacyMode | boolean | false | Don't capture input/output content |
| maxContentSize | number | 1048576 (1 MB) | Max content size in bytes |
| onSuccess | (call) => void | - | Callback on successful calls |
| onError | (call) => void | - | Callback on failed calls |
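
Because apiKey falls back to the DATABUDDY_API_KEY environment variable and apiUrl to DATABUDDY_API_URL, a tracker can be created with no explicit options. A minimal sketch, assuming both variables are set in your environment (the empty options object is an assumption based on the defaults above):

```tsx
import { createTracker } from "@databuddy/ai/vercel";

// apiKey and apiUrl are read from DATABUDDY_API_KEY / DATABUDDY_API_URL;
// all other options keep their documented defaults (assumption)
const { track } = createTracker({});
```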

Tracking Models

Basic Usage

```tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { generateText, streamText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

// Track OpenAI
const result1 = await generateText({
  model: track(openai("gpt-4o")),
  prompt: "Hello!"
});

// Track Anthropic
const result2 = await generateText({
  model: track(anthropic("claude-sonnet-4-20250514")),
  prompt: "Hello!"
});

// Streaming works too
const stream = await streamText({
  model: track(openai("gpt-4o-mini")),
  prompt: "Write a poem"
});
```

Per-Call Options

Override options for specific calls:

```tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

// Enable privacy mode for sensitive calls
const result = await generateText({
  model: track(openai("gpt-4o"), {
    privacyMode: true,
    traceId: "custom-trace-123"
  }),
  prompt: "Process this sensitive data..."
});
```

Track Options

| Option | Type | Description |
| --- | --- | --- |
| traceId | string | Custom trace ID to link related calls |
| transport | Transport | Override transport for this call |
| computeCosts | boolean | Override cost computation |
| privacyMode | boolean | Override privacy mode |
| onSuccess | (call) => void | Override success callback |
| onError | (call) => void | Override error callback |
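
Per-call options compose with the tracker-level defaults. A sketch combining several overrides on a single call, continuing the snippet above; the trace ID here is a hypothetical session identifier:

```tsx
const result = await generateText({
  model: track(openai("gpt-4o"), {
    traceId: "checkout-session-42", // hypothetical ID linking this flow's calls
    computeCosts: false,            // skip cost computation for this call only
    onSuccess: (call) => console.log("traced:", call.traceId)
  }),
  prompt: "Summarize the order"
});
```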

Tool Tracking

Tools are automatically tracked when used:

```tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

const result = await generateText({
  model: track(openai("gpt-4o")),
  prompt: "What's the weather in London?",
  tools: {
    getWeather: tool({
      description: "Get current weather",
      parameters: z.object({
        city: z.string()
      }),
      execute: async ({ city }) => {
        return { temperature: 15, condition: "cloudy" };
      }
    })
  }
});

// Tracked data includes:
// tools: {
//   callCount: 1,
//   resultCount: 1,
//   calledTools: ["getWeather"],
//   availableTools: ["getWeather"]
// }
```

Streaming

Streaming responses are tracked when the stream completes:

```tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

const result = await streamText({
  model: track(openai("gpt-4o")),
  prompt: "Write a story"
});

// Stream the response
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Usage data is tracked when the stream completes
```
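
Because tracking fires only once the stream finishes, the onSuccess callback is a convenient place to observe final usage. A sketch reusing the imports and callback shape from the examples above:

```tsx
const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY,
  onSuccess: (call) =>
    console.log("stream finished:", call.usage.totalTokens, "tokens")
});

const result = await streamText({
  model: track(openai("gpt-4o")),
  prompt: "Write a story"
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
// onSuccess runs after the final chunk, once usage is known
```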

Error Tracking

Errors are automatically captured:

```tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY,
  onError: (call) => {
    console.error("AI call failed:", {
      model: call.model,
      error: call.error?.message,
      durationMs: call.durationMs
    });
  }
});

try {
  await generateText({
    model: track(openai("gpt-4o")),
    prompt: "..."
  });
} catch (error) {
  // The failed call is tracked automatically, and your onError callback runs
}
```

Privacy Mode

Enable privacy mode to track usage without capturing content:

```tsx
import { createTracker } from "@databuddy/ai/vercel";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY,
  privacyMode: true // Don't capture prompts/responses
});

// Only usage, costs, and metadata are tracked;
// input: [] and output: [] appear in the logged data
```

Custom Transport

Use a custom transport for logging to other destinations:

```tsx
import { createTracker, httpTransport, type LLMCall } from "@databuddy/ai/vercel";

// Custom transport that logs locally and sends to the API
const customTransport = async (call: LLMCall) => {
  console.log("AI call:", {
    model: call.model,
    tokens: call.usage.totalTokens,
    cost: call.cost.totalCostUSD
  });

  // Also send to Databuddy
  await httpTransport("https://basket.databuddy.cc/llm", "your-api-key")(call);
};

const { track } = createTracker({
  transport: customTransport
});
```
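
A transport can also discard calls entirely, which is useful in tests. A minimal sketch, assuming a transport is just a function that receives the LLMCall, as in the example above:

```tsx
import { createTracker, type LLMCall } from "@databuddy/ai/vercel";

// No-op transport: accept each call and drop it (e.g., in unit tests)
const noopTransport = async (_call: LLMCall) => {};

const { track } = createTracker({ transport: noopTransport });
```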

Trace IDs

Link related calls with trace IDs:

```tsx
import { createTracker, createTraceId } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

// Generate a trace ID for a conversation
const traceId = createTraceId();

// All calls in this conversation share the trace ID
const result1 = await generateText({
  model: track(openai("gpt-4o"), { traceId }),
  prompt: "What is 2+2?"
});

const result2 = await generateText({
  model: track(openai("gpt-4o"), { traceId }),
  prompt: "And what is that times 3?"
});
```
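
A trace ID does not have to come from createTraceId: any stable string, such as a session or request ID, works, as shown with custom-trace-123 in Per-Call Options.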

Supported Providers

The SDK works with any Vercel AI SDK provider:

| Provider | Package | Example |
| --- | --- | --- |
| OpenAI | @ai-sdk/openai | openai("gpt-4o") |
| Anthropic | @ai-sdk/anthropic | anthropic("claude-sonnet-4-20250514") |
| Google | @ai-sdk/google | google("gemini-1.5-pro") |
| Mistral | @ai-sdk/mistral | mistral("mistral-large") |
| Cohere | @ai-sdk/cohere | cohere("command-r-plus") |
| AWS Bedrock | @ai-sdk/amazon-bedrock | bedrock("anthropic.claude-3") |
| Azure | @ai-sdk/azure | azure("gpt-4") |
| Groq | @ai-sdk/groq | groq("mixtral-8x7b") |

TypeScript Types

```tsx
import {
  createTracker,
  httpTransport,
  createTraceId,
  type CallOptions,
  type Cost,
  type ErrorInfo,
  type LLMCall,
  type Message,
  type TrackerOptions,
  type Transport,
  type Usage
} from "@databuddy/ai/vercel";
```
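
These exports let you type your own handlers. A sketch of a typed success callback, using only LLMCall fields that appear in the examples above:

```tsx
import { createTracker, type LLMCall } from "@databuddy/ai/vercel";

// Log the fields shown throughout this guide
const logCall = (call: LLMCall) => {
  console.log({
    model: call.model,
    tokens: call.usage.totalTokens,
    costUSD: call.cost.totalCostUSD,
    durationMs: call.durationMs,
    traceId: call.traceId
  });
};

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY,
  onSuccess: logCall
});
```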
