AI / LLM Observability

The @databuddy/ai package provides LLM observability for your AI applications. Track token usage, costs, latency, tool calls, and full message history across all major AI providers.

Supported Integrations

- Vercel AI SDK (@databuddy/ai/vercel)
- OpenAI SDK (@databuddy/ai/openai)
- Anthropic SDK (@databuddy/ai/anthropic)

Installation

bash
bun add @databuddy/ai

Install the provider SDK you're using:

bash
# For Vercel AI SDK
bun add ai @ai-sdk/openai

# For OpenAI SDK directly
bun add openai

# For Anthropic SDK directly
bun add @anthropic-ai/sdk

Quick Start

tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

const result = await generateText({
  model: track(openai("gpt-4o")),
  prompt: "Explain quantum computing"
});

OpenAI SDK

tsx
import { OpenAI } from "@databuddy/ai/openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY
  }
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
});

Anthropic SDK

tsx
import { Anthropic } from "@databuddy/ai/anthropic";

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY
  }
});

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }]
});

What Gets Tracked

Every LLM call automatically captures:

Token Usage

json
{
  "inputTokens": 150,
  "outputTokens": 500,
  "totalTokens": 650,
  "cachedInputTokens": 50,
  "reasoningTokens": 100,
  "webSearchCount": 2
}

Cost Breakdown

Costs are computed automatically using TokenLens pricing data:

json
{
  "inputCostUSD": 0.00075,
  "outputCostUSD": 0.0025,
  "totalCostUSD": 0.00325
}
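
The arithmetic behind these fields is simple: token count divided by one million, times the model's per-million-token rate. A minimal sketch of that calculation; the rates below are placeholders chosen to reproduce the example above, not TokenLens's actual pricing data:

tsx
// Hypothetical per-million-token rates, for illustration only.
const INPUT_USD_PER_MTOK = 5;
const OUTPUT_USD_PER_MTOK = 5;

function computeCostUSD(inputTokens: number, outputTokens: number) {
  const inputCostUSD = (inputTokens / 1_000_000) * INPUT_USD_PER_MTOK;
  const outputCostUSD = (outputTokens / 1_000_000) * OUTPUT_USD_PER_MTOK;
  return { inputCostUSD, outputCostUSD, totalCostUSD: inputCostUSD + outputCostUSD };
}

// computeCostUSD(150, 500)
// => { inputCostUSD: 0.00075, outputCostUSD: 0.0025, totalCostUSD: 0.00325 }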

Tool Calls

json
{
  "callCount": 2,
  "resultCount": 2,
  "calledTools": ["get_weather", "search_web"],
  "availableTools": ["get_weather", "search_web", "calculate"]
}
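
For example, a tool-enabled call through the Vercel AI SDK tracker would populate these fields. A minimal sketch; the get_weather tool is a made-up example, and the tool() shape with parameters follows AI SDK v4's signature:

tsx
import { createTracker } from "@databuddy/ai/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

const result = await generateText({
  model: track(openai("gpt-4o")),
  prompt: "What's the weather in Paris?",
  tools: {
    // Hypothetical tool for illustration. Every tool you pass is
    // reported in availableTools; the ones the model actually
    // invokes show up in calledTools.
    get_weather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 18 })
    })
  }
});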

Metadata

json
{
  "timestamp": "2024-01-15T10:30:00.000Z",
  "traceId": "trace_abc123",
  "type": "generate",
  "model": "gpt-4o",
  "provider": "openai",
  "finishReason": "stop",
  "durationMs": 1250,
  "httpStatus": 200
}

Input/Output Content

Unless privacy mode is enabled, the full input and output messages are captured:

json
{
  "input": [
    { "role": "user", "content": "Explain quantum computing" }
  ],
  "output": [
    { "role": "assistant", "content": "Quantum computing uses..." }
  ]
}

Configuration Options

All integrations share common configuration options:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | DATABUDDY_API_KEY env var | API key for authentication |
| apiUrl | string | https://basket.databuddy.cc/llm | Custom API endpoint |
| transport | Transport | HTTP transport | Custom transport function |
| computeCosts | boolean | true | Compute token costs using TokenLens |
| privacyMode | boolean | false | Don't capture input/output content |
| onSuccess | (call) => void | - | Callback on successful calls |
| onError | (call) => void | - | Callback on failed calls |
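
A sketch combining several of these options; the fields read off the callback payload (model, totalCostUSD) are assumptions based on the tracked data described above, not a documented type:

tsx
import { createTracker } from "@databuddy/ai/vercel";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY,
  computeCosts: true,   // default; uses TokenLens pricing data
  privacyMode: false,   // default; set true to drop input/output content
  onSuccess: (call) => {
    // Assumed payload fields, mirroring the metadata and cost
    // sections above.
    console.log(`${call.model} cost $${call.totalCostUSD}`);
  },
  onError: (call) => {
    console.error("LLM call failed", call);
  }
});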

Privacy Mode

Enable privacy mode to track usage without capturing message content:

tsx
const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY,
  privacyMode: true  // Don't capture prompts/responses
});

// Only usage, costs, and metadata are tracked
// input: [] and output: [] in the logged data

Trace IDs

Link related calls together using trace IDs:

tsx
import { createTracker, createTraceId } from "@databuddy/ai/vercel";

const { track } = createTracker({
  apiKey: process.env.DATABUDDY_API_KEY
});

// Generate a trace ID for a conversation
const traceId = createTraceId();

// All calls in this conversation share the trace ID
const result1 = await generateText({
  model: track(openai("gpt-4o"), { traceId }),
  prompt: "What is 2+2?"
});

const result2 = await generateText({
  model: track(openai("gpt-4o"), { traceId }),
  prompt: "And what is that times 3?"
});

Environment Variables

bash
# Required: API key for authentication
DATABUDDY_API_KEY=your-api-key

# Optional: Custom API endpoint
DATABUDDY_API_URL=https://basket.databuddy.cc/llm
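
Since apiKey falls back to the DATABUDDY_API_KEY environment variable (per the configuration table above), the tracker can be created without passing it explicitly. A minimal sketch, assuming the options object accepts being empty:

tsx
import { createTracker } from "@databuddy/ai/vercel";

// apiKey omitted: assumed to fall back to process.env.DATABUDDY_API_KEY,
// with apiUrl falling back to the default endpoint (or DATABUDDY_API_URL).
const { track } = createTracker({});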
