AI / LLM Observability

OpenAI SDK

The OpenAI SDK integration provides a drop-in replacement for the official openai package. Use the same API you already know, with automatic observability built in.

Installation

bash
bun add @databuddy/ai openai
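
If you're not using Bun, the package installs with any Node package manager:

bash
npm install @databuddy/ai openai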

Quick Start

Replace your OpenAI import with the Databuddy version:

tsx
// Before
// import OpenAI from "openai";

// After
import { OpenAI } from "@databuddy/ai/openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY
  }
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
});

Configuration

Constructor Options

The OpenAI class accepts all standard OpenAI options plus a databuddy configuration object:

tsx
import { OpenAI } from "@databuddy/ai/openai";

const client = new OpenAI({
  // Standard OpenAI options
  apiKey: process.env.OPENAI_API_KEY,
  organization: "org-xxx",
  baseURL: "https://api.openai.com/v1",

  // Databuddy options
  databuddy: {
    // Required: API key for authentication
    apiKey: process.env.DATABUDDY_API_KEY,

    // Optional: Custom API endpoint
    apiUrl: "https://basket.databuddy.cc/llm",

    // Optional: Compute token costs (default: true)
    computeCosts: true,

    // Optional: Don't capture message content (default: false)
    privacyMode: false,

    // Optional: Success callback
    onSuccess: (call) => console.log("Call completed:", call.traceId),

    // Optional: Error callback
    onError: (call) => console.error("Call failed:", call.error)
  }
});

Databuddy Options Reference

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | DATABUDDY_API_KEY env var | Required. API key for authentication |
| apiUrl | string | https://api.databuddy.cc/llm | API endpoint (or DATABUDDY_API_URL env var) |
| transport | Transport | HTTP transport | Custom transport function |
| computeCosts | boolean | true | Compute token costs using TokenLens |
| privacyMode | boolean | false | Don't capture input/output content |
| onSuccess | (call) => void | - | Callback on successful calls |
| onError | (call) => void | - | Callback on failed calls |
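
Because apiKey and apiUrl fall back to the DATABUDDY_API_KEY and DATABUDDY_API_URL environment variables, configuration can live entirely in the environment. A minimal sketch, assuming the tracker accepts an empty databuddy object when those variables are set:

tsx
import { OpenAI } from "@databuddy/ai/openai";

// DATABUDDY_API_KEY (and optionally DATABUDDY_API_URL) are read from the
// environment, so no Databuddy options need to be passed inline.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {}
});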

Basic Usage

Chat Completions

tsx
import { OpenAI } from "@databuddy/ai/openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY
  }
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of France?" }
  ]
});

console.log(response.choices[0].message.content);

Streaming

tsx
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    process.stdout.write(content);
  }
}

// Usage is tracked when the stream completes
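
To inspect what was recorded for a streamed call, you can attach a per-call onSuccess callback. A sketch, assuming per-call callbacks also fire for streams once usage has been aggregated:

tsx
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
  databuddy: {
    // Fires after the final chunk, once usage has been aggregated
    onSuccess: (call) => console.log("Streamed tokens:", call.usage.totalTokens)
  }
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}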

Per-Call Options

Override Databuddy options for specific calls:

tsx
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Process sensitive data..." }],

  // Per-call Databuddy options
  databuddy: {
    // Custom trace ID
    traceId: "conversation-123",

    // Enable privacy mode for this call
    privacyMode: true,

    // Disable cost computation
    computeCosts: false,

    // Custom callbacks
    onSuccess: (call) => console.log("Done:", call.durationMs),
    onError: (call) => console.error("Failed:", call.error)
  }
});

Per-Call Options Reference

| Option | Type | Description |
| --- | --- | --- |
| traceId | string | Custom trace ID to link related calls |
| transport | Transport | Override transport for this call |
| computeCosts | boolean | Override cost computation |
| privacyMode | boolean | Override privacy mode |
| onSuccess | (call) => void | Override success callback |
| onError | (call) => void | Override error callback |

Tool Calling

Tools are automatically tracked:

tsx
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in London?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get current weather for a city",
        parameters: {
          type: "object",
          properties: {
            city: { type: "string" }
          },
          required: ["city"]
        }
      }
    }
  ]
});

// Tracked data includes:
// tools: {
//   callCount: 1,
//   calledTools: ["get_weather"],
//   availableTools: ["get_weather"]
// }
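
If the model decides to call a tool, the usual OpenAI round trip applies, and the follow-up call is tracked as well. A sketch of executing the tool and returning its result, where getWeather is a hypothetical helper you would implement yourself:

tsx
const toolCall = response.choices[0].message.tool_calls?.[0];

if (toolCall) {
  // Run your own tool implementation (getWeather is hypothetical)
  const args = JSON.parse(toolCall.function.arguments);
  const weather = await getWeather(args.city);

  // Send the tool result back; this second call is tracked too
  const followUp = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "user", content: "What's the weather in London?" },
      response.choices[0].message,
      { role: "tool", tool_call_id: toolCall.id, content: JSON.stringify(weather) }
    ]
  });
}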

Error Tracking

Errors are automatically captured:

tsx
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY,
    onError: (call) => {
      console.error("AI call failed:", {
        model: call.model,
        error: call.error?.message,
        httpStatus: call.httpStatus,
        durationMs: call.durationMs
      });
    }
  }
});

try {
  await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "..." }]
  });
} catch (error) {
  // Error is logged automatically, plus your onError callback runs
}

Privacy Mode

Enable privacy mode to track usage without capturing message content:

tsx
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY,
    privacyMode: true // Don't capture prompts/responses
  }
});

// Only usage, costs, and metadata are tracked
// input: [] and output: [] in the logged data
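
Because privacyMode can also be set per call, you can default to privacy and opt back in for calls that are known to be safe. A brief sketch:

tsx
// The client above defaults to privacy mode...
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize our public changelog." }],
  // ...but this call contains nothing sensitive, so capture its content
  databuddy: { privacyMode: false }
});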

Trace IDs

Link related calls with trace IDs:

tsx
import { OpenAI, createTraceId } from "@databuddy/ai/openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: { apiKey: process.env.DATABUDDY_API_KEY }
});

// Generate a trace ID for a conversation
const traceId = createTraceId();

// First message
const result1 = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is 2+2?" }],
  databuddy: { traceId }
});

// Follow-up with the same trace ID
const result2 = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What is 2+2?" },
    { role: "assistant", content: result1.choices[0].message.content },
    { role: "user", content: "And what is that times 3?" }
  ],
  databuddy: { traceId }
});
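
For longer conversations, it can help to create the trace ID once and reuse it through a small helper. A sketch; conversationOptions is a hypothetical wrapper, not part of the SDK:

tsx
import { createTraceId } from "@databuddy/ai/openai";

// Hypothetical helper: one trace ID shared by every call in a conversation
function conversationOptions() {
  return { databuddy: { traceId: createTraceId() } };
}

const convo = conversationOptions();

const reply = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is 2+2?" }],
  ...convo
});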

Custom Transport

Use a custom transport to send call data to other destinations:

tsx
import { OpenAI, httpTransport, type OpenAILLMCall } from "@databuddy/ai/openai";

// Custom transport that logs locally and sends to API
const customTransport = async (call: OpenAILLMCall) => {
  console.log("AI call:", {
    model: call.model,
    tokens: call.usage.totalTokens,
    cost: call.cost.totalCostUSD
  });

  // Also send to Databuddy
  await httpTransport("https://api.databuddy.cc/llm", "your-api-key")(call);
};

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    transport: customTransport
  }
});
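
A transport is just an async function, so it can write anywhere. A sketch that appends each call as a line of NDJSON to a local file (the file path is an arbitrary choice):

tsx
import { appendFile } from "node:fs/promises";
import { OpenAI, type OpenAILLMCall } from "@databuddy/ai/openai";

// Append one JSON line per call for offline analysis
const fileTransport = async (call: OpenAILLMCall) => {
  await appendFile("llm-calls.ndjson", JSON.stringify(call) + "\n");
};

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: { transport: fileTransport }
});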

What Gets Tracked

Each call captures:

json
{
  "timestamp": "2024-01-15T10:30:00.000Z",
  "traceId": "01234567-89ab-cdef-0123-456789abcdef",
  "type": "generate",
  "model": "gpt-4o",
  "provider": "openai",
  "finishReason": "stop",
  "input": [
    { "role": "user", "content": [{ "type": "text", "text": "Hello!" }] }
  ],
  "output": [
    { "role": "assistant", "content": [{ "type": "text", "text": "Hi there!" }] }
  ],
  "usage": {
    "inputTokens": 10,
    "outputTokens": 50,
    "totalTokens": 60,
    "webSearchCount": 0
  },
  "cost": {
    "inputCostUSD": 0.00005,
    "outputCostUSD": 0.00075,
    "totalCostUSD": 0.0008
  },
  "tools": {
    "callCount": 0,
    "resultCount": 0,
    "calledTools": [],
    "availableTools": []
  },
  "durationMs": 850,
  "httpStatus": 200
}
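
These are the same fields your callbacks receive, so you can aggregate them in process. A sketch that keeps a running total of spend, assuming onSuccess sees the shape above:

tsx
import { OpenAI } from "@databuddy/ai/openai";

let totalCostUSD = 0;

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  databuddy: {
    apiKey: process.env.DATABUDDY_API_KEY,
    // Accumulate the cost of every successful call in this process
    onSuccess: (call) => {
      totalCostUSD += call.cost.totalCostUSD;
      console.log(`Spent $${totalCostUSD.toFixed(4)} so far`);
    }
  }
});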

TypeScript Types

tsx
import {
  OpenAI,
  createTraceId,
  httpTransport,
  type OpenAICallOptions,
  type OpenAILLMCall,
  type OpenAITrackerOptions,
  type OpenAITransport
} from "@databuddy/ai/openai";
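
The exported types help keep your own helpers in sync with the tracker. For example, typing a custom transport explicitly, assuming OpenAITransport matches the function shape shown under Custom Transport:

tsx
import type { OpenAITransport } from "@databuddy/ai/openai";

// The compiler now checks that the call shape matches the tracker's
const loggingTransport: OpenAITransport = async (call) => {
  console.log(`${call.model}: ${call.usage.totalTokens} tokens`);
};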
