Tracing integrations

OpenTelemetry (OTel)

To set up Braintrust as an OpenTelemetry backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.

Braintrust supports common patterns from OpenLLMetry and popular libraries like the Vercel AI SDK. Behind the scenes, clients point to Braintrust's API as an exporter, which makes it easy to integrate without installing additional libraries or writing custom code. OpenLLMetry supports a range of languages including Python, TypeScript, Java, and Go, so it's an easy way to start logging to Braintrust from many different environments.

Once you set up an OpenTelemetry Protocol (OTLP) exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.

For collectors that use the OpenTelemetry SDK to export traces, set the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

The trace endpoint URL is https://api.braintrust.dev/otel/v1/traces. If your exporter uses signal-specific environment variables, you'll need to set the full path: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces

If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example: OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel
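
As a quick connectivity check, here's a minimal sketch using the OpenTelemetry Python SDK. It assumes the environment variables above are already set: the HTTP OTLP exporter reads the endpoint and headers from the environment, so neither is passed in code. The span name and attribute value are arbitrary.

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
 
# The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT (or the signal-specific
# OTEL_EXPORTER_OTLP_TRACES_ENDPOINT) and OTEL_EXPORTER_OTLP_HEADERS from the
# environment, so no arguments are needed here.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
 
# Emit a throwaway span to verify that traces reach your Braintrust project.
with tracer.start_as_current_span("connectivity-check") as span:
    span.set_attribute("braintrust.input", "hello from OTel")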

The x-bt-parent header sets the trace's parent project or experiment. You can use a prefix like project_id:, project_name:, or experiment_id: here, or pass in a span slug (the result of span.export()) to nest the trace under a span within the parent object.
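
For example, here's a minimal sketch (using the Braintrust Python SDK) that exports a span's slug and uses it as the x-bt-parent value, so that subsequent OTel traces nest under that span. The project and span names are placeholders.

import os
 
import braintrust
 
logger = braintrust.init_logger(project="My Project")
 
# Create the parent span and export its slug.
with logger.start_span(name="parent-task") as span:
    parent_slug = span.export()
 
# OTel traces exported with this header will nest under the span above.
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    f"Authorization=Bearer {os.environ['BRAINTRUST_API_KEY']}, " + f"x-bt-parent={parent_slug}"
)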

Vercel AI SDK

To use the Vercel AI SDK to send telemetry data to Braintrust, set these environment variables in your Next.js app's .env file:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

You can then use the experimental_telemetry option to enable telemetry on supported AI SDK function calls:

import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";
 
const openai = createOpenAI();
 
async function main() {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: "What is 2 + 2?",
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        query: "weather",
        location: "San Francisco",
      },
    },
  });
  console.log(result);
}
 
main();

Traced LLM calls will appear under the Braintrust project or experiment provided in the x-bt-parent header.

Traceloop

To export OTel traces from Traceloop OpenLLMetry to Braintrust, set the following environment variables:

TRACELOOP_BASE_URL=https://api.braintrust.dev/otel
TRACELOOP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

Traces will then appear under the Braintrust project or experiment provided in the x-bt-parent header. For example:

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow
 
Traceloop.init(disable_batch=True)
client = OpenAI()
 
 
@workflow(name="story")
def run_story_stream(client):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a short story about LLM evals."}],
    )
    return completion.choices[0].message.content
 
 
print(run_story_stream(client))

LlamaIndex

To trace LLM calls with LlamaIndex, you can use OpenInference instrumentation (via LlamaIndex's arize_phoenix global handler) to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:

import os
 
import llama_index.core
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    f"Authorization=Bearer {BRAINTRUST_API_KEY}" + f"x-bt-parent=project_id:{PROJECT_ID}"
)
llama_index.core.set_global_handler("arize_phoenix", endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces")

Now traced LLM calls will appear under the provided Braintrust project or experiment. For example:

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
 
messages = [
    ChatMessage(role="system", content="Speak like a pirate. ARRR!"),
    ChatMessage(role="user", content="What do llamas sound like?"),
]
result = OpenAI().chat(messages)
print(result)

Manual tracing

If you want to log LLM calls directly to the OTel endpoint, you can set up a custom OpenTelemetry tracer and add the appropriate attributes to your spans. This gives you fine-grained control over what data gets logged.

Braintrust implements the OpenTelemetry GenAI semantic conventions. When you send traces with these attributes, they are automatically mapped to Braintrust fields.

Attribute | Braintrust Field | Description
gen_ai.prompt | input | User message (string). If you have an array of messages, you'll need to use gen_ai.prompt_json (see below) or set flattened attributes like gen_ai.prompt.0.role or gen_ai.prompt.0.content.
gen_ai.prompt_json | input | A JSON-serialized string containing an array of OpenAI messages.
gen_ai.completion | output | Assistant message (string). If you have an array of messages, you'll need to use gen_ai.completion_json (see below) or set flattened attributes like gen_ai.completion.0.role or gen_ai.completion.0.content.
gen_ai.completion_json | output | A JSON-serialized string containing an array of OpenAI messages.
gen_ai.request.model | metadata.model | The model name (e.g. "gpt-4o")
gen_ai.request.max_tokens | metadata.max_tokens | max_tokens
gen_ai.request.temperature | metadata.temperature | temperature
gen_ai.request.top_p | metadata.top_p | top_p
gen_ai.usage.prompt_tokens | metrics.prompt_tokens | Input tokens
gen_ai.usage.completion_tokens | metrics.completion_tokens | Output tokens

You can also use the braintrust namespace to set fields in Braintrust directly:

Attribute | Braintrust Field | Notes
braintrust.input | input | Typically a single user message (string). If you have an array of messages, use braintrust.input_json instead (see below) or set flattened attributes like braintrust.input.0.role or braintrust.input.0.content.
braintrust.input_json | input | A JSON-serialized string containing an array of OpenAI messages.
braintrust.output | output | Typically a single assistant message (string). If you have an array of messages, use braintrust.output_json instead (see below) or set flattened attributes like braintrust.output.0.role or braintrust.output.0.content.
braintrust.output_json | output | A JSON-serialized string containing an array of OpenAI messages.
braintrust.metadata | metadata | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like braintrust.metadata.model or braintrust.metadata.temperature.
braintrust.metrics | metrics | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like braintrust.metrics.prompt_tokens or braintrust.metrics.completion_tokens.

Here's an example of how to set up manual tracing:

import json
import os
 
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
provider = TracerProvider()
processor = BatchSpanProcessor(
    OTLPSpanExporter(
        endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces",
        headers={"Authorization": f"Bearer {BRAINTRUST_API_KEY}", "x-bt-parent": f"project_id:{PROJECT_ID}"},
    )
)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
 
# Export a span with flattened attribute names.
with tracer.start_as_current_span("GenAI Attributes") as span:
    span.set_attribute("gen_ai.prompt.0.role", "system")
    span.set_attribute("gen_ai.prompt.0.content", "You are a helpful assistant.")
    span.set_attribute("gen_ai.prompt.1.role", "user")
    span.set_attribute("gen_ai.prompt.1.content", "What is the capital of France?")
 
    span.set_attribute("gen_ai.completion.0.role", "assistant")
    span.set_attribute("gen_ai.completion.0.content", "The capital of France is Paris.")
 
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.request.temperature", 0.5)
    span.set_attribute("gen_ai.usage.prompt_tokens", 10)
    span.set_attribute("gen_ai.usage.completion_tokens", 30)
 
# Export a span using JSON-serialized attributes.
with tracer.start_as_current_span("GenAI JSON-Serialized Attributes") as span:
    span.set_attribute(
        "gen_ai.prompt_json",
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of Italy?"},
            ]
        ),
    )
    span.set_attribute(
        "gen_ai.completion_json",
        json.dumps(
            [
                {"role": "assistant", "content": "The capital of Italy is Rome."},
            ]
        ),
    )
 
# Export a span using the `braintrust` namespace.
with tracer.start_as_current_span("Braintrust Attributes") as span:
    span.set_attribute("braintrust.input.0.role", "system")
    span.set_attribute("braintrust.input.0.content", "You are a helpful assistant.")
    span.set_attribute("braintrust.input.1.role", "user")
    span.set_attribute("braintrust.input.1.content", "What is the capital of Libya?")
 
    span.set_attribute("braintrust.output.0.role", "assistant")
    span.set_attribute("braintrust.output.0.content", "The capital of Brazil is Brasilia.")
 
    span.set_attribute("braintrust.metadata.model", "gpt-4o-mini")
    span.set_attribute("braintrust.metadata.country", "Brazil")
    span.set_attribute("braintrust.metrics.prompt_tokens", 10)
    span.set_attribute("braintrust.metrics.completion_tokens", 20)
 
# Export a span using JSON-serialized `braintrust` attributes.
with tracer.start_as_current_span("Braintrust JSON-Serialized Attributes") as span:
    span.set_attribute(
        "braintrust.input_json",
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of Argentina?"},
            ]
        ),
    )
    span.set_attribute(
        "braintrust.output_json",
        json.dumps(
            [
                {"role": "assistant", "content": "The capital of Argentina is Buenos Aires."},
            ]
        ),
    )
    span.set_attribute(
        "braintrust.metadata",
        json.dumps({"model": "gpt-4o-mini", "country": "Argentina"}),
    )
    span.set_attribute(
        "braintrust.metrics",
        json.dumps({"prompt_tokens": 15, "completion_tokens": 45}),
    )

Vercel AI SDK

The Vercel AI SDK is an elegant tool for building AI-powered applications. You can wrap the SDK's models with Braintrust to automatically log your requests.

import { initLogger, wrapAISDKModel } from "braintrust";
import { openai } from "@ai-sdk/openai";
 
const logger = initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const model = wrapAISDKModel(openai.chat("gpt-3.5-turbo"));
 
async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: "messages",
    mode: {
      type: "regular",
    },
    prompt: [
      {
        role: "user",
        content: [{ type: "text", text: "What is the capital of France?" }],
      },
    ],
  });
  console.log(response);
}
 
main();

Instructor

To use Instructor to generate structured outputs, wrap the OpenAI client with both Instructor and Braintrust. It's important to apply Braintrust's wrap_openai first (innermost), because it uses low-level usage information and headers returned by the OpenAI call to log metrics to Braintrust.

import instructor
from openai import OpenAI
from pydantic import BaseModel
 
from braintrust import init_logger, load_prompt, wrap_openai
 
logger = init_logger(project="Your project name")
 
 
# Placeholder response model; replace with your own structured-output schema.
class MyResponseModel(BaseModel):
    answer: str
 
 
def run_prompt(text: str):
    # Replace with your project name and slug
    prompt = load_prompt("Your project name", "Your prompt name")
 
    # wrap_openai will make sure the client tracks usage of the prompt.
    client = instructor.patch(wrap_openai(OpenAI()))
 
    # Render with parameters
    return client.chat.completions.create(**prompt.build(input=text), response_model=MyResponseModel)

LangChain

Trace your LangChain applications by configuring a global LangChain callback handler.

import {
  BraintrustCallbackHandler,
  setGlobalHandler,
} from "@braintrust/langchain-js";
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import { ChatOpenAI } from "@langchain/openai";
import { initLogger } from "braintrust";
 
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const handler = new BraintrustCallbackHandler();
 
setGlobalHandler(handler);
 
async function main() {
  const model = new ChatOpenAI({ modelName: "gpt-4o-mini" });
 
  await model.invoke("What is the capital of France?", {
    callbacks: [new ConsoleCallbackHandler()], // alternatively, you can manually pass the handler here instead of setting the handler globally
  });
}
 
main();

Learn more about LangChain callbacks in their documentation.
