Tracing integrations

OpenTelemetry (OTel)

To set up Braintrust as an OpenTelemetry backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.

Braintrust supports common patterns from OpenLLMetry and popular libraries like the Vercel AI SDK. Behind the scenes, clients can point to Braintrust's API as an exporter, which makes it easy to integrate without installing additional libraries or writing extra code. OpenLLMetry supports a range of languages, including Python, TypeScript, Java, and Go, so it's an easy way to start logging to Braintrust from many different environments.

Once you set up an OpenTelemetry Protocol (OTLP) exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.

For collectors that use the OpenTelemetry SDK to export traces, set the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

The trace endpoint URL is https://api.braintrust.dev/otel/v1/traces. If your exporter uses signal-specific environment variables, you'll need to set the full path: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces
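
If you prefer to configure the exporter in code rather than through environment variables, the same settings can be passed directly to an OTLP exporter. The following is a minimal Python sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http packages are installed:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Point the HTTP/protobuf exporter at Braintrust's signal-specific traces endpoint.
exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers={
        "Authorization": "Bearer <Your API Key>",
        "x-bt-parent": "project_id:<Your Project ID>",
    },
)

# Register a tracer provider that batches spans and exports them to Braintrust.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Spans created through the global tracer are now exported to Braintrust.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("example"):
    pass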

If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example: OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel

The x-bt-parent header sets the trace's parent project or experiment. You can use a prefix like project_id:, project_name:, or experiment_id: here, or pass in a span slug (span.export()) to nest the trace under a span within the parent object.
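
For example, to nest incoming OTel traces under a specific span, you can export a span slug with the Braintrust SDK and use it as the x-bt-parent value. A rough Python sketch, assuming the braintrust package and a project named "My Project":

import braintrust

logger = braintrust.init_logger(project="My Project")

# Create a span and export its slug; any OTel trace sent with this value in the
# x-bt-parent header will be nested under the "pipeline" span.
with logger.start_span(name="pipeline") as span:
    parent_slug = span.export()
    print(f"x-bt-parent={parent_slug}")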

Vercel AI SDK

To use the Vercel AI SDK to send telemetry data to Braintrust, set these environment variables in your Next.js app's .env file:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

You can then use the experimental_telemetry option to enable telemetry on supported AI SDK function calls:

import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";
 
const openai = createOpenAI();
 
async function main() {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: "What is 2 + 2?",
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        query: "weather",
        location: "San Francisco",
      },
    },
  });
  console.log(result);
}
 
main();

Traced LLM calls will appear under the Braintrust project or experiment provided in the x-bt-parent header.

Traceloop

To export OTel traces from Traceloop OpenLLMetry to Braintrust, set the following environment variables:

TRACELOOP_BASE_URL=https://api.braintrust.dev/otel
TRACELOOP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

Traces will then appear under the Braintrust project or experiment provided in the x-bt-parent header.

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow
 
Traceloop.init(disable_batch=True)
client = OpenAI()
 
 
@workflow(name="story")
def run_story_stream(client):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a short story about LLM evals."}],
    )
    return completion.choices[0].message.content
 
 
print(run_story_stream(client))

LlamaIndex

To trace LLM calls with LlamaIndex, you can use the OpenInference LlamaIndexInstrumentor to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:

import os
 
import llama_index.core
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    f"Authorization=Bearer {BRAINTRUST_API_KEY}" + f"x-bt-parent=project_id:{PROJECT_ID}"
)
llama_index.core.set_global_handler("arize_phoenix", endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces")

Now traced LLM calls will appear under the provided Braintrust project or experiment.

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
 
messages = [
    ChatMessage(role="system", content="Speak like a pirate. ARRR!"),
    ChatMessage(role="user", content="What do llamas sound like?"),
]
result = OpenAI().chat(messages)
print(result)

Vercel AI SDK

The Vercel AI SDK is an elegant tool for building AI-powered applications. You can wrap the SDK's model objects with Braintrust to automatically log your requests.

import { initLogger, wrapAISDKModel } from "braintrust";
import { openai } from "@ai-sdk/openai";
 
const logger = initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const model = wrapAISDKModel(openai.chat("gpt-3.5-turbo"));
 
async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: "messages",
    mode: {
      type: "regular",
    },
    prompt: [
      {
        role: "user",
        content: [{ type: "text", text: "What is the capital of France?" }],
      },
    ],
  });
  console.log(response);
}
 
main();

Instructor

To use Instructor to generate structured outputs, you need to wrap the OpenAI client with both Instructor and Braintrust. It's important that you call Braintrust's wrap_openai first, because it uses low-level usage info and headers returned by the OpenAI call to log metrics to Braintrust.

import instructor
from braintrust import init_logger, load_prompt, wrap_openai
from openai import OpenAI
from pydantic import BaseModel

logger = init_logger(project="Your project name")


# Example response model; replace with the schema you want Instructor to extract.
class MyResponseModel(BaseModel):
    answer: str


def run_prompt(text: str):
    # Replace with your project name and slug
    prompt = load_prompt("Your project name", "Your prompt name")

    # wrap_openai will make sure the client tracks usage of the prompt.
    client = instructor.patch(wrap_openai(OpenAI()))

    # Render with parameters
    return client.chat.completions.create(**prompt.build(input=text), response_model=MyResponseModel)

LangChain

To trace LangChain code in Braintrust, you can use the BraintrustTracer callback handler. The callback handler is currently only supported in Python, but if you need support for other languages, please let us know.

To use it, initialize a BraintrustTracer and pass it as a callback handler to the LangChain objects you create.

from braintrust import Eval
from braintrust.wrappers.langchain import BraintrustTracer
from langchain.chains import LLMMathChain
from langchain.chat_models import ChatOpenAI
 
from autoevals import Levenshtein
 
tracer = BraintrustTracer()
 
llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[tracer])
llm_math = LLMMathChain.from_llm(llm, callbacks=[tracer])
 
Eval(
    "Calculator",
    data=[{"input": "1+1", "expected": "2"}],
    task=lambda input: llm_math.invoke(input),
    scores=[Levenshtein],
)
