# Tracing integrations
## OpenTelemetry (OTel)
To set up Braintrust as an OpenTelemetry backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.
Braintrust supports a combination of common patterns from OpenLLMetry and popular libraries like the Vercel AI SDK. Behind the scenes, clients can point to Braintrust's API as an exporter, which makes it easy to integrate without having to install additional libraries or code. OpenLLMetry supports a range of languages including Python, TypeScript, Java, and Go, so it's an easy way to start logging to Braintrust from many different environments.
Once you set up an OpenTelemetry Protocol (OTLP) exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.
For collectors that use the OpenTelemetry SDK to export traces, set the following environment variables:
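For example (with `<Your API Key>` and `<Your Project ID>` as placeholders for your own values):

```
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"
```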
The trace endpoint URL is `https://api.braintrust.dev/otel/v1/traces`. If your exporter uses signal-specific environment variables, you'll need to set the full path:

```
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces
```
If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example:

```
OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel
```
The `x-bt-parent` header sets the trace's parent project or experiment. You can use a prefix like `project_id:`, `project_name:`, or `experiment_id:` here, or pass in a span slug (`span.export()`) to nest the trace under a span within the parent object.
### Vercel AI SDK
To use the Vercel AI SDK to send telemetry data to Braintrust, set these environment variables in your Next.js app's `.env` file:
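```
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_name:<Your Project Name>"
```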
You can then use the `experimental_telemetry` option to enable telemetry on supported AI SDK function calls:
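For example, a minimal sketch using `generateText` (the model and prompt here are illustrative):

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const result = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is 2 + 2?",
  // Enables OpenTelemetry tracing for this call; spans are exported to
  // Braintrust via the OTLP environment variables above.
  experimental_telemetry: { isEnabled: true },
});
```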
Traced LLM calls will appear under the Braintrust project or experiment provided in the `x-bt-parent` header.
### Traceloop
To export OTel traces from Traceloop OpenLLMetry to Braintrust, set the following environment variables:
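For example:

```
TRACELOOP_BASE_URL=https://api.braintrust.dev/otel
TRACELOOP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"
```

Then initialize the SDK as usual; a minimal sketch (the model call is illustrative):

```python
from openai import OpenAI
from traceloop.sdk import Traceloop

# Reads TRACELOOP_BASE_URL and TRACELOOP_HEADERS from the environment.
Traceloop.init(disable_batch=True)

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
```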
Traces will then appear under the Braintrust project or experiment provided in the `x-bt-parent` header.
### LlamaIndex
To trace LLM calls with LlamaIndex, you can use the OpenInference `LlamaIndexInstrumentor` to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:
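A minimal sketch, assuming the `openinference-instrumentation-llama-index` and OTLP HTTP exporter packages are installed (API key and project name are placeholders):

```python
import os

from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Point the OTLP exporter at Braintrust and authenticate.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.braintrust.dev/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "Authorization=Bearer <Your API Key>, x-bt-parent=project_name:<Your Project Name>"
)

# Route all LlamaIndex LLM calls through the instrumentor.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))
LlamaIndexInstrumentor().instrument(tracer_provider=provider)
```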
Now traced LLM calls will appear under the provided Braintrust project or experiment.
### Manual tracing
If you want to log LLM calls directly to the OTel endpoint, you can set up a custom OpenTelemetry tracer and add the appropriate attributes to your spans. This gives you fine-grained control over what data gets logged.
Braintrust implements the OpenTelemetry GenAI semantic conventions. When you send traces with these attributes, they are automatically mapped to Braintrust fields.
| Attribute | Braintrust Field | Description |
| --- | --- | --- |
| `gen_ai.prompt` | `input` | User message (string). If you have an array of messages, use `gen_ai.prompt_json` (see below) or set flattened attributes like `gen_ai.prompt.0.role` and `gen_ai.prompt.0.content`. |
| `gen_ai.prompt_json` | `input` | A JSON-serialized string containing an array of OpenAI messages. |
| `gen_ai.completion` | `output` | Assistant message (string). If you have an array of messages, use `gen_ai.completion_json` (see below) or set flattened attributes like `gen_ai.completion.0.role` and `gen_ai.completion.0.content`. |
| `gen_ai.completion_json` | `output` | A JSON-serialized string containing an array of OpenAI messages. |
| `gen_ai.request.model` | `metadata.model` | The model name (e.g. `"gpt-4o"`). |
| `gen_ai.request.max_tokens` | `metadata.max_tokens` | The `max_tokens` request parameter. |
| `gen_ai.request.temperature` | `metadata.temperature` | The `temperature` request parameter. |
| `gen_ai.request.top_p` | `metadata.top_p` | The `top_p` request parameter. |
| `gen_ai.usage.prompt_tokens` | `metrics.prompt_tokens` | Input token count. |
| `gen_ai.usage.completion_tokens` | `metrics.completion_tokens` | Output token count. |
You can also use the `braintrust` namespace to set fields in Braintrust directly:
| Attribute | Braintrust Field | Notes |
| --- | --- | --- |
| `braintrust.input` | `input` | Typically a single user message (string). If you have an array of messages, use `braintrust.input_json` instead (see below) or set flattened attributes like `braintrust.input.0.role` and `braintrust.input.0.content`. |
| `braintrust.input_json` | `input` | A JSON-serialized string containing an array of OpenAI messages. |
| `braintrust.output` | `output` | Typically a single assistant message (string). If you have an array of messages, use `braintrust.output_json` instead (see below) or set flattened attributes like `braintrust.output.0.role` and `braintrust.output.0.content`. |
| `braintrust.output_json` | `output` | A JSON-serialized string containing an array of OpenAI messages. |
| `braintrust.metadata` | `metadata` | A JSON-serialized dictionary with string keys. Alternatively, use flattened attribute names like `braintrust.metadata.model` or `braintrust.metadata.temperature`. |
| `braintrust.metrics` | `metrics` | A JSON-serialized dictionary with string keys. Alternatively, use flattened attribute names like `braintrust.metrics.prompt_tokens` or `braintrust.metrics.completion_tokens`. |
Here's an example of how to set up manual tracing.
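The sketch below uses the OpenTelemetry Python SDK directly; it assumes `BRAINTRUST_API_KEY` is set in your environment, and the span name, model, and token counts are illustrative:

```python
import json
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Export spans to Braintrust's OTel endpoint, authenticated with your API key.
exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers={
        "Authorization": f"Bearer {os.environ['BRAINTRUST_API_KEY']}",
        "x-bt-parent": "project_name:<Your Project Name>",
    },
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("chat") as span:
    messages = [{"role": "user", "content": "What is 2 + 2?"}]
    span.set_attribute("gen_ai.prompt_json", json.dumps(messages))
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    # ... call your LLM here, then record the response and usage ...
    span.set_attribute(
        "gen_ai.completion_json",
        json.dumps([{"role": "assistant", "content": "4"}]),
    )
    span.set_attribute("gen_ai.usage.prompt_tokens", 9)
    span.set_attribute("gen_ai.usage.completion_tokens", 1)

provider.force_flush()  # make sure spans are exported before the process exits
```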
## Vercel AI SDK
The Vercel AI SDK is an elegant tool for building AI-powered applications. You can wrap the SDK in Braintrust to automatically log your requests.
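For example, a minimal sketch assuming the `wrapAISDKModel` and `initLogger` helpers from the `braintrust` package (the model and prompt are illustrative):

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { initLogger, wrapAISDKModel } from "braintrust";

// Send logs to your Braintrust project.
initLogger({
  projectName: "<Your Project Name>",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

// Wrap the AI SDK model so every request/response pair is logged.
const model = wrapAISDKModel(openai.chat("gpt-4o-mini"));

const { text } = await generateText({
  model,
  prompt: "What is 2 + 2?",
});
console.log(text);
```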
## Instructor
To use Instructor to generate structured outputs, you need to wrap the OpenAI client with both Instructor and Braintrust. It's important that you call Braintrust's `wrap_openai` first, because it uses low-level usage info and headers returned by the OpenAI call to log metrics to Braintrust.
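A minimal sketch (the Pydantic model and prompt are illustrative; note the wrapping order):

```python
import instructor
from braintrust import init_logger, wrap_openai
from openai import OpenAI
from pydantic import BaseModel

init_logger(project="<Your Project Name>")


class User(BaseModel):
    name: str
    age: int


# Wrap with Braintrust first, then Instructor, so Braintrust sees the raw
# OpenAI response (including usage info and headers) before Instructor parses it.
client = instructor.from_openai(wrap_openai(OpenAI()))

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=User,
    messages=[{"role": "user", "content": "John is 30 years old."}],
)
print(user)
```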
## LangChain
Trace your LangChain applications by configuring a global LangChain callback handler.
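For example, a minimal sketch assuming the `braintrust-langchain` package's `BraintrustCallbackHandler` and `set_global_handler` (the chain itself is illustrative):

```python
from braintrust_langchain import BraintrustCallbackHandler, set_global_handler
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Register the handler globally so every chain and LLM call is traced.
handler = BraintrustCallbackHandler()
set_global_handler(handler)

prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")
chain.invoke({"number": "2"})
```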
Learn more about LangChain callbacks in their documentation.