To set up Braintrust as an OpenTelemetry
backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint,
set your API key, and specify a parent project or experiment.
Braintrust supports common patterns from OpenLLMetry as well as popular libraries like the Vercel AI SDK. Behind the scenes, clients can point to Braintrust's API as an exporter, which makes it easy to integrate without installing additional libraries or writing custom code. OpenLLMetry supports a range of languages including Python, TypeScript, Java, and Go, so it's an easy way to start logging to Braintrust from many different environments.
Once you set up an OpenTelemetry Protocol (OTLP) exporter to send traces to Braintrust, we automatically
convert LLM calls into Braintrust LLM spans, which
can be saved as prompts
and evaluated in the playground.
For collectors that use the OpenTelemetry SDK to export traces, set the
following environment variables:
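For example, the two variables an OTLP exporter typically reads might look like the following (the API key and project ID are placeholders; substitute your own values):

```shell
# Placeholders -- substitute your real Braintrust API key and project ID.
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your API key>, x-bt-parent=project_id:<your project id>"
```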
The trace endpoint URL is `https://api.braintrust.dev/otel/v1/traces`. If your exporter
uses signal-specific environment variables, you'll need to set the full path:

```
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces
```
If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example:

```
OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel
```
The `x-bt-parent` header sets the trace's parent project or experiment. You can use
a prefix like `project_id:`, `project_name:`, or `experiment_id:` here, or pass in
a span slug
(`span.export()`) to nest the trace under a span within the parent object.
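As a sketch, any of the prefixed forms can be used as the header value (the project name and IDs below are placeholders):

```shell
# x-bt-parent accepts any of these forms (placeholders shown):
#   project_id:<project uuid>
#   project_name:my-project
#   experiment_id:<experiment uuid>
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your API key>, x-bt-parent=project_name:my-project"
```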
To trace LLM calls with LlamaIndex, you can use the OpenInference LlamaIndexInstrumentor to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:
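A minimal Python sketch of that setup, assuming the `openinference-instrumentation-llama-index` package and the OpenTelemetry SDK are installed (the API key and project name are placeholders):

```python
import os

from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Placeholders -- substitute your real API key and project name.
headers = {
    "Authorization": f"Bearer {os.environ['BRAINTRUST_API_KEY']}",
    "x-bt-parent": "project_name:my-project",
}

# Point the OTLP exporter at Braintrust's trace endpoint.
exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers=headers,
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))

# Instrument LlamaIndex so its LLM calls emit OTel spans to Braintrust.
LlamaIndexInstrumentor().instrument(tracer_provider=provider)
```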
Now traced LLM calls will appear under the provided Braintrust project or experiment.
To use Instructor to generate structured outputs, you need to wrap the
OpenAI client with both Instructor and Braintrust. It's important to call Braintrust's `wrap_openai` first,
because it uses the low-level usage info and headers returned by the OpenAI call to log metrics to Braintrust.
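A sketch of that wrapping order, assuming the `braintrust`, `instructor`, and `openai` packages are installed (the project and model names are placeholders):

```python
import instructor
from braintrust import init_logger, wrap_openai
from openai import OpenAI
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

init_logger(project="my-project")  # placeholder project name

# Wrap with Braintrust first so it sees the raw OpenAI response
# (usage info and headers), then hand the wrapped client to Instructor.
client = instructor.from_openai(wrap_openai(OpenAI()))

user = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    response_model=User,
    messages=[{"role": "user", "content": "Alice is 30 years old."}],
)
```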
To trace LangChain code in Braintrust, you can use the BraintrustTracer callback handler. The callback
handler is currently only supported in Python, but if you need support for other languages, please
let us know.
To use it, initialize a BraintrustTracer and pass it as a callback handler to the LangChain objects
you create.
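For example, a minimal sketch with a chat model (the import path, model name, and prompt are assumptions; check your installed `braintrust` version for the exact module):

```python
# Import path is an assumption -- verify against your braintrust version.
from braintrust.wrappers.langchain import BraintrustTracer
from langchain_openai import ChatOpenAI

tracer = BraintrustTracer()

# Pass the tracer as a callback on any LangChain object you create.
llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[tracer])
llm.invoke("What is 2 + 2?")
```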