While AI provider wrappers automatically log LLM calls, you often need to trace additional application logic like data retrieval, preprocessing, business logic, or tool invocations. Custom tracing lets you capture these operations.
Braintrust SDKs provide tools to trace function execution and capture inputs, outputs, and errors:
- The Python SDK uses the `@traced` decorator to automatically wrap functions
- The TypeScript SDK uses `wrapTraced()` to create traced function wrappers
- The Go SDK uses OpenTelemetry's manual span management with `tracer.Start()` and `span.End()`
All approaches achieve the same result—capturing function-level observability—but with different ergonomics suited to each language’s idioms.
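To make the shared idea concrete, here is a simplified, dependency-free sketch of what a traced wrapper does. This models the behavior described above (capturing a function's inputs, outputs, and errors in a span); it is not the actual SDK implementation, and the `Span` shape here is illustrative:

```typescript
interface Span {
  name: string;
  input: unknown;
  output?: unknown;
  error?: string;
}

// Illustrative in-memory span store standing in for a real tracing backend
const spans: Span[] = [];

// Simplified model of a traced wrapper: record input, output, and errors per call
function wrapTraced<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A): Promise<R> => {
    const span: Span = { name, input: args };
    spans.push(span);
    try {
      const result = await fn(...args);
      span.output = result;
      return result;
    } catch (err) {
      span.error = String(err); // errors are captured, then rethrown
      throw err;
    }
  };
}

const double = wrapTraced("double", async (x: number) => x * 2);
await double(21); // the span now records input [21] and output 42
```

The decorator (Python) and manual-span (Go) styles wrap the same try/catch-and-record pattern in their respective language idioms.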
```typescript
import { initLogger, wrapTraced } from "braintrust";

const logger = initLogger({ projectName: "My Project" });

// Wrap a function to trace it automatically
const fetchUserData = wrapTraced(async function fetchUserData(userId: string) {
  // This function's input (userId) and output (return value) are logged
  const response = await fetch(`/api/users/${userId}`);
  return response.json();
});

// Use the function normally
const userData = await fetchUserData("user-123");
```
The traced function automatically creates a span that records the function's name, its input (the arguments), its output (the return value), and any errors thrown during execution.
Enrich spans with custom metadata and tags to make them easier to filter and analyze. Tags can be applied to any span in a trace, including nested spans, and tags from all spans are aggregated together at the trace level. When you log additional tags to the same span, they are merged (a union) rather than replaced, so you can add contextual tags throughout your application logic.
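The merge behavior described above can be sketched in plain TypeScript. This is a simplified model of how logging tags twice yields a union rather than a replacement; it is not the SDK's internal implementation:

```typescript
// Simplified model of span tag merging: repeated log() calls union the tags
class SpanModel {
  private tags = new Set<string>();

  log(update: { tags?: string[] }): void {
    for (const tag of update.tags ?? []) {
      this.tags.add(tag); // union: duplicates ignored, nothing dropped
    }
  }

  getTags(): string[] {
    return [...this.tags];
  }
}

const span = new SpanModel();
span.log({ tags: ["production"] });
span.log({ tags: ["production", "search"] }); // merged, not replaced
console.log(span.getTags()); // ["production", "search"]
```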
If you pass a non-string value (like an object or array) to the `name` field of a span, your logs will not appear in the UI; they will be hidden due to a schema validation failure. Span names must always be strings, so before passing a value to the `name` parameter in tracing functions, ensure it is a string.
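A simple guard for this, sketched in plain TypeScript (the helper name `toSpanName` is illustrative, not part of the SDK):

```typescript
// Ensure a candidate span name is a string before passing it to a tracing
// function; non-string values are serialized rather than passed through,
// which would otherwise trip schema validation.
function toSpanName(value: unknown): string {
  if (typeof value === "string") return value;
  // JSON.stringify(undefined) returns undefined, so fall back to String()
  return JSON.stringify(value) ?? String(value);
}

console.log(toSpanName("fetch-user"));         // "fetch-user"
console.log(toSpanName({ step: "retrieve" })); // '{"step":"retrieve"}'
```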