Functions

Braintrust functions are atomic, reusable building blocks for executing AI-related logic. They are hosted and executed remotely in a performant serverless environment and are intended for production use. Functions can be invoked through the API, SDK, or UI, and have built-in support for streaming and structured outputs.

There are currently three types of functions in Braintrust:

  • Prompts
    Templated messages to send to an LLM.
  • Tools
    General purpose code that can be invoked by LLMs.
  • Scorers
    Functions for scoring the quality of LLM outputs (a number from 0 to 1); see the sketch after this list.
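
For example, a code-based scorer is simply a function that returns a score between 0 and 1. Below is a minimal TypeScript sketch; the argument shape ({ output, expected }) is an illustrative assumption, not the SDK's exact handler signature.

// Illustrative exact-match scorer: returns 1 when the model output matches
// the expected answer, and 0 otherwise. The ({ output, expected }) argument
// shape is an assumption for this sketch.
const exactMatch = ({
  output,
  expected,
}: {
  output: string;
  expected: string;
}): number => (output.trim() === expected.trim() ? 1 : 0);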

Composability

Functions can be composed together to produce sophisticated applications that would otherwise require brittle orchestration logic.

[Functions flow diagram]

In this diagram, a prompt is invoked with an input and calls two different tools and scorers to ultimately produce a streaming output. Out of the box, you also get automatic tracing, including the tool calls and scores.

Any function can be used as a tool: the model can call it, and its output is added to the chat history. For example, a RAG agent can be defined with just two components:

  • A vector search tool, toolRAG, implemented in TypeScript or Python, which embeds a query, searches for relevant documents, and returns them
import { OpenAI } from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

// Clients for embedding and vector search. The Pinecone index name here is
// illustrative; replace it with your own index.
const openai = new OpenAI();
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! }).index("docs");

// Handler for the toolRAG tool: embed the query, search the vector index,
// and return the matching documents.
const handler = async ({ query, top_k }: { query: string; top_k: number }) => {
  // Embed the query with OpenAI
  const embedding = await openai.embeddings
    .create({
      input: query,
      model: "text-embedding-3-small",
    })
    .then((res) => res.data[0].embedding);

  // Find the top_k closest documents in the index
  const queryResponse = await pc.query({
    vector: embedding,
    topK: top_k,
    includeMetadata: true,
  });

  // Return the matched documents' titles and contents
  return queryResponse.matches.map((match) => ({
    title: match.metadata?.title,
    content: match.metadata?.content,
  }));
};
  • A system prompt containing instructions for how to retrieve content and synthesize answers using the tool
import * as braintrust from "braintrust";
 
const project = braintrust.projects.create({ name: "Doc Search" });
 
project.prompts.create({
  name: "Doc Search",
  slug: "document-search",
  description:
    "Search through the Braintrust documentation to answer the user's question",
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content:
        "You are a helpful assistant that can " +
        "answer questions about the Braintrust documentation.",
    },
    {
      role: "user",
      content: "{{{question}}}",
    },
  ],
  // toolRAG is the vector search tool defined above
  tools: [toolRAG],
});
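
Once the tool and prompt are pushed, the prompt can be invoked like any other function. The sketch below assumes the invoke helper exported by the braintrust package, along with the project name and slug defined above; check the SDK reference for the exact options.

import { invoke } from "braintrust";

// Invoke the document-search prompt defined above. Braintrust runs the
// prompt remotely, calling toolRAG as needed, and returns the final answer.
const answer = await invoke({
  projectName: "Doc Search",
  slug: "document-search",
  input: { question: "How do I push a prompt with the SDK?" },
});

console.log(answer);

The same call also supports streaming the response instead of waiting for the complete answer.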

To dig deeper into this example, check out the cookbook for Using functions to build a RAG agent.

Syncing functions via the SDK

You can sync functions between the Braintrust UI and your local codebase using the Braintrust SDK. Currently, this works for any prompts and tools written in TypeScript.

You can push tools and prompts written in Python to Braintrust using braintrust push, but pulling from Braintrust is not yet available.

To push a change from your codebase to the UI, run npx braintrust push <filename> from the command line. You can push one or more files or directories to Braintrust. If you specify a directory, all .ts files under that directory are pushed.

To pull a change from the UI to your codebase, run npx braintrust pull. For example, you can use the pull command to:

  • Download functions into public repositories so others can use them
  • Pin your production environment to a specific prompt version without fetching it from Braintrust on the request path
  • Review changes to functions in pull requests

Code bundling

Braintrust bundles your code together with any libraries and dependencies for serverless execution.

Braintrust uses esbuild to bundle your code. Bundling works by creating a single JavaScript file that contains all the necessary code, reducing the risk of version mismatches and dependency errors when deploying functions.

Since esbuild statically analyzes your code, it cannot handle dynamic imports or runtime code modifications.
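
For example, an import that esbuild can see at build time bundles cleanly, while an import whose module path is only computed at runtime cannot be included in the single output file. The snippet below is an illustrative sketch (the plugin path is hypothetical), not Braintrust-specific behavior.

// Bundles fine: esbuild sees this dependency at build time and inlines it.
import { OpenAI } from "openai";

const openai = new OpenAI();

// Does not bundle: the module path is computed at runtime, so esbuild cannot
// know which file to include in the output. (The "./loaders/..." path is
// purely illustrative.)
async function loadPlugin(name: string) {
  return await import(`./loaders/${name}`);
}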

Once code is bundled and uploaded to the Braintrust UI, you cannot edit it directly in the UI. Any changes must be made locally in your codebase and pushed via the SDK.

Runtimes

There are three runtimes available for functions:

  • TypeScript (Node.js v18, v20)
  • Python (Python 3.11)
  • Calling model providers with a prompt via the AI proxy

Default Python packages

The following Python packages are available by default in the Braintrust code editor:

  • braintrust (latest)
  • autoevals (latest)
  • requests 2.32.2
  • openai 1.40.8

When you upload code to create a Python function, Braintrust attempts to use the versions of the above packages (as well as pydantic) that are installed in your local environment.