Tools
Tool functions in Braintrust allow you to define general-purpose code that can be invoked by LLMs to add complex logic or external operations to your workflows. Tools are reusable and composable, making it easy to iterate on assistant-style agents and more advanced applications. You can create tools in TypeScript or Python and deploy them across the UI and API via prompts.
Creating a tool
Currently, you must define tools via code and push them to Braintrust with `braintrust push`. To define a tool, use `project.tools.create` and pick a name and a unique slug.
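A minimal Python sketch might look like the following (the `calculator` project and `add` handler are illustrative, and the registration call follows the `project.tools.create` shape described above; exact parameter names may vary by SDK version):

```python
def add(a: float, b: float) -> float:
    """Tool handler: a plain function that the model can invoke."""
    return a + b

def register() -> None:
    # Requires the braintrust SDK (`pip install braintrust`). In a real file
    # these calls sit at module top level so `braintrust push` picks them up;
    # they are wrapped in a function here only to keep the sketch importable.
    import braintrust

    project = braintrust.projects.create(name="calculator")
    project.tools.create(
        handler=add,
        name="Add",
        slug="add",  # the unique slug used to reference this tool
        description="Add two numbers together",
    )
```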
Pushing to Braintrust
Once you define a tool, you can push it to Braintrust with `braintrust push`.
Dependencies
Braintrust will take care of bundling the dependencies your tool needs.
In TypeScript, we use esbuild to bundle your code and its dependencies together. This works for most dependencies, but it does not support native (compiled) libraries like SQLite.
If you have trouble bundling your dependencies, let us know by filing an issue.
Testing it out
If you visit the project in the UI, you'll see the tool listed on the Tools page in the Library.
Using tools
Once you define a tool in Braintrust, you can access it through the UI and API. However, the real advantage lies in calling a tool from an LLM. Most models support tool calling, which allows them to select a tool from a list of available options. Normally, it's up to you to execute the tool, retrieve its results, and re-run the model with the updated context.
Braintrust simplifies this process dramatically by:
- Automatically passing the tool's definition to the model
- Running the tool securely in a sandbox environment when called
- Re-running the model with the tool's output
- Streaming the whole output along with intermediate progress to the client
Let's walk through an example.
Defining a GitHub tool
Let's define a tool that looks up information about the most recent commit in a GitHub repository.
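One possible shape for this tool, sketched in Python with only the standard library (the handler name and selected fields are illustrative, and the registration step is omitted; the endpoint is GitHub's public REST API):

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def commit_url(owner: str, repo: str) -> str:
    """Endpoint for a repository's single most recent commit."""
    return f"{GITHUB_API}/repos/{owner}/{repo}/commits?per_page=1"

def latest_commit(owner: str, repo: str) -> dict:
    """Tool handler: look up the most recent commit in a GitHub repository."""
    req = urllib.request.Request(
        commit_url(owner, repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        commit = json.load(resp)[0]  # newest commit comes first
    return {
        "sha": commit["sha"],
        "author": commit["commit"]["author"]["name"],
        "message": commit["commit"]["message"],
    }
```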
If you save this file locally as github.ts or github.py, you can run `braintrust push github.ts` (or `braintrust push github.py`) to push the function to Braintrust. Once the command completes, you should see the function listed in the Library's Tools tab.
To use a tool, select it in the Tools dropdown in your Prompt window. Braintrust will automatically:
- Include it in the list of available tools to the model
- Invoke the tool if the model calls it, and append the result to the message history
- Call the model again with the tool's result as context
- Continue for up to five iterations (by default), or until the model produces a non-tool result
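Conceptually, the loop Braintrust runs on your behalf can be sketched in plain Python (the message and tool-call shapes below are simplified stand-ins, and `fake_model` is a stub in place of a real LLM):

```python
MAX_ITERATIONS = 5  # default cap on tool-calling rounds

def run_agent(model, tools, messages):
    """Repeatedly call the model, executing any tool it requests,
    until it produces a plain (non-tool) response or the cap is hit."""
    for _ in range(MAX_ITERATIONS):
        reply = model(messages, tools)  # tool definitions are passed along
        if "tool_call" not in reply:
            return reply["content"]     # non-tool result: we're done
        call = reply["tool_call"]
        result = tools[call["name"]](**call["arguments"])  # run the tool
        # Append the tool call and its result to the message history.
        messages = messages + [reply, {"role": "tool", "content": result}]
    return None  # gave up after MAX_ITERATIONS rounds

# Stub model: asks for the `add` tool once, then answers with its result.
def fake_model(messages, tools):
    tool_results = [m for m in messages if m.get("role") == "tool"]
    if not tool_results:
        return {"role": "assistant",
                "tool_call": {"name": "add", "arguments": {"a": 2, "b": 3}}}
    return {"role": "assistant",
            "content": f"The sum is {tool_results[-1]['content']}"}
```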
Connecting tools to prompts in code
You can also attach a tool to a prompt defined in code.
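For example, a hedged Python sketch (the SDK call shapes and parameter names here are assumptions that may differ by SDK version, and the weather names are illustrative):

```python
def get_weather(city: str) -> str:
    """Illustrative tool handler."""
    return f"Sunny in {city}"

def register() -> None:
    # Sketch only: requires the braintrust SDK. In a pushed file these
    # calls run at module top level; they are wrapped in a function here
    # only to keep the sketch importable.
    import braintrust

    project = braintrust.projects.create(name="weather-bot")
    weather_tool = project.tools.create(
        handler=get_weather,
        name="Get weather",
        slug="get-weather",
    )
    project.prompts.create(
        name="Weather assistant",
        slug="weather-assistant",
        messages=[{"role": "user", "content": "What's the weather in {{city}}?"}],
        tools=[weather_tool],  # attach the tool to the prompt
    )
```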
If you run `braintrust push` on this file, Braintrust will push both the tool and the prompt.
Structured outputs
Another use case for tool calling is to coerce a model into producing structured outputs that match a given JSON schema. You can do this without creating a tool function, and instead use the Raw tab in the Tools dropdown.
Enter an array of tool definitions following the OpenAI tool format.
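For example, a one-element array in that format might look like this (the `extract_user` tool and its fields are illustrative):

```json
[
  {
    "type": "function",
    "function": {
      "name": "extract_user",
      "description": "Extract structured user fields from the input",
      "parameters": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "age": { "type": "integer" }
        },
        "required": ["name", "age"]
      }
    }
  }
]
```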
By default, if a tool is called, Braintrust will return the arguments of the first tool call as a JSON object. If you use the `invoke` API, you'll receive this JSON object as the result.
If you specify `parallel` as the mode, you'll instead receive an array of all tool calls, including both function names and arguments.
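The two result shapes can be sketched in plain Python over a mock response (the tool-call structure follows OpenAI's format; `extract_result` and the sample data are illustrative):

```python
import json

def extract_result(tool_calls, mode="default"):
    """Default mode: arguments of the first tool call, as a dict.
    Parallel mode: every call's function name and arguments."""
    if mode == "parallel":
        return [{"name": c["function"]["name"],
                 "arguments": json.loads(c["function"]["arguments"])}
                for c in tool_calls]
    return json.loads(tool_calls[0]["function"]["arguments"])

# Mock tool calls, as a model might return them.
calls = [
    {"function": {"name": "extract_user",
                  "arguments": '{"name": "Ada", "age": 36}'}},
    {"function": {"name": "extract_user",
                  "arguments": '{"name": "Alan", "age": 41}'}},
]
```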