Prompts are the instructions that guide model behavior. Braintrust lets you create prompts, test them in playgrounds, use them in code, and track their performance over time.

Create prompts

Create prompts directly in the Braintrust UI:
  1. Go to Prompts and click + Prompt
  2. Configure the prompt:
    • Name: Descriptive display name
    • Slug: Unique identifier for code references (remains constant across updates)
    • Model and parameters: Model selection, temperature, max tokens, etc.
    • Messages: System, user, assistant, or tool messages with text or images
    • Templating syntax: Mustache or Nunjucks for variable substitution
    • Response format: Freeform text, JSON object, or structured JSON schema
    • Description: Optional context about the prompt’s purpose
    • Metadata: Optional additional information
  3. Click Save as custom prompt

Use templating

Use templates to inject variables into prompts at runtime. Braintrust supports Mustache and Nunjucks templating:
  • Mustache (default): Simple variable substitution and basic logic
  • Nunjucks: Advanced templating with loops, conditionals, and filters

Mustache

Mustache is the default templating language.
Use {{variable}} to insert values:
Hello {{name}}! Your account balance is ${{balance}}.
Access nested object properties with dot notation:
User: {{user.name}}
Email: {{user.profile.email}}
City: {{user.profile.address.city}}
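To build intuition for what this substitution does, here is a toy sketch of `{{variable}}` replacement with dot-notation lookup. This is not the real Mustache engine (in practice, use a library such as the `mustache` npm package); it only handles simple variable paths:

```typescript
// Toy illustration of Mustache-style variable substitution with dot
// notation. Not the real Mustache library; sections, comments, and
// delimiter changes are not handled here.
function renderVars(template: string, data: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path: string) => {
    // Walk the dot path (e.g. "user.profile.email") into the data object.
    const value = path
      .split(".")
      .reduce<unknown>((obj, key) => (obj as any)?.[key], data);
    // Missing values render as empty strings, matching Mustache's behavior.
    return value === undefined || value === null ? "" : String(value);
  });
}

const out = renderVars("User: {{user.name}}, City: {{user.profile.address.city}}", {
  user: { name: "Alice", profile: { address: { city: "Oslo" } } },
});
// out === "User: Alice, City: Oslo"
```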
Use sections to iterate over arrays or conditionally show content:
{{#items}}
- {{name}}: ${{price}}
{{/items}}

{{#user}}
Welcome back, {{name}}!
{{/user}}
Use ^ to show content when a value is falsy or empty:
{{^items}}
No items found.
{{/items}}
Use {{! comment }} for comments that won’t appear in output:
{{! This is a comment explaining the template }}
Hello {{name}}!
If you want to preserve double curly brackets {{ and }} as plain text when using Mustache, change the delimiter tags:
{{=<% %>=}}
Return the number in the following format: {{ number }}

<% input.formula %>
Mustache supports strict mode, which throws an error when required template variables are missing:
const result = prompt.build(
  { name: "Alice" },
  {
    strict: true, // Throws if any required variables are missing
  },
);
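Conceptually, strict mode is a check that every referenced variable resolves before rendering. A toy sketch of that check (an illustration, not Braintrust's actual `prompt.build` implementation):

```typescript
// Toy sketch of a strict template check: throw if any {{variable}} in the
// template has no corresponding value. Illustrative only; not Braintrust's
// prompt.build implementation.
function renderStrict(template: string, data: Record<string, unknown>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name: string) => {
    if (!(name in data)) {
      throw new Error(`Missing template variable: ${name}`);
    }
    return String(data[name]);
  });
}

renderStrict("Hello {{name}}!", { name: "Alice" }); // "Hello Alice!"
try {
  renderStrict("Hello {{name}}!", {});
} catch (e) {
  console.log((e as Error).message); // "Missing template variable: name"
}
```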

Nunjucks

For more complex templating needs, use Nunjucks, which implements Jinja2 syntax in JavaScript.
Process arrays and iterate over data:
{% for item in items %}
- {{ item.name }}: {{ item.description }}
{% endfor %}
Loop variables provide useful metadata:
{% for product in products %}
{{ loop.index }}. {{ product.name }}{% if not loop.last %}, {% endif %}
{% endfor %}
Available loop variables: loop.index (1-indexed), loop.index0 (0-indexed), loop.first, loop.last, loop.length
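Whitespace aside, the products loop above is equivalent to the following plain TypeScript, which makes the roles of `loop.index` and `loop.last` concrete:

```typescript
// Plain-TypeScript equivalent of the Nunjucks products loop above:
// loop.index is 1-based, and loop.last suppresses the trailing separator.
function renderProducts(products: { name: string }[]): string {
  return products
    .map((p, i) => {
      const isLast = i === products.length - 1; // loop.last
      return `${i + 1}. ${p.name}${isLast ? "" : ", "}`; // loop.index = i + 1
    })
    .join("");
}

renderProducts([{ name: "Widget" }, { name: "Gadget" }, { name: "Gizmo" }]);
// → "1. Widget, 2. Gadget, 3. Gizmo"
```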
Add logic to your prompts:
{% if user.age >= 18 %}
You are eligible to vote.
{% elif user.age >= 16 %}
You can get a driver's license.
{% else %}
You are a minor.
{% endif %}
Combine conditionals with loops:
{% for product in products %}
  {% if product.inStock %}
Available: {{ product.name }} - ${{ product.price }}
  {% endif %}
{% endfor %}
Transform data with built-in filters:
Hello {{ name | upper }}!
Your email is {{ email | lower }}.
Items: {{ items | join(", ") }}
Common filters:
  • upper, lower: Change case
  • title, capitalize: Capitalize text
  • join(separator): Join array elements
  • length: Get array or string length
  • default(value): Provide default value
  • replace(old, new): Replace text
Concatenate strings with ~:
{{ greeting ~ " " ~ name }}!
Full name: {{ firstName ~ " " ~ lastName }}
Access nested properties and array elements:
{{ user.profile.address.city }}
{{ items[0].name }}
{{ data.results[2].score }}
For complete Mustache syntax, see the Mustache documentation. For Nunjucks syntax and features, see the Nunjucks templating documentation.

Add tools

Tools extend your prompt’s capabilities by allowing the LLM to call functions during execution:
  • Query external APIs or databases
  • Perform calculations or data transformations
  • Retrieve information from vector stores or search engines
  • Execute custom business logic
To add tools to a prompt in the UI:
  1. When creating or editing a prompt, click Tools.
  2. Select tool functions from your library or add raw tools as JSON.
  3. Click Save tools.
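Raw tools typically follow the OpenAI function-calling schema. A hypothetical example (the tool name and parameters are illustrative, not part of any built-in library):

```json
{
  "type": "function",
  "function": {
    "name": "get_order_status",
    "description": "Look up the status of an order by its ID",
    "parameters": {
      "type": "object",
      "properties": {
        "order_id": { "type": "string", "description": "The order ID to look up" }
      },
      "required": ["order_id"]
    }
  }
}
```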

Add MCP servers

Use public MCP (Model Context Protocol) servers to give your prompts access to external tools and data:
  • Evaluate complex tool calling workflows
  • Experiment with external APIs and services
  • Tune public MCP servers
MCP servers must be public and support OAuth authentication.
MCP servers are a UI-only feature. They work in playgrounds and experiments but not when invoked via SDK.

Add to a prompt

To add an MCP server to a prompt:
  1. When creating or editing a prompt, click MCP.
  2. Enable any available project-wide servers.
  3. To add a prompt-specific MCP server, click + MCP server:
    • Provide a name, the public URL of the server, and an optional description.
    • Click Add server.
    • Authenticate the MCP server in your browser.
For each MCP server, you’ll see a list of available tools. Tools are enabled by default, but you can disable individual tools or click Disable all. After testing a prompt-specific MCP server, you can promote it to a project-wide server by selecting Save to project MCP servers from the server’s menu.

Add to a project

Project-wide MCP servers are accessible across all projects in your organization:
  1. Go to Configuration > MCP.
  2. Click + MCP server and provide a name, the public URL of the server, and an optional description.
  3. Click Authenticate to authenticate the MCP server in your browser.
  4. Click Save.

Use in code

Reference prompts by slug to use them in your application:
import { initLogger } from "braintrust";

const logger = initLogger({ projectName: "My Project" });

const result = await logger.invoke("summarizer", {
  input: { text: "Long article text here..." },
});

console.log(result.output);
Using prompts this way:
  • Automatically logs inputs and outputs
  • Tracks which prompt version was used
  • Enables A/B testing different prompt versions
  • Lets you update prompts without code changes

Load a prompt

The loadPrompt()/load_prompt() function loads a prompt with caching support:
import { OpenAI } from "openai";
import { initLogger, loadPrompt, wrapOpenAI } from "braintrust";

const logger = initLogger({ projectName: "My Project" });
const client = wrapOpenAI(new OpenAI());

async function runPrompt() {
  const prompt = await loadPrompt({
    projectName: "My Project",
    slug: "summarizer",
    defaults: {
      model: "gpt-4o",
      temperature: 0.5,
    },
  });

  return client.chat.completions.create(
    prompt.build({ text: "Article to summarize..." }),
  );
}

Pin a specific version

Reference a specific version when loading prompts:
const prompt = await loadPrompt({
  projectName: "My Project",
  slug: "summarizer",
  version: "5878bd218351fb8e",
});

Assign to an environment

To assign a prompt to an environment:
  1. Go to Prompts.
  2. Open the prompt.
  3. Click the icon.
  4. Select an environment.
Once assigned, load prompts for that environment in your code:
import { loadPrompt } from "braintrust";

// Load from specific environment
const prompt = await loadPrompt({
  projectName: "My Project",
  slug: "my-prompt",
  environment: "production",
});

// Alternatively, pin a version conditionally
const pinnedPrompt = await loadPrompt({
  projectName: "My Project",
  slug: "my-prompt",
  version: process.env.NODE_ENV === "production" ? "5878bd218351fb8e" : undefined,
});

Stream results

Stream prompt responses for real-time output:
import { invoke } from "braintrust";

async function main() {
  const result = await invoke({
    projectName: "My Project",
    slug: "summarizer",
    input: { text: "Article text..." },
    stream: true,
  });

  for await (const chunk of result) {
    console.log(chunk);
    // { type: "text_delta", data: "The summary "}
    // { type: "text_delta", data: "is..."}
  }
}
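Building on the loop above, `text_delta` chunks can be concatenated to reconstruct the full response text. A sketch of that chunk-handling logic (the chunk shape mirrors the example output above; other chunk types are ignored here):

```typescript
// Accumulate streamed text_delta chunks into a single string. The chunk
// shape mirrors the streaming example above; non-text chunk types are
// skipped in this sketch.
type Chunk = { type: string; data: string };

function accumulateText(chunks: Chunk[]): string {
  let text = "";
  for (const chunk of chunks) {
    if (chunk.type === "text_delta") {
      text += chunk.data;
    }
  }
  return text;
}

accumulateText([
  { type: "text_delta", data: "The summary " },
  { type: "text_delta", data: "is..." },
]);
// → "The summary is..."
```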

Add extra messages

Append additional messages to prompts for multi-turn conversations:
import { invoke } from "braintrust";

async function reflection(question: string) {
  const result = await invoke({
    projectName: "My Project",
    slug: "assistant",
    input: { question },
  });

  const reflectionResult = await invoke({
    projectName: "My Project",
    slug: "assistant",
    input: { question },
    messages: [
      { role: "assistant", content: result },
      { role: "user", content: "Are you sure about that?" },
    ],
  });

  return reflectionResult;
}

Test prompts

Playgrounds provide a no-code environment for rapid prompt iteration:
  1. Create or select a prompt
  2. Add a dataset or enter test inputs
  3. Run the prompt and view results
  4. Adjust parameters or messages
  5. Compare different versions side-by-side
See Use playgrounds for details. You can also test prompts by chatting directly with them from the prompt details page. Each chat interaction is automatically logged as a trace in your project’s logs. To navigate back to the prompt from these traces, see Navigate to trace origins.

Optimize with Loop

Use Loop to generate and improve prompts. Example queries:
  • “Generate a prompt for a chatbot that can answer questions about the product”
  • “Add few-shot examples based on project logs”
  • “Optimize this prompt to be friendlier and more engaging”
  • “Improve this prompt based on the experiment results”
Loop analyzes your data and suggests improvements automatically.

Version prompts

Every prompt change creates a new version automatically. This lets you:
  • Compare performance across versions
  • Roll back to previous versions
  • Pin experiments to specific versions
  • Track which version is used in production
View version history in the prompt editor and select any version to restore or compare.
You can manage different versions of prompts across your development lifecycle by assigning them to environments. See Assign to an environment above or Manage environments for details.

Best practices

  • Start simple: Begin with clear, direct instructions. Add complexity only when needed.
  • Use few-shot examples: Include 2-3 examples in your prompt to guide model behavior.
  • Be specific: Define exactly what you want, including format, tone, and constraints.
  • Test with real data: Use production logs to build test datasets that reflect actual usage.
  • Iterate systematically: Change one thing at a time and measure impact with experiments.
  • Version everything: Save prompt changes so you can track what works and roll back if needed.

Create custom table views

The Prompts page supports custom table views to save your preferred filters, column order, and display settings. To create or update a custom table view:
  1. Apply the filters and display settings you want.
  2. Open the menu and select Save view… or Save view as….
Custom table views are visible to all project members. Creating or editing a table view requires the Update project permission.

Set default table views

You can set default views at two levels:
  • Organization default: Visible to all members when they open the page. This applies per page — for example, you can set separate organization defaults for Logs, Experiments, and Review. To set an organization default, you need the Manage settings organization permission (included by default in the Owner role). See Access control for details.
  • Personal default: Overrides the organization default for you only. Personal defaults are stored in your browser, so they do not carry over across devices or browsers.
To set a default view:
  1. Switch to the view you want by selecting it from the menu.
  2. Open the menu again and hover over the currently selected view to reveal its submenu.
  3. Choose Set as personal default view or Set as organization default view.
To clear a default view:
  1. Open the menu and hover over the currently selected view to reveal its submenu.
  2. Choose Clear personal default view or Clear organization default view.
When a user opens a page, Braintrust loads the first match in this order: personal default, organization default, then the standard “All …” view (e.g., “All logs view”).
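That resolution order amounts to a first-match lookup, sketched below (illustrative logic only, not Braintrust's implementation):

```typescript
// First-match resolution of the default view: a personal default beats the
// organization default, which beats the standard "All ..." view.
function resolveDefaultView(
  personal: string | null,
  organization: string | null,
  standard: string,
): string {
  return personal ?? organization ?? standard;
}

resolveDefaultView(null, "Errors only", "All logs view"); // → "Errors only"
resolveDefaultView(null, null, "All logs view"); // → "All logs view"
```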

Next steps