Why deploy with Braintrust
Deploying through Braintrust gives you:
- Unified API: Call any AI provider (OpenAI, Anthropic, Google, AWS, etc.) through a single interface. Use any supported provider's SDK to call any provider's models.
- Automatic observability: Every production request is logged and traceable.
- Caching: Reduce costs and latency with built-in response caching.
- Version control: Deploy prompts and functions with full version history.
- Environment management: Separate dev, staging, and production configurations.
- Fallbacks: Automatically retry failed requests with backup providers.
Deploy prompts and functions
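Conceptually, a deployed prompt is a versioned template plus model settings that your application fetches at call time. The stdlib-only sketch below illustrates that shape; the class, slug, version, and variable names are all illustrative stand-ins, not the Braintrust SDK itself:

```python
from dataclasses import dataclass, field
from string import Template

@dataclass
class DeployedPrompt:
    """Illustrative stand-in for a prompt fetched from Braintrust at call time."""
    slug: str
    version: str
    model: str
    template: str
    defaults: dict = field(default_factory=dict)

    def build(self, **variables) -> dict:
        """Fill template variables and return chat-completion-style kwargs."""
        content = Template(self.template).substitute({**self.defaults, **variables})
        return {
            "model": self.model,
            "messages": [{"role": "user", "content": content}],
        }

# Because the app fetches the prompt on each call, an edit in the UI that
# bumps the version changes production behavior without a redeploy.
prompt = DeployedPrompt(
    slug="summarize",
    version="v3",
    model="gpt-4o",
    template="Summarize in ${style} style:\n${text}",
    defaults={"style": "concise"},
)
args = prompt.build(text="Braintrust deployment docs.")
```

In the real workflow, the fetch-then-build step is handled by the Braintrust SDK against your project's stored prompts rather than a local dataclass.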
Prompts created in Braintrust can be called from your application code. Changes to prompts in the UI immediately affect production behavior, enabling rapid iteration without redeployment.
Use the Braintrust gateway
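As a stdlib-only sketch of routing an OpenAI-style chat request through the gateway (the endpoint path is an assumption to verify against your Braintrust settings, and in practice you would usually just point a provider SDK's base URL at the gateway instead of building requests by hand):

```python
import json
import os
import urllib.request

# Assumed gateway endpoint; check your Braintrust settings before relying on it.
GATEWAY_URL = "https://api.braintrust.dev/v1/proxy/chat/completions"

def build_gateway_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request routed through the Braintrust gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # The same request shape reaches any provider's model through one URL.
    req = build_gateway_request(
        "claude-3-5-sonnet-latest",  # an Anthropic model via an OpenAI-style call
        "Say hello in one sentence.",
        os.environ["BRAINTRUST_API_KEY"],
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Swapping the model string is all it takes to move a request between providers; caching and logging happen at the gateway, not in your code.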
The Braintrust gateway provides a unified API to access LLM models from OpenAI, Anthropic, Google, AWS, Mistral, and third-party providers. Point your SDKs to the gateway URL and immediately get automatic caching, observability, and multi-provider support.
Manage environments
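One common pattern is resolving a per-environment configuration once at startup. This is an illustrative sketch, not a Braintrust API: the environment-variable names, config fields, and prompt slugs are all assumptions:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    """Illustrative per-environment settings; the field names are assumptions."""
    api_key_var: str        # which env var holds the Braintrust API key
    prompt_slug: str        # which deployed prompt this environment uses
    log_sample_rate: float  # fraction of requests to log

# One configuration per environment, selected once at startup.
CONFIGS = {
    "development": EnvConfig("BRAINTRUST_DEV_KEY", "summarize-dev", 1.0),
    "staging": EnvConfig("BRAINTRUST_STAGING_KEY", "summarize", 1.0),
    "production": EnvConfig("BRAINTRUST_PROD_KEY", "summarize", 0.1),
}

def active_config(env=None):
    """Resolve the current environment (APP_ENV is an assumed variable name)."""
    name = env or os.environ.get("APP_ENV", "development")
    if name not in CONFIGS:
        raise ValueError(f"unknown environment: {name!r}")
    return CONFIGS[name]
```

Keeping the mapping explicit makes it hard for a staging key or a dev-only prompt to leak into production.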
Environments separate your development, staging, and production configurations. Set different prompts, functions, or API keys per environment.
Monitor deployments
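The kind of alert rule you would configure can be sketched as a simple check over recent request logs. The record shape and thresholds below are illustrative assumptions, not the Braintrust log schema:

```python
from dataclasses import dataclass

@dataclass
class RequestLog:
    """Minimal illustrative log record; real production logs carry much more."""
    latency_ms: float
    error: bool

def should_alert(logs, max_error_rate=0.05, max_p95_latency_ms=2000.0):
    """Flag when the error rate or p95 latency breaches a threshold,
    mirroring the spike/latency alerts described above."""
    if not logs:
        return False
    error_rate = sum(log.error for log in logs) / len(logs)
    latencies = sorted(log.latency_ms for log in logs)
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return error_rate > max_error_rate or p95 > max_p95_latency_ms
```

In practice these thresholds live in your monitoring configuration rather than application code; the sketch just makes the two trigger conditions concrete.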
Every production request flows through the same observability system you used during development. View logs, filter by errors, score requests online, and create dashboards to track performance. Set up alerts to notify you when error rates spike or latency exceeds thresholds.
Next steps
- Use the Braintrust gateway to route requests to any AI provider with production-grade reliability
- Deploy prompts to ship and version prompts in production
- Deploy functions to deploy tools, scorers, and workflows
- Monitor deployments to track production performance and errors
- Manage environments to separate dev, staging, and production