Bedrock, Vertex AI, and universal structured outputs support
Braintrust now fully supports Amazon Bedrock and Google Vertex AI, giving developers access to more models through a unified interface. Structured output handling has also been improved, ensuring reliable JSON responses across most providers, including those that don't natively support structured outputs.
Expanded Bedrock and Vertex AI support
You can now use Amazon Bedrock and Vertex AI models in both the playground and AI proxy. This includes support for system prompts, tool calls, and multimodal inputs. Braintrust handles these integrations, so models from these platforms work without extra configuration.
| Platform | Models supported |
| --- | --- |
| Amazon Bedrock | Nova, Titan, Anthropic, Llama, Mistral, and more |
| Vertex AI | Gemini, Llama, Mistral, and more |
For Vertex AI, we also support authenticating as a principal or as a service account, using either an OAuth 2.0 access token or a service account key.
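For example, here's a minimal sketch of calling a Bedrock-hosted model through the AI proxy with the OpenAI Python SDK. The model ID below is illustrative, so substitute whichever Bedrock or Vertex AI model you've configured in your account.

```python
import os

from openai import OpenAI

# The AI proxy is OpenAI-compatible, so the standard SDK works unchanged.
# The base URL points at the hosted proxy; the model ID is an example
# Bedrock slug, not necessarily the one enabled for your organization.
client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

response = client.chat.completions.create(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",  # example Bedrock model ID
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the benefits of using a unified model proxy."},
    ],
)
print(response.choices[0].message.content)
```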
Consistent structured outputs
Structured output handling now extends across Anthropic, Bedrock, and any OpenAI-flavored model that supports tool calls (such as Llama on Fireworks), ensuring consistent JSON responses. Models that don't natively support structured outputs now return predictable, machine-readable responses.
This eliminates the need for additional prompt engineering or post-processing and makes it simple to test different models for your use case. It also improves the fidelity and recall of your model invocations by avoiding wasted calls when a generated response fails to match your classification schema.
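As a sketch of what this looks like in practice, the request below asks an Anthropic model for a JSON response conforming to a schema, using the standard OpenAI-style `response_format` parameter. The model slug and schema are illustrative; behind the scenes, the proxy handles the translation for providers without native structured output support.

```python
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

response = client.chat.completions.create(
    model="claude-3-5-sonnet-latest",  # example Anthropic model slug
    messages=[
        {"role": "user", "content": "Is this message spam? 'You won a free cruise!'"},
    ],
    # Standard OpenAI-style structured output request; the proxy maps this
    # onto tool calls for providers that lack native JSON schema support.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "spam_classification",
            "schema": {
                "type": "object",
                "properties": {
                    "is_spam": {"type": "boolean"},
                    "reason": {"type": "string"},
                },
                "required": ["is_spam", "reason"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)

result = json.loads(response.choices[0].message.content)
print(result["is_spam"], "-", result["reason"])
```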
Simplified model selection in the playground
The playground model dropdown now groups models by provider and family. This makes it easier to find, compare, and test different models.
Additional updates
We've also shipped a few more updates to make the developer experience across AI providers more reliable:
- We now keep custom models from overriding default configurations, so provider settings remain stable.
- Streaming JSON responses from non-OpenAI providers now work correctly, so models return data as expected (see the sketch after this list).
- You can now add templated custom headers for custom AI providers.
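Here's a rough sketch of streaming a JSON response from a non-OpenAI provider through the proxy. The model slug is an assumption, and the prompt asks for JSON explicitly since `json_object` mode expects that.

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

# Stream JSON from a non-OpenAI model; the slug below is an example Vertex AI model.
stream = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=[
        {
            "role": "user",
            "content": "Return a JSON object with keys 'city' and 'country' for Paris.",
        },
    ],
    response_format={"type": "json_object"},
    stream=True,
)

# Print each streamed chunk as it arrives.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```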
We want Braintrust to be the best place to build with leading AI models and providers. If you have any feedback, please let us know. To try out structured outputs with Anthropic models, check out our new cookbook on spam classification!