Human review
Although Braintrust helps you automatically evaluate AI software, human review is a critical part of the process. Braintrust seamlessly integrates human feedback from end users, subject matter experts, and product teams in one place. You can use human review to evaluate and compare experiments, assess the efficacy of your automated scoring methods, and curate log events to use in your evals.
Configuring human review
To set up human review, define the scores you want to collect in your project's Configuration tab.
Select Add human review score to configure a new score. A score can be one of:
- Continuous number value between 0% and 100%, with a slider input control.
- Categorical value where you define the possible options and their scores. Each option is also assigned a unique percentage value between 0% and 100%, stored as 0 to 1 (illustrated in the sketch below).
- Free-form text where you can write a string value to the metadata field at a specified path.
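To make the percentage-to-number mapping concrete, here is a purely illustrative sketch. The option names and values are hypothetical; real options are configured in the UI, not in code.

```python
# Illustrative only: how a categorical score's options might map to stored values.
quality_options = {
    "Poor": 0.0,   # shown as 0% in the UI
    "Okay": 0.5,   # shown as 50%
    "Great": 1.0,  # shown as 100%
}

# Selecting "Okay" would record a score of 0.5 for that span.
```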
Created human review scores will appear in the Human review section in every experiment and log trace in the project. Categorical scores configured to "write to expected" and free-form scores will also appear on dataset rows.
Writing to expected fields
You may choose to write categorical scores to the expected field of a span instead of a score.
To enable this, check the Write to expected field instead of score option. There is also an option to Allow multiple choice when writing to the expected field.
A numeric score will not be assigned to the categorical options when writing to the expected field. If there is an existing object in the expected field, the categorical value will be appended to the object.
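For example, if a span already has an expected object, the selected categorical option is added alongside the existing keys rather than replacing them. The shape below is purely hypothetical; the key used for the appended value and the exact nesting may differ.

```python
# Hypothetical illustration of the append behavior; the key name ("tone") and
# structure are assumptions, not Braintrust's documented storage format.
expected_before = {"answer": "The capital of France is Paris."}

expected_after = {
    "answer": "The capital of France is Paris.",
    "tone": "formal",  # categorical option appended to the existing object
}
```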
In addition to categorical scores, you can always directly edit the structured output for the expected field of any span through the UI.
Reviewing logs and experiments
To manually review results from your logs or experiments, select a row to open the trace view. There, you can edit the human review scores you previously configured.
As you set scores, they will be automatically saved and reflected in the summary metrics. The process is the same whether you're reviewing logs or experiments.
Leaving comments
In addition to setting scores, you can also add comments to spans and update their expected values. These updates are tracked alongside score updates to form an audit trail of edits to a span.
If you leave a comment that you want to share with a teammate, you can copy a link that deep-links to the comment.
Focused review mode
If you or a subject matter expert is reviewing a large number of logs or experiments, you can use Review mode to enter a UI that's optimized specifically for review. To enter review mode, press the "r" key or select the expand icon next to the Human review header in a span.
In review mode, you can set scores, leave comments, and edit expected values. Review mode is optimized for keyboard navigation, so you can quickly move between scores and rows with keyboard shortcuts. You can also share a link to the review mode view with other team members, and they'll drop directly into review mode.
Reviewing data that matches specific criteria
To easily review a subset of your logs or experiments that match given criteria, you can filter using English or BTQL, then enter review mode.
In addition to filters, you can use tags to mark items for Triage, and then review them all at once.
You can also save any filters, sorts, or column configurations as views. Views give you a standardized place to see any current or future logs that match given criteria, for example, logs with a Factuality score less than 50%. Once you create your view, you can enter review mode right from there.
Because reviewing is a common task, you can enter review mode from any experiment or log view. You can also re-enter review mode from any view to audit past reviews or update scores.
Benefits over an annotation queue
- Designed for optimal productivity: The combination of views and human review mode simplifies the review process with intuitive filters, reusable configurations, and keyboard navigation, enabling faster, more efficient log evaluation and feedback.
- Dynamic and flexible views: Views dynamically update with new logs matching saved criteria, eliminating the need to set up and maintain complex automation rules.
- Easy collaboration: Sharing review mode links allows for team collaboration without requiring intricate permissions or setup overhead.
Filtering using feedback
You can filter on log events with specific scores by typing a filter like "scores.Preference > 0.75" into the search bar, then add the matching rows to a dataset for further investigation. This is a powerful way to use human feedback to improve your evals.
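Once you've identified the matching rows, you can also add them to a dataset with the SDK. The sketch below is a rough outline assuming the Braintrust Python SDK's init_dataset and insert methods; the project and dataset names, and the way you fetch the filtered rows, are placeholders.

```python
import braintrust

# Placeholder rows standing in for log events that matched the filter
# "scores.Preference > 0.75"; in practice you would export or fetch them.
matching_rows = [
    {"id": "span-123", "input": "What is your refund policy?", "output": "Refunds are..."},
]

# Project and dataset names here are hypothetical.
dataset = braintrust.init_dataset(project="My app", name="high-preference-logs")

for row in matching_rows:
    dataset.insert(
        input=row["input"],
        expected=row["output"],  # treat the well-rated output as the expected value
        metadata={"source_span": row["id"]},
    )

print(dataset.summarize())
```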
Capturing end-user feedback
The same set of updates (scores, comments, and expected values) can be captured from end users as well. See the Logging guide for more details.
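As a rough sketch of what that can look like with the Python SDK (the project name, score name, span id, and metadata below are placeholders; see the Logging guide for the authoritative API):

```python
import braintrust

# Hypothetical example: recording a thumbs-up from an end user against a logged span.
logger = braintrust.init_logger(project="My app")

logger.log_feedback(
    id="span-123",                   # id of the span the user is rating
    scores={"user_preference": 1},   # 1 = thumbs up, 0 = thumbs down
    comment="Exactly the answer I needed",
    metadata={"user_id": "u-42"},
    source="external",               # marks the feedback as coming from an end user
)
```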