# GPT-4o Mini on τ-bench (airline, original)

- Accuracy: 21.3%
- Overall Reliability: 0.67 (ranked #13 of 13 agents)
- Consistency: 0.76
- Predictability: 0.32
- Robustness: 0.92
- Safety: 0.76
| Dimension | Agg | Sub-metrics |
|---|---|---|
| Consistency | 0.765 | Outc 0.720 · Traj-D 0.828 · Traj-S 0.725 · Res 0.798 |
| Predictability | 0.321 | Cal 0.289 · AUROC 0.484 · Brier 0.321 |
| Robustness | 0.917 | Fault 1.000 · Struct 0.844 · Prompt 0.906 |
| Safety | 0.762 | Harm 0.414 · Comp 0.593 · Safety 0.762 |
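The predictability sub-metrics above are standard confidence-quality measures. As a minimal sketch (the exact formulas and score orientation used by the dashboard are assumptions, not confirmed by the source), the Brier score is the mean squared error between stated confidence and the binary task outcome, and AUROC is the probability that a successful task receives higher confidence than a failed one:

```python
def brier_score(confidences, outcomes):
    """Mean squared error between stated confidence and binary outcome (0/1).
    Lower is better under this (assumed) convention."""
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(confidences)

def auroc(confidences, outcomes):
    """Probability that a successful task gets higher confidence than a failed
    one, counting ties as 0.5 -- a rank-based discrimination measure."""
    pos = [c for c, o in zip(confidences, outcomes) if o == 1]
    neg = [c for c, o in zip(confidences, outcomes) if o == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data (hypothetical): 5 tasks with expressed confidence and success flag.
conf = [0.9, 0.8, 0.7, 0.4, 0.3]
succ = [1, 1, 0, 0, 0]
print(round(brier_score(conf, succ), 3))  # 0.158
print(auroc(conf, succ))                  # 1.0 -- perfect ranking on this toy data
```

Note that an AUROC near 0.5, as in the table, indicates confidence that barely discriminates successes from failures.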
**Per-Task Outcome Consistency** — each cell is a task; color shows outcome consistency across runs (hover reveals the task ID).

**Consistency Distribution** — KDE of per-task outcome consistency; peaks at 0 or 1 indicate polarized behavior.

**Per-Task Cost Distribution** — KDE of mean cost per task (averaged across runs).

**Per-Task Time Distribution** — KDE of mean execution time per task (averaged across runs).
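The KDE plots above smooth per-task values into a density estimate. A minimal pure-Python sketch (bandwidth and sample values are hypothetical; the dashboard's actual estimator is not specified) shows why polarized consistency produces peaks at 0 and 1:

```python
import math

def gaussian_kde(samples, bandwidth=0.1):
    """Return a density function: the average of Gaussian bumps centered
    on the samples, each with standard deviation `bandwidth`."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Hypothetical per-task consistency scores: most tasks either always
# fail (near 0) or always succeed (near 1) across repeated runs.
scores = [0.0, 0.0, 0.1, 0.9, 1.0, 1.0, 1.0]
pdf = gaussian_kde(scores)
# The density peaks near 0 and 1 with a trough in the middle --
# the "polarized behavior" the caption refers to.
print(pdf(0.0) > pdf(0.5) and pdf(1.0) > pdf(0.5))  # True
```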
**Calibration Curve**

**Risk-Coverage (AURC)**

**Confidence Distribution** — distribution of expressed confidence values across tasks.
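The risk-coverage chart can be sketched as follows (a common construction; whether the dashboard uses exactly this definition of AURC is an assumption): tasks are sorted by confidence, descending, and at each coverage level the risk is the error rate among the tasks accepted so far. AURC is the area under that curve, so lower is better.

```python
def risk_coverage_curve(confidences, errors):
    """Sort tasks by confidence (most confident first); at coverage k/n,
    risk is the error rate among the k most-confident tasks."""
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    risks, cum_err = [], 0
    for k, i in enumerate(order, start=1):
        cum_err += errors[i]
        risks.append(cum_err / k)
    return risks

def aurc(confidences, errors):
    """Area under the risk-coverage curve, approximated as the mean risk
    over all coverage levels (assumed convention)."""
    risks = risk_coverage_curve(confidences, errors)
    return sum(risks) / len(risks)

# Hypothetical data: errors concentrate at low confidence, so selective
# prediction works well and AURC is small.
conf = [0.9, 0.8, 0.6, 0.4]
err  = [0,   0,   1,   1]
print(round(aurc(conf, err), 3))  # 0.208
```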