TAU-bench Airline
TAU-bench is a benchmark for Tool-Agent-User Interaction in Real-World Domains. TAU-bench Airline evaluates AI agents on tasks in the airline domain, such as changing flights or finding new flights.
Paper: τ-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains (Yao et al., 2024)
TAU-bench Airline Leaderboard
Column notes:
- Primary Model — the primary model used by the agent. In some cases an embedding model is used for RAG, or a secondary model such as GPT-4o for image processing. For non-OpenAI reasoning models, the reasoning token budget is set to 1,024 (low), 2,048 (medium), or 4,096 (high) tokens.
- Verified — results have been reproduced by the HAL team.
- Accuracy — confidence intervals show the min-max values across runs for agents with multiple runs.
- Cost (USD) — total API cost for running the agent on all tasks. Confidence intervals show the min-max values across runs for agents with multiple runs.
- Runs — the number of runs submitted to the leaderboard for this agent. To submit multiple evaluations, rerun the same agent under the same agent name.

Rank | Agent | Primary Model | Verified | Accuracy | Cost (USD) | Runs | Traces
---|---|---|---|---|---|---|---
1 | TAU-bench Few Shot (Pareto optimal) | Claude Opus 4 High (May 2025) | ✓ | 66.00% | $313.83 | 1 | Download
2 | | Claude Opus 4.1 High (August 2025) | ✓ | 62.00% | $298.58 | 1 | Download
3 | TAU-bench Few Shot (Pareto optimal) | o4-mini High (April 2025) | ✓ | 60.00% | $18.92 | 1 | Download
4 | | Claude-3.7 Sonnet High (February 2025) | ✓ | 60.00% | $37.23 | 1 | Download
5 | | Claude-3.7 Sonnet (February 2025) | ✓ | 56.00% | $42.11 | 1 | Download
6 | | GPT-4.1 (April 2025) | ✓ | 56.00% | $42.58 | 1 | Download
7 | | Claude Opus 4 (May 2025) | ✓ | 56.00% | $363.30 | 1 | Download
8 | | Claude Opus 4.1 (August 2025) | ✓ | 54.00% | $294.17 | 1 | Download
9 | | Claude Opus 4.1 (August 2025) | ✓ | 54.00% | $180.49 | 1 | Download
10 | | GPT-5 Medium (August 2025) | ✓ | 52.00% | $35.49 | 1 | Download
11 | | o4-mini Low (April 2025) | ✓ | 48.00% | $18.81 | 1 | Download
12 | | o3 Medium (April 2025) | ✓ | 46.00% (-2.00/+2.00) | $34.14 (-1.14/+1.14) | 2 | Download
13 | | Claude Opus 4 (May 2025) | ✓ | 44.00% | $150.15 | 1 | Download
14 | | Claude Opus 4 High (May 2025) | ✓ | 44.00% | $150.29 | 1 | Download
15 | TAU-bench Few Shot (Pareto optimal) | Gemini 2.0 Flash | ✓ | 44.00% | $4.44 | 1 | Download
16 | | Claude-3.7 Sonnet High (February 2025) | ✓ | 44.00% | $34.58 | 1 | Download
17 | | DeepSeek R1 | ✓ | 36.00% | $5.66 | 1 | Download
18 | | Claude-3.7 Sonnet (February 2025) | ✓ | 34.00% | $36.45 | 1 | Download
19 | | DeepSeek V3 | ✓ | 34.00% | $9.08 | 1 | Download
20 | | Claude Opus 4.1 High (August 2025) | ✓ | 32.00% | $140.28 | 1 | Download
21 | | GPT-5 Medium (August 2025) | ✓ | 30.00% | $52.78 | 1 | Download
22 | HAL Generalist Agent (Pareto optimal) | Gemini 2.0 Flash | ✓ | 22.00% | $2.00 | 1 | Download
23 | | o4-mini Low (April 2025) | ✓ | 22.00% | $20.16 | 1 | Download
24 | | o3 Medium (April 2025) | ✓ | 20.00% | $45.03 | 1 | Download
25 | | DeepSeek V3 | ✓ | 18.00% | $2.58 | 1 | Download
26 | | o4-mini High (April 2025) | ✓ | 18.00% | $20.57 | 1 | Download
27 | | GPT-4.1 (April 2025) | ✓ | 16.00% | $17.85 | 1 | Download
28 | | DeepSeek R1 | ✓ | 10.00% | $2.91 | 1 | Download
Accuracy vs. Cost Frontier for TAU-bench Airline
This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
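A frontier of this kind can be computed with a single sweep over agents sorted by cost: an agent is Pareto-optimal if no cheaper agent achieves equal or higher accuracy. A minimal sketch (not HAL's actual plotting code; the data points are drawn from the leaderboard above):

```python
def pareto_frontier(points):
    """Return the accuracy-vs-cost Pareto frontier.

    points: list of (cost_usd, accuracy) tuples, one per agent.
    A point is on the frontier if no other point has lower (or equal)
    cost and strictly higher accuracy.
    """
    frontier = []
    best_acc = float("-inf")
    # Sweep agents from cheapest to most expensive; keep each agent
    # that strictly improves on every cheaper agent's accuracy.
    for cost, acc in sorted(points):
        if acc > best_acc:
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# A few (cost, accuracy) pairs from the leaderboard:
agents = [(313.83, 0.66), (18.92, 0.60), (4.44, 0.44), (2.00, 0.22), (42.58, 0.56)]
print(pareto_frontier(agents))  # keeps the four Pareto-optimal agents, drops GPT-4.1
```

Run on these five points, the sweep recovers exactly the four rows marked "Pareto optimal" in the table.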
Heatmap for TAU-bench Airline
The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of reruns of the same agent in which each task was solved. The "any agent" performance indicates how saturated the benchmark is and gives a sense of overall progress.
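The per-cell value described above is a simple aggregation over reruns. A sketch of how it might be computed, assuming an illustrative flat record format of (agent, task, solved) tuples rather than HAL's actual trace schema:

```python
from collections import defaultdict

def solve_fractions(run_results):
    """Fraction of reruns in which each (agent, task) pair was solved.

    run_results: iterable of (agent, task, solved) tuples, one per rerun
    (an illustrative record format, not HAL's actual trace schema).
    """
    counts = defaultdict(lambda: [0, 0])  # (agent, task) -> [solved, total]
    for agent, task, solved in run_results:
        counts[(agent, task)][0] += bool(solved)
        counts[(agent, task)][1] += 1
    return {key: solved / total for key, (solved, total) in counts.items()}

runs = [("o3", "task_1", True), ("o3", "task_1", False), ("o3", "task_2", True)]
print(solve_fractions(runs))  # {('o3', 'task_1'): 0.5, ('o3', 'task_2'): 1.0}
```

Agents with a single run therefore produce only 0.0 or 1.0 cells; intermediate shades appear only where multiple runs exist.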
Total Completion Tokens Used per Agent
The bar chart shows the total completion tokens used by each agent, with each bar's height representing the total completion tokens used across all tasks. Secondary models, used only for RAG or image processing, usually contribute relatively few tokens in comparison.
Model Performance Over Time
Timeline showing model accuracy evolution over release dates. Each point represents the best performance achieved by that model on TAU-bench Airline.
Token Pricing Configuration
Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.
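The recomputation behind this control reduces to multiplying each model's token counts by its per-token prices. A minimal sketch, assuming a hypothetical usage/price schema (the dict layout, model name, and example prices are illustrative, not HAL's actual configuration format):

```python
def total_cost(usage, prices):
    """Recompute an agent's total API cost from raw token counts.

    usage:  {model_name: {"prompt_tokens": int, "completion_tokens": int}}
    prices: {model_name: {"prompt": usd_per_1m_tokens, "completion": usd_per_1m_tokens}}
    Both schemas are illustrative assumptions, not HAL's actual format.
    """
    cost = 0.0
    for model, tokens in usage.items():
        p = prices[model]
        cost += tokens["prompt_tokens"] / 1e6 * p["prompt"]
        cost += tokens["completion_tokens"] / 1e6 * p["completion"]
    return cost

usage = {"gpt-4.1": {"prompt_tokens": 12_000_000, "completion_tokens": 1_500_000}}
prices = {"gpt-4.1": {"prompt": 2.00, "completion": 8.00}}
print(total_cost(usage, prices))  # 12M * $2/1M + 1.5M * $8/1M = 36.0
```

Because token counts are fixed per run, changing a price here rescales that model's contribution linearly in every leaderboard cost and plot.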
Additional Resources
Getting Started
Want to evaluate your agent on TAU-bench Airline? Follow our guide to get started:
View Documentation