TAU-bench Airline

TAU-bench is a benchmark for Tool-Agent-User Interaction in Real-World Domains. TAU-bench Airline evaluates AI agents on tasks in the airline domain, such as changing an existing flight or finding a new one.

Paper: τ-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains (Yao et al., 2024)

50 tasks in the public test set
39 evaluations (3 scaffolds, 15 models)

TAU-bench Airline Leaderboard

Costs are currently calculated without accounting for caching benefits.
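To make the cost column concrete, here is a minimal Python sketch of the calculation under that assumption: every prompt token is charged at the full input rate and every completion token at the output rate, with no cache discount. The function name and the prices in the example are illustrative, not the leaderboard's actual configuration.

```python
# Minimal sketch of how a run's cost can be computed from token usage.
# Prices and token counts below are illustrative; cached prompt tokens
# are charged at the full input rate, mirroring the note above.

def run_cost_usd(prompt_tokens: int, completion_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the total cost in USD for one run of an agent."""
    return (prompt_tokens * input_price_per_m
            + completion_tokens * output_price_per_m) / 1_000_000

# Example: 2M prompt tokens and 300k completion tokens at $3 / $15 per 1M tokens.
print(round(run_cost_usd(2_000_000, 300_000, 3.0, 15.0), 2))  # 10.5
```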

| Rank | Scaffold | Primary Model | Verified Accuracy | Cost (USD) | Runs | Traces |
|---|---|---|---|---|---|---|
| 1 | TAU-bench Few Shot (Pareto optimal) | Claude Opus 4 High (May 2025) | 66.00% | $313.83 | 1 | Download |
| 2 | | Claude Opus 4.1 High (August 2025) | 62.00% | $298.58 | 1 | Download |
| 3 | TAU-bench Few Shot (Pareto optimal) | o4-mini High (April 2025) | 60.00% | $18.92 | 1 | Download |
| 4 | | Claude-3.7 Sonnet High (February 2025) | 60.00% | $37.23 | 1 | Download |
| 5 | | o4-mini High (April 2025) | 56.00% | $11.36 | 1 | Download |
| 6 | | Claude-3.7 Sonnet (February 2025) | 56.00% | $42.11 | 1 | Download |
| 7 | | GPT-4.1 (April 2025) | 56.00% | $42.58 | 1 | Download |
| 8 | | Claude Opus 4 (May 2025) | 56.00% | $363.30 | 1 | Download |
| 9 | | o3 Medium (April 2025) | 54.00% | $14.56 | 1 | Download |
| 10 | | Claude Opus 4.1 (August 2025) | 54.00% | $180.49 | 1 | Download |
| 11 | | Claude Opus 4.1 (August 2025) | 54.00% | $294.17 | 1 | Download |
| 12 | | Claude-3.7 Sonnet High (February 2025) | 52.00% | $31.94 | 1 | Download |
| 13 | | GPT-5 Medium (August 2025) | 52.00% | $35.49 | 1 | Download |
| 14 | | Claude Opus 4.1 High (August 2025) | 52.00% | $149.98 | 1 | Download |
| 15 | | Claude Opus 4.1 (August 2025) | 50.00% | $69.78 | 1 | Download |
| 16 | | GPT-5 Medium (August 2025) | 48.00% | $23.83 | 1 | Download |
| 17 | | o3 Medium (April 2025) | 46.00% (-2.00/+2.00) | $34.14 (-1.14/+1.14) | 2 | Download |
| 18 | TAU-bench Few Shot (Pareto optimal) | Gemini 2.0 Flash (February 2025) | 44.00% | $4.44 | 1 | Download |
| 19 | | DeepSeek V3 (March 2025) | 44.00% | $5.43 | 1 | Download |
| 20 | | Claude-3.7 Sonnet (February 2025) | 44.00% | $15.45 | 1 | Download |
| 21 | | Claude-3.7 Sonnet High (February 2025) | 44.00% | $34.58 | 1 | Download |
| 22 | | Claude Opus 4 (May 2025) | 44.00% | $150.15 | 1 | Download |
| 23 | | Claude Opus 4 High (May 2025) | 44.00% | $150.29 | 1 | Download |
| 24 | | o4-mini Low (April 2025) | 36.00% | $7.14 | 1 | Download |
| 25 | | GPT-4.1 (April 2025) | 36.00% | $8.18 | 1 | Download |
| 26 | | DeepSeek R1 (January 2025) | 36.00% | $13.30 | 1 | Download |
| 27 | | DeepSeek R1 (January 2025) | 36.00% | $31.75 | 1 | Download |
| 28 | | DeepSeek V3 (March 2025) | 34.00% | $30.60 | 1 | Download |
| 29 | | Claude-3.7 Sonnet (February 2025) | 34.00% | $36.45 | 1 | Download |
| 30 | | Claude Opus 4.1 High (August 2025) | 32.00% | $140.28 | 1 | Download |
| 31 | | GPT-5 Medium (August 2025) | 30.00% | $52.78 | 1 | Download |
| 32 | | Gemini 2.0 Flash High (February 2025) | 28.00% | $0.31 | 1 | Download |
| 33 | | Gemini 2.0 Flash (February 2025) | 22.00% | $2.00 | 1 | Download |
| 34 | | o4-mini Low (April 2025) | 22.00% | $20.16 | 1 | Download |
| 35 | | o3 Medium (April 2025) | 20.00% | $45.03 | 1 | Download |
| 36 | | DeepSeek V3 (March 2025) | 18.00% | $10.73 | 1 | Download |
| 37 | | o4-mini High (April 2025) | 18.00% | $20.57 | 1 | Download |
| 38 | | GPT-4.1 (April 2025) | 16.00% | $17.85 | 1 | Download |
| 39 | | DeepSeek R1 (January 2025) | 10.00% | $30.18 | 1 | Download |

Accuracy vs. Cost Frontier for TAU-bench Airline

This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
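As an illustration of how such a frontier can be computed, here is a minimal Python sketch: sort runs by cost and keep each run whose accuracy beats the best accuracy seen at any lower cost. The small `runs` list reuses a few numbers from the table above purely as sample input; it is not the plot's actual data pipeline.

```python
# Sketch of deriving a Pareto frontier from (cost, accuracy) points.

def pareto_frontier(points):
    """Return points not dominated by any cheaper, at-least-as-accurate point."""
    frontier = []
    best_acc = float("-inf")
    for cost, acc, name in sorted(points):  # ascending cost
        if acc > best_acc:                  # strictly better than anything cheaper
            frontier.append((cost, acc, name))
            best_acc = acc
    return frontier

runs = [
    (4.44, 0.44, "Gemini 2.0 Flash"),
    (18.92, 0.60, "o4-mini High"),
    (37.23, 0.60, "Claude-3.7 Sonnet High"),
    (313.83, 0.66, "Claude Opus 4 High"),
]
for cost, acc, name in pareto_frontier(runs):
    print(f"{name}: {acc:.0%} at ${cost:.2f}")
```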

Heatmap for TAU-bench Airline

The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of times a task was solved across reruns of the same agent. The "any agent" performance indicates the level of saturation of the benchmark and gives a sense of overall progress.
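A minimal sketch of the two quantities behind the heatmap, assuming per-run records with `agent`, `task`, and `solved` fields (the field names are assumptions, not the actual trace schema):

```python
from collections import defaultdict

records = [
    {"agent": "o3 Medium", "task": "task_01", "solved": True},
    {"agent": "o3 Medium", "task": "task_01", "solved": False},  # rerun of the same agent
    {"agent": "GPT-4.1",   "task": "task_01", "solved": False},
]

# Per-cell value: fraction of reruns in which an agent solved a task.
cells = defaultdict(list)
for r in records:
    cells[(r["agent"], r["task"])].append(r["solved"])
solve_rate = {key: sum(v) / len(v) for key, v in cells.items()}

# "Any agent" row: a task counts as solved if at least one run by any agent solved it.
tasks = {r["task"] for r in records}
any_agent = {t: any(r["solved"] for r in records if r["task"] == t) for t in tasks}

print(solve_rate)  # {('o3 Medium', 'task_01'): 0.5, ('GPT-4.1', 'task_01'): 0.0}
print(any_agent)   # {'task_01': True}
```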

Total Completion Tokens Used per Agent

The bar chart shows the total completion tokens used by each agent across all tasks. Secondary models usually contribute a relatively small number of tokens in comparison and are used only for RAG or image processing.
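A minimal sketch of the aggregation behind the bar chart, assuming per-run usage records with hypothetical `agent`, `role`, and `completion_tokens` fields:

```python
from collections import Counter

usage = [
    {"agent": "Claude Opus 4 High", "role": "primary",   "completion_tokens": 1_200_000},
    {"agent": "Claude Opus 4 High", "role": "secondary", "completion_tokens": 40_000},
    {"agent": "o4-mini High",       "role": "primary",   "completion_tokens": 800_000},
]

# Sum completion tokens per (agent, primary/secondary model) pair.
totals = Counter()
for u in usage:
    totals[(u["agent"], u["role"])] += u["completion_tokens"]

for (agent, role), tokens in sorted(totals.items()):
    print(f"{agent} [{role}]: {tokens:,} tokens")
```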

Model Performance Over Time

Timeline showing how model accuracy has evolved by release date. Each point represents the best performance achieved by that model on TAU-bench Airline.
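A minimal sketch of the aggregation behind the timeline: keep each model's best accuracy across its runs and place it at the model's release date. The dates and scores below are illustrative sample input only.

```python
from datetime import date

runs = [
    ("Gemini 2.0 Flash", date(2025, 2, 1), 0.22),
    ("Gemini 2.0 Flash", date(2025, 2, 1), 0.44),
    ("Claude Opus 4",    date(2025, 5, 1), 0.66),
]

# Keep the best accuracy per model.
best = {}
for model, released, acc in runs:
    if model not in best or acc > best[model][1]:
        best[model] = (released, acc)

# Print points in release-date order, as they would appear on the timeline.
for model, (released, acc) in sorted(best.items(), key=lambda kv: kv[1][0]):
    print(f"{released}: {model} -> {acc:.0%}")
```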

Token Pricing Configuration

Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.

Input and output token prices (USD per 1M tokens) can be set independently for each model:

Claude Opus 4 (May 2025) · GPT-4o (August 2024) · Claude Opus 4.1 (August 2025) · Claude-3.7 Sonnet (February 2025) · DeepSeek R1 (January 2025) · DeepSeek V3 (March 2025) · GPT-4.1 (April 2025) · GPT-5 Medium (August 2025) · Gemini 2.0 Flash (February 2025) · o3 Medium (April 2025) · o4-mini Medium (April 2025)

Additional Resources

Getting Started

Want to evaluate your agent on TAU-bench Airline? Follow our guide to get started:

View Documentation

Task Details

Browse the complete set of TAU-bench Airline tasks:

View Tasks