AssistantBench

AssistantBench evaluates AI agents on realistic, time-consuming, and automatically verifiable tasks. It consists of 214 tasks that are based on real human needs and require several minutes of human browsing.

Paper: AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks? (Yoran et al., 2024)

Total tasks: 33 (the subset evaluated on this leaderboard; the full benchmark contains 214)
Evaluations: 12 (1 scaffold, 12 models)

Key Features of AssistantBench

Realistic, Time-Consuming Web Tasks

Tasks are based on real human needs and require multiple steps (several minutes) of web browsing to solve, e.g., finding specific business information, analyzing market trends, or planning travel.

Diverse Domains

Tasks were created by everyday users, crowdworkers, and domain experts and cover a wide variety of domains.

Automatic Evaluation

Tasks were designed with closed-form, verifiable answers, allowing for automatic evaluations.
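
For illustration, here is a minimal sketch of what closed-form answer checking could look like: exact comparison (with a small tolerance) for numeric answers, and token-level F1 for short free-form strings. The function names and matching rules are assumptions made for this sketch, not the official AssistantBench scorer.

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip most punctuation, and collapse whitespace."""
    text = re.sub(r"[^\w\s.%-]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between two normalized strings."""
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_toks), overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def score_answer(pred: str, gold: str) -> float:
    """Illustrative closed-form scorer (NOT the official AssistantBench metric):
    numeric answers are compared with a 1% relative tolerance, everything else
    falls back to token-level F1."""
    try:
        p = float(normalize(pred).rstrip("% "))
        g = float(normalize(gold).rstrip("% "))
        return 1.0 if abs(p - g) <= 0.01 * max(abs(g), 1e-9) else 0.0
    except ValueError:
        return token_f1(pred, gold)

print(score_answer("Tel Aviv, Israel", "tel aviv israel"))  # 1.0
print(score_answer("38.81%", "38.81"))                      # 1.0
```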

AssistantBench Leaderboard

| Rank | Scaffold | Model | Verified Accuracy | Cost (USD) | Runs | Traces |
|------|----------|-------|-------------------|------------|------|--------|
| 1 | Browser-Use* | o3 Medium (April 2025) | 38.81% | $15.15 | 1 | Download |
| 2 | Browser-Use | GPT-5 Medium (August 2025) | 35.23% | $41.69 | 1 | Download |
| 3 | Browser-Use* | o4-mini Low (April 2025) | 28.05% | $9.22 | 1 | Download |
| 4 | Browser-Use | o4-mini High (April 2025) | 23.84% | $16.39 | 1 | Download |
| 5 | Browser-Use | GPT-4.1 (April 2025) | 17.39% | $14.15 | 1 | Download |
| 6 | Browser-Use | Claude-3.7 Sonnet (February 2025) | 16.69% | $56.00 | 1 | Download |
| 7 | Browser-Use | Claude Opus 4.1 High (August 2025) | 13.75% | $779.72 | 1 | Download |
| 8 | Browser-Use | Claude-3.7 Sonnet High (February 2025) | 13.08% | $16.13 | 1 | Download |
| 9 | Browser-Use | DeepSeek R1 (May 2025) | 8.75% | $18.18 | 1 | Download |
| 10 | Browser-Use | Claude Opus 4.1 (August 2025) | 7.26% | $385.43 | 1 | Download |
| 11 | Browser-Use | Gemini 2.0 Flash (February 2025) | 2.62% | $2.18 | 1 | Download |
| 12 | Browser-Use | DeepSeek V3 (March 2025) | 2.03% | $12.66 | 1 | Download |

*Marked "Pareto optimal" on the accuracy-vs-cost frontier. All twelve evaluations use the Browser-Use scaffold (the leaderboard lists 1 scaffold and 12 models).

Accuracy vs. Cost Frontier for AssistantBench

This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
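
The dashed frontier can be reproduced from the (cost, accuracy) pairs in the leaderboard by keeping every run whose accuracy is not matched by any cheaper run. A minimal sketch (the function name is mine; the data points are copied from the table above):

```python
def pareto_frontier(points):
    """Accuracy-vs-cost Pareto frontier: keep each run whose accuracy is not
    matched or beaten by any cheaper run."""
    frontier, best_acc = [], float("-inf")
    for cost, acc in sorted(points):   # cheapest first
        if acc > best_acc:             # strictly improves on everything cheaper
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# (cost in USD, verified accuracy) pairs copied from the leaderboard above
runs = [
    (15.15, 0.3881), (41.69, 0.3523), (9.22, 0.2805), (16.39, 0.2384),
    (14.15, 0.1739), (56.00, 0.1669), (779.72, 0.1375), (16.13, 0.1308),
    (18.18, 0.0875), (385.43, 0.0726), (2.18, 0.0262), (12.66, 0.0203),
]
print(pareto_frontier(runs))
```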

Total Completion Tokens Used per Agent

The bar chart shows the total completion tokens used by each agent, with the height of each bar representing the total number of completion tokens used across all tasks. Secondary models usually contribute a relatively small number of tokens in comparison and are used only for RAG or image processing.
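
As an illustration of the aggregation behind the chart, the sketch below sums completion tokens per agent, split into primary and secondary model usage. The record schema (`agent`, `model_role`, `completion_tokens`) is hypothetical, not the actual trace format.

```python
from collections import defaultdict

# Hypothetical per-task usage records; real traces may use a different schema.
records = [
    {"agent": "o3 Medium", "model_role": "primary",   "completion_tokens": 1200},
    {"agent": "o3 Medium", "model_role": "secondary", "completion_tokens": 80},
    {"agent": "GPT-5 Medium", "model_role": "primary", "completion_tokens": 950},
]

totals = defaultdict(lambda: defaultdict(int))
for rec in records:
    totals[rec["agent"]][rec["model_role"]] += rec["completion_tokens"]

for agent, by_role in totals.items():
    print(f"{agent}: {sum(by_role.values())} completion tokens {dict(by_role)}")
```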

Model Performance Over Time

Track how model accuracy has evolved over time relative to each model's release date. Each point represents the best performance achieved by that model on AssistantBench.

Token Pricing Configuration

Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.

Pricing entries are provided for Claude Opus 4.1 (August 2025), Claude-3.7 Sonnet (February 2025), DeepSeek R1 (May 2025), DeepSeek V3 (March 2025), GPT-4.1 (April 2025), GPT-5 Medium (August 2025), Gemini 2.0 Flash (February 2025), o3 Medium (April 2025), and o4-mini (April 2025). Each entry has an Active toggle and editable input and output prices in USD per 1M tokens.
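
The recalculated costs follow the usual token arithmetic: prompt and completion tokens multiplied by their respective per-1M-token prices. A minimal sketch with placeholder numbers (not the page's defaults):

```python
def run_cost(prompt_tokens: int, completion_tokens: int,
             input_price_per_1m: float, output_price_per_1m: float) -> float:
    """Total USD cost of a run from token counts and per-1M-token prices."""
    return (prompt_tokens * input_price_per_1m
            + completion_tokens * output_price_per_1m) / 1_000_000

# Placeholder numbers, not the page's defaults: 4.2M prompt tokens and
# 0.9M completion tokens priced at $2 / $8 per 1M tokens.
print(f"${run_cost(4_200_000, 900_000, 2.00, 8.00):.2f}")  # $15.60
```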

Additional Resources

Getting Started

Want to evaluate your agent on AssistantBench? Follow our comprehensive guide to get started:

View Documentation

Task Details

Browse the complete list of AssistantBench tasks on HuggingFace:

View Tasks
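
For example, the tasks can be loaded with the Hugging Face `datasets` library. The dataset identifier, split, and field names below are assumptions based on the linked card and should be checked there.

```python
from datasets import load_dataset

# Dataset id, split, and field names assumed from the HuggingFace card; verify there.
tasks = load_dataset("AssistantBench/AssistantBench", split="validation")

print(len(tasks))             # number of tasks in the split
example = tasks[0]
print(example["task"])        # natural-language task description (field name assumed)
print(example.get("answer"))  # gold answer, if exposed for this split (field name assumed)
```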