ScienceAgentBench
ScienceAgentBench is a benchmark for rigorously evaluating the ability of language agents to conduct data-driven scientific discovery.
Key Features of ScienceAgentBench
Real-World Scientific Tasks
All tasks are sourced from 44 peer-reviewed publications.
Expert Validation
Each of the 102 tasks was validated by nine subject matter experts (senior PhD students and professors) to ensure quality.
Rigorous and Fair Evaluation
Evaluation uses a diverse set of metrics, and effective strategies are employed to mitigate data contamination and to ensure annotation quality and scientific plausibility.
ScienceAgentBench Leaderboard
Rank | Scaffold | Models | Verified | Accuracy | Cost (USD) | Runs | Traces
---|---|---|---|---|---|---|---
1 | SAB Self-Debug (Pareto optimal) | o3 Medium (April 2025) | ✓ | 33.33% | $11.69 | 1 | Download
2 | | Claude-3.7 Sonnet High (February 2025) | ✓ | 30.39% | $11.74 | 1 | Download
3 | | GPT-5 Medium (August 2025) | ✓ | 30.39% | $18.26 | 1 | Download
4 | SAB Self-Debug (Pareto optimal) | o4-mini Low (April 2025) | ✓ | 27.45% | $3.95 | 1 | Download
5 | | o4-mini High (April 2025) | ✓ | 27.45% | $11.18 | 1 | Download
6 | | Claude Opus 4.1 (August 2025) | ✓ | 27.45% | $33.37 | 1 | Download
7 | | Claude Opus 4.1 High (August 2025) | ✓ | 26.47% | $33.75 | 1 | Download
8 | | GPT-4.1 (April 2025) | ✓ | 24.51% | $7.42 | 1 | Download
9 | | DeepSeek R1 (January 2025) | ✓ | 23.53% | $18.24 | 1 | Download
10 | | Claude-3.7 Sonnet (February 2025) | ✓ | 22.55% | $7.12 | 1 | Download
11 | | o4-mini High (April 2025) | ✓ | 21.57% | $76.30 | 1 | Download
12 | | o4-mini Low (April 2025) | ✓ | 19.61% | $77.32 | 1 | Download
13 | | Claude-3.7 Sonnet High (February 2025) | ✓ | 17.65% | $48.28 | 1 | Download
14 | | DeepSeek V3 (March 2025) | ✓ | 15.69% | $2.09 | 1 | Download
15 | SAB Self-Debug (Pareto optimal) | Gemini 2.0 Flash (February 2025) | ✓ | 12.75% | $0.19 | 1 | Download
16 | | Claude-3.7 Sonnet (February 2025) | ✓ | 10.78% | $41.22 | 1 | Download
17 | | o3 Medium (April 2025) | ✓ | 9.80% | $31.08 | 1 | Download
18 | | GPT-4.1 (April 2025) | ✓ | 6.86% | $68.95 | 1 | Download
19 | | DeepSeek V3 (March 2025) | ✓ | 0.98% | $55.73 | 1 | Download

Column notes:
- Verified: results have been reproduced by the HAL team.
- Accuracy: confidence intervals show the min-max values across runs for agents with multiple runs.
- Cost (USD): total API cost for running the agent on all tasks; confidence intervals show the min-max values across runs for agents with multiple runs.
- Runs: the number of runs submitted to the leaderboard for this agent; to submit multiple evaluations, rerun the same agent under the same agent name.
Accuracy vs. Cost Frontier for ScienceAgentBench
This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
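As a rough illustration of how such a frontier can be computed, here is a minimal sketch over (cost, accuracy) pairs; it is not the leaderboard's actual code, and the example values are taken from the table above:

```python
# Minimal sketch: keep only the runs that no cheaper run beats on accuracy.
def pareto_frontier(points):
    """points: list of (cost, accuracy) tuples; returns the non-dominated ones."""
    frontier = []
    best_accuracy = float("-inf")
    # Sweep from cheapest to most expensive and keep a point only if it
    # improves on the best accuracy seen so far at lower cost.
    for cost, accuracy in sorted(points):
        if accuracy > best_accuracy:
            frontier.append((cost, accuracy))
            best_accuracy = accuracy
    return frontier

runs = [(11.69, 0.3333), (11.74, 0.3039), (3.95, 0.2745), (0.19, 0.1275)]
print(pareto_frontier(runs))  # [(0.19, 0.1275), (3.95, 0.2745), (11.69, 0.3333)]
```

The three surviving points correspond to the runs marked Pareto optimal in the leaderboard above; the fourth point is dominated by a cheaper, more accurate run and so falls below the frontier.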
Total Completion Tokens Used per Agent
The bar chart shows the total completion tokens used by each agent, with the height of each bar representing the total number of completion tokens used across all tasks. Secondary models usually contribute a relatively small number of tokens in comparison and are used only for RAG or image processing.
Model Performance Over Time
Track how model accuracy has evolved since each model's release date. Each point represents the best performance achieved by that model on ScienceAgentBench.
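A minimal sketch of how those per-model best scores could be assembled, assuming a flat list of (model, release date, accuracy) records; the record layout is hypothetical, and the values come from the leaderboard above:

```python
# Reduce a list of runs to the best accuracy per model, keyed by release date.
runs = [
    ("o3 Medium", "2025-04", 0.3333),
    ("o3 Medium", "2025-04", 0.0980),
    ("GPT-5 Medium", "2025-08", 0.3039),
]

best = {}  # model -> (release_date, best_accuracy)
for model, released, accuracy in runs:
    if model not in best or accuracy > best[model][1]:
        best[model] = (released, accuracy)

# Print one point per model, ordered by release date.
for model, (released, accuracy) in sorted(best.items(), key=lambda kv: kv[1][0]):
    print(f"{released}  {model}: {accuracy:.2%}")
```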
Token Pricing Configuration
Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.
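A minimal sketch of the recalculation behind this, assuming prices are expressed in USD per million tokens; the price values and field names are placeholders, not the leaderboard's configured defaults:

```python
# Recompute an agent's total cost from its token counts and adjustable prices.
# Prices are USD per 1M tokens; all numbers here are placeholders.
PRICES = {"prompt": 2.00, "completion": 8.00}

def total_cost(prompt_tokens: int, completion_tokens: int, prices=PRICES) -> float:
    """Total API cost in USD for one agent across all tasks."""
    return (prompt_tokens * prices["prompt"]
            + completion_tokens * prices["completion"]) / 1_000_000

# Example: 3M prompt tokens and 0.5M completion tokens at the prices above.
print(f"${total_cost(3_000_000, 500_000):.2f}")  # $10.00
```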
Additional Resources
Getting Started
Want to evaluate your agent on ScienceAgentBench? Follow our comprehensive guide to get started:
View Documentation
Task Details
Browse the complete list of ScienceAgentBench tasks, including problem descriptions and test cases:
View Tasks