CORE-Bench Hard

CORE-Bench evaluates the ability of agents to computationally reproduce the results of published scientific papers. In CORE-Bench Hard, the agent is given only the codebase of the paper and must install all libraries and dependencies, run the code, and read through the output and figures to answer questions about the paper. Of the benchmark's three difficulty levels, Hard is the most realistic and challenging, since it comes closest to fully reproducing a paper.
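To make the setup concrete, below is a minimal, hypothetical sketch of the loop an agent must automate at this level. The entry point `main.py` and the `requirements.txt` file are illustrative assumptions; real codebases vary widely, and this is not the CORE-Agent implementation.

```python
import subprocess
from pathlib import Path

def reproduce(repo_dir: Path) -> str:
    """Hypothetical reproduction loop: install dependencies, run the
    code, and collect output for answering the task's questions."""
    # Install declared dependencies if a requirements file exists; real
    # repos may instead need conda envs, Makefiles, or R packages.
    requirements = repo_dir / "requirements.txt"
    if requirements.exists():
        subprocess.run(["pip", "install", "-r", str(requirements)], check=True)

    # Run the paper's entry point (assumed name; an agent must discover
    # it by reading the README and scripts).
    result = subprocess.run(
        ["python", "main.py"],
        cwd=repo_dir, capture_output=True, text=True, timeout=3600,
    )

    # The agent then reads stdout plus any generated figures and tables
    # to answer the benchmark's questions about the paper's results.
    return result.stdout
```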

Paper: CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark (Siegel et al., 2024)

90 scientific papers · 45 papers in the public test set · 45 evaluations (2 scaffolds, 25 models)

Key Features of CORE-Bench

Scientific Papers

Tasks are based on actual published scientific papers, requiring agents to understand and reproduce real research.

Comprehensive Evaluation

Tests multiple capabilities: code understanding, dependency management, and scientific result interpretation.

Three Difficulty Levels

Tasks range from interpreting results (Easy) to full paper reproduction (Hard), allowing granular capability assessment.

CORE-Bench Hard Leaderboard

Costs are currently calculated without accounting for caching benefits.

Update: Opus 4.5, run with an updated scaffold that uses Claude Code, drastically outperforms the CORE-Agent scaffold used here, especially after fixing a few grading errors via manual scoring. We have now declared CORE-Bench solved.

| Rank | Scaffold | Primary Model | Verified Accuracy | Cost (USD) | Runs | Traces |
|---|---|---|---|---|---|---|
| 1 | CORE-Agent (Pareto optimal) | Claude Opus 4.1 (August 2025) | 51.11% | $412.42 | 1 | Download |
| 2 | CORE-Agent (Pareto optimal) | Claude Sonnet 4.5 High (September 2025) | 44.44% | $92.34 | 1 | Download |
| 3 | | Claude Opus 4.5 High (November 2025) | 42.22% | $152.66 | 1 | Download |
| 4 | | Claude Opus 4.5 (November 2025) | 42.22% | $168.99 | 1 | Download |
| 5 | | Claude Opus 4.1 High (August 2025) | 42.22% | $509.95 | 1 | Download |
| 6 | | Gemini 3 Pro Preview High (November 2025) | 40.00% | $86.60 | 1 | Download |
| 7 | | Claude-3.7 Sonnet High (February 2025) | 37.78% | $66.15 | 1 | Download |
| 8 | | Claude Sonnet 4.5 (September 2025) | 37.78% | $97.15 | 1 | Download |
| 9 | HAL Generalist Agent (Pareto optimal) | o4-mini High (April 2025) | 35.56% | $45.37 | 1 | Download |
| 10 | | Claude-3.7 Sonnet (February 2025) | 35.56% | $73.04 | 1 | Download |
| 11 | | Gemini 3 Pro Preview High (November 2025) | 35.56% | $101.27 | 1 | Download |
| 12 | | Claude Opus 4.1 (August 2025) | 35.56% | $375.11 | 1 | Download |
| 13 | | Claude Sonnet 4.5 (September 2025) | 33.33% | $85.19 | 1 | Download |
| 14 | | Claude Sonnet 4 High (May 2025) | 33.33% | $100.48 | 1 | Download |
| 15 | | GPT-4.1 (April 2025) | 33.33% | $107.36 | 1 | Download |
| 16 | | Claude Opus 4.5 (November 2025) | 33.33% | $127.41 | 1 | Download |
| 17 | | Claude Opus 4.1 High (August 2025) | 33.33% | $358.47 | 1 | Download |
| 18 | | Claude-3.7 Sonnet (February 2025) | 31.11% | $56.64 | 1 | Download |
| 19 | | Claude Opus 4.5 High (November 2025) | 31.11% | $112.38 | 1 | Download |
| 20 | | Claude Sonnet 4 (May 2025) | 28.89% | $50.27 | 1 | Download |
| 21 | | Claude Sonnet 4.5 High (September 2025) | 28.89% | $87.77 | 1 | Download |
| 22 | | GPT-5 Medium (August 2025) | 26.67% | $31.76 | 1 | Download |
| 23 | | o4-mini High (April 2025) | 26.67% | $61.35 | 1 | Download |
| 24 | | Claude-3.7 Sonnet High (February 2025) | 24.44% | $72.47 | 1 | Download |
| 25 | | o3 Medium (April 2025) | 24.44% | $120.47 | 1 | Download |
| 26 | | GPT-4.1 (April 2025) | 22.22% | $58.32 | 1 | Download |
| 27 | | o3 Medium (April 2025) | 22.22% | $88.34 | 1 | Download |
| 28 | | Gemini 2.5 Pro Preview (March 2025) | 22.22% | $182.34 | 1 | Download |
| 29 | CORE-Agent (Pareto optimal) | DeepSeek V3.1 (August 2025) | 20.00% | $12.55 | 1 | Download |
| 30 | | DeepSeek V3 (March 2025) | 17.78% | $25.26 | 1 | Download |
| 31 | | o4-mini Low (April 2025) | 17.78% | $31.79 | 1 | Download |
| 32 | | o4-mini Low (April 2025) | 15.56% | $22.50 | 1 | Download |
| 33 | | GPT-OSS-120B (August 2025) | 11.11% | $4.21 | 1 | Download |
| 34 | | GPT-OSS-120B High (August 2025) | 11.11% | $4.21 | 1 | Download |
| 35 | | Gemini 2.0 Flash (February 2025) | 11.11% | $12.46 | 1 | Download |
| 36 | | GPT-5 Medium (August 2025) | 11.11% | $29.75 | 1 | Download |
| 37 | | Claude Haiku 4.5 (October 2025) | 11.11% | $43.93 | 1 | Download |
| 38 | HAL Generalist Agent (Pareto optimal) | GPT-OSS-120B High (August 2025) | 8.89% | $2.05 | 1 | Download |
| 39 | | GPT-OSS-120B (August 2025) | 8.89% | $2.79 | 1 | Download |
| 40 | | DeepSeek V3 (March 2025) | 8.89% | $4.69 | 1 | Download |
| 41 | | DeepSeek R1 (May 2025) | 8.89% | $7.77 | 1 | Download |
| 42 | | DeepSeek R1 (January 2025) | 6.67% (-2.22/+2.22) | $81.11 (-46.45/+46.45) | 2 | Download |
| 43 | | DeepSeek R1 (January 2025) | 4.45% (-2.22/+2.22) | $24.95 (-11.07/+22.15) | 2 | Download |
| 44 | | Gemini 2.0 Flash (February 2025) | 4.44% | $7.06 | 1 | Download |
| 45 | | Gemini 2.5 Pro Preview (March 2025) | 4.44% | $30.38 | 1 | Download |

Accuracy vs. Cost Frontier for CORE-Bench

This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
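The dashed frontier can be recovered directly from the (cost, accuracy) pairs in the table above. A minimal sketch, assuming a run is on the frontier when no cheaper run reaches its accuracy:

```python
def pareto_frontier(runs):
    """Return the runs on the accuracy-vs-cost Pareto frontier.

    runs: iterable of (cost_usd, accuracy) pairs. A run is kept when
    no cheaper (or equally cheap) run achieves at least its accuracy.
    """
    frontier, best_acc = [], float("-inf")
    # Cheapest first; break cost ties in favor of higher accuracy.
    for cost, acc in sorted(runs, key=lambda r: (r[0], -r[1])):
        if acc > best_acc:  # strictly improves on everything cheaper
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# (cost $, verified accuracy %) for four runs from the table above:
runs = [(412.42, 51.11), (92.34, 44.44), (152.66, 42.22), (45.37, 35.56)]
print(pareto_frontier(runs))
# -> [(45.37, 35.56), (92.34, 44.44), (412.42, 51.11)]
```

Among these four runs, the three frontier points match the entries badged "Pareto optimal" in the leaderboard.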

Heatmap for CORE-Bench

The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of times a task was solved across reruns of the same agent. The "any agent" row indicates how saturated the benchmark is and gives a sense of overall progress.
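A sketch of how each cell and the "any agent" row could be derived, assuming per-rerun 0/1 outcomes (the data layout here is illustrative, not HAL's internal format):

```python
# results[agent][task] -> list of 0/1 outcomes across reruns (layout assumed)
results = {
    "agent_a": {"task_1": [1, 1], "task_2": [0, 1]},
    "agent_b": {"task_1": [0, 0], "task_2": [1, 1]},
}
tasks = sorted({t for per_task in results.values() for t in per_task})

# Heatmap cell: fraction of reruns in which the agent solved the task.
heatmap = {
    agent: {t: sum(per_task[t]) / len(per_task[t]) for t in tasks}
    for agent, per_task in results.items()
}

# "Any agent": a task counts as solved if any agent solved it in any rerun;
# the fraction of such tasks measures how saturated the benchmark is.
any_agent = {
    t: int(any(1 in per_task[t] for per_task in results.values())) for t in tasks
}
saturation = sum(any_agent.values()) / len(tasks)
print(heatmap, any_agent, saturation)
```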

Total Completion Tokens Used per Agent

The bar chart shows the total completion tokens each agent used across all tasks. Secondary models, which are used only for RAG or image processing, usually contribute a comparatively small number of tokens.
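The aggregation behind the bars might look like the following sketch; the record fields are hypothetical:

```python
# Per-task usage records; the field names here are hypothetical.
usage = [
    {"agent": "CORE-Agent (Claude Opus 4.1)", "model_role": "primary",
     "completion_tokens": 120_000},
    {"agent": "CORE-Agent (Claude Opus 4.1)", "model_role": "secondary",
     "completion_tokens": 4_000},
]

totals: dict[str, dict[str, int]] = {}
for rec in usage:
    per_role = totals.setdefault(rec["agent"], {"primary": 0, "secondary": 0})
    per_role[rec["model_role"]] += rec["completion_tokens"]

# Bar height = primary + secondary; the secondary slice (RAG / image
# processing) is typically small relative to the primary model's share.
for agent, per_role in totals.items():
    print(agent, per_role, "total:", sum(per_role.values()))
```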

Model Performance Over Time

Track how model accuracy has evolved since each model's release date. Each point represents the best performance that model has achieved on CORE-Bench Hard.
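Each plotted point is a per-model maximum; a small sketch of that reduction, using accuracy values from the leaderboard above (release months assumed from the model names):

```python
# (model, release month, verified accuracy %) -- accuracies from the table above
runs = [
    ("Claude Opus 4.1 (August 2025)", "2025-08", 51.11),
    ("Claude Opus 4.1 (August 2025)", "2025-08", 35.56),
    ("o4-mini High (April 2025)", "2025-04", 35.56),
    ("o4-mini High (April 2025)", "2025-04", 26.67),
]

best: dict[str, tuple[str, float]] = {}
for model, released, acc in runs:
    # Keep only the best run per model; the plot uses (release date, best).
    if model not in best or acc > best[model][1]:
        best[model] = (released, acc)

for model, (released, acc) in sorted(best.items(), key=lambda kv: kv[1][0]):
    print(released, model, f"{acc:.2f}%")
```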

Token Pricing Configuration

Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.
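The recomputation these controls drive is simple arithmetic; a minimal sketch (prices shown are placeholders, not official rates), which also makes explicit that caching is ignored, per the note above:

```python
def run_cost(input_tokens: int, output_tokens: int,
             input_price: float, output_price: float) -> float:
    """Total cost in USD, with prices given per 1M tokens.

    Matches the leaderboard's current method: caching is ignored, so
    cached input tokens are billed at the full input price.
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1e6

# Placeholder prices of $3 (input) and $15 (output) per 1M tokens:
print(run_cost(input_tokens=2_000_000, output_tokens=500_000,
               input_price=3.0, output_price=15.0))  # -> 13.5
```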

Per-model price controls (input and output, in $ per 1M tokens) are available for: Claude Haiku 4.5, Claude Opus 4.1, GPT-4o, Claude Opus 4.5, Claude Sonnet 4, Claude Sonnet 4.5, Claude-3.7 Sonnet, DeepSeek R1, DeepSeek V3, DeepSeek V3.1, GPT-4.1, GPT-5, GPT-OSS-120B, Gemini 2.0 Flash, Gemini 2.5 Pro Preview, Gemini 3 Pro Preview, o3, and o4-mini.

Additional Resources

Getting Started

Want to evaluate your agent on CORE-Bench? Follow our comprehensive guide to get started:

View Documentation

Task Details

Browse the complete list of CORE-Bench tasks and their requirements:

View Tasks