CORE-Bench Hard

CORE-Bench evaluates the ability of agents to computationally reproduce the results of published scientific papers. In CORE-Bench Hard, the agent is given only the codebase of the paper; it must install all libraries and dependencies, run the code, and read through the outputs and figures to answer questions about the paper. This setting is the closest to fully reproducing a paper, making it the most realistic and most challenging of the benchmark's difficulty levels.

Paper: CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark (Siegel et al., 2024)

90 scientific papers
45 papers in the public test set
7 agents evaluated

Key Features of CORE-Bench

Scientific Papers

Tasks are based on actual published scientific papers, requiring agents to understand and reproduce real research.

Comprehensive Evaluation

Tests multiple capabilities: code understanding, dependency management, and scientific result interpretation.

Three Difficulty Levels

Tasks range from interpreting results (Easy) to full paper reproduction (Hard), allowing granular capability assessment.

CORE-Bench Hard Leaderboard

Rank  Model                        Accuracy               Cost (USD)             Runs
1     claude-3-5-sonnet-20241022   37.78%                 $103.46                1
2     o1-2024-12-17 (med.)         33.33%                 $664.33                1
3     o1-mini-2024-09-12           25.55% (-3.33/+3.34)   $92.11 (-0.00/+0.00)   2
4     gpt-4o-2024-05-13            19.45% (-1.66/+0.55)   $134.43 (-5.20/+6.03)  4
5     gpt-4o-mini-2024-07-18       14.45% (-1.11/+1.12)   $30.29 (-6.52/+7.20)   4
6     gpt-4o-2024-05-13            4.44%                  $139.84                1
7     gpt-4o-mini-2024-07-18       0.00%                  $21.19                 1

Execution traces for each run are available for download.
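The parenthesized intervals next to Accuracy and Cost read as offsets from the mean down to the minimum and up to the maximum across reruns. A minimal Python sketch of that aggregation; the function name and per-run values here are invented for illustration, not taken from the actual run logs:

```python
def accuracy_with_range(run_accuracies: list[float]) -> tuple[float, float, float]:
    """Return (mean, minus_err, plus_err) for a set of per-run accuracies.

    The offsets reproduce the leaderboard's "mean (-x/+y)" notation, where
    x and y are the distances from the mean to the min and max run.
    """
    mean = sum(run_accuracies) / len(run_accuracies)
    return mean, mean - min(run_accuracies), max(run_accuracies) - mean

# e.g. two hypothetical runs at 22.22% and 28.89% give roughly 25.55% (-3.33/+3.34)
print(accuracy_with_range([22.22, 28.89]))
```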

Accuracy vs. Cost Frontier for CORE-Bench

This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
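The frontier follows from cost/accuracy dominance over the (cost, accuracy) pairs in the table. A minimal sketch of one standard way to compute it (the function name and the strict-dominance convention are our own choices):

```python
def pareto_frontier(points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Return the (cost, accuracy) points not dominated by any other point.

    A point dominates another if it costs no more and is at least as
    accurate, with at least one of the two comparisons strict.
    """
    # Sort by cost ascending, breaking ties by higher accuracy first.
    ordered = sorted(points, key=lambda p: (p[0], -p[1]))
    frontier, best_accuracy = [], float("-inf")
    for cost, accuracy in ordered:
        if accuracy > best_accuracy:   # strictly better than anything cheaper
            frontier.append((cost, accuracy))
            best_accuracy = accuracy
    return frontier

# The (cost, accuracy) pairs from the leaderboard above:
runs = [(103.46, 37.78), (664.33, 33.33), (92.11, 25.55), (134.43, 19.45),
        (30.29, 14.45), (139.84, 4.44), (21.19, 0.00)]
print(pareto_frontier(runs))
# [(21.19, 0.0), (30.29, 14.45), (92.11, 25.55), (103.46, 37.78)]
```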

Heatmap for CORE-Bench

The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of reruns of the same agent in which each task was solved. The "any agent" performance indicates how saturated the benchmark is and gives a sense of overall progress.
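As a sketch of how such a heatmap can be aggregated from raw run records (the record layout, agent names, and task IDs below are hypothetical, not the benchmark's actual log schema):

```python
from collections import defaultdict

# Hypothetical per-run records as (agent, task_id, solved) triples.
runs = [
    ("o1-mini-2024-09-12", "capsule-001", True),
    ("o1-mini-2024-09-12", "capsule-001", False),
    ("gpt-4o-2024-05-13",  "capsule-001", True),
    ("gpt-4o-2024-05-13",  "capsule-002", False),
]

# Each heatmap cell: fraction of reruns in which the agent solved the task.
cells = defaultdict(lambda: [0, 0])   # (agent, task) -> [solved, total]
for agent, task, solved in runs:
    cells[(agent, task)][0] += int(solved)
    cells[(agent, task)][1] += 1
heatmap = {key: solved / total for key, (solved, total) in cells.items()}

# "Any agent" row: a task counts as solved if any agent solved it in any run.
any_agent = defaultdict(bool)
for _, task, solved in runs:
    any_agent[task] |= solved

print(heatmap)          # {('o1-mini-...', 'capsule-001'): 0.5, ...}
print(dict(any_agent))  # {'capsule-001': True, 'capsule-002': False}
```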

Additional Resources

Getting Started

Want to evaluate your agent on CORE-Bench? Follow our comprehensive guide to get started:

View Documentation

Task Details

Browse the complete list of CORE-Bench tasks and their requirements:

View Tasks