CORE-Bench Easy
CORE-Bench evaluates the ability of agents to computationally reproduce the results of published scientific papers. In CORE-Bench Easy, the agent is given the output of code that has already been executed and must answer questions about that output without running any code itself. To answer the questions, agents must navigate the terminal output as well as the files and figures generated by the code.
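The sketch below illustrates the shape of an Easy-split agent: since the code has already been run, the agent only reads the generated artifacts and asks a model to answer from them. This is not the official CORE-Bench or HAL harness API; the directory layout, helper names, and prompt format are assumptions for illustration.

```python
# Minimal illustrative sketch of a CORE-Bench Easy-style agent (not the official harness API).
# Assumption: the pre-generated outputs (terminal logs, result files) live under `output_dir`.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def collect_outputs(output_dir: str, max_chars: int = 20_000) -> str:
    """Concatenate text-like artifacts produced by the paper's code (logs, CSVs, JSON)."""
    chunks = []
    for path in sorted(Path(output_dir).rglob("*")):
        if path.suffix in {".txt", ".log", ".csv", ".json", ".md"}:
            chunks.append(f"--- {path.name} ---\n{path.read_text(errors='ignore')}")
    # Truncate so the combined context stays within the model's context window.
    return "\n".join(chunks)[:max_chars]

def answer_question(output_dir: str, question: str, model: str = "gpt-4o-2024-05-13") -> str:
    """Ask the model to answer one reproduction question using only the captured outputs."""
    context = collect_outputs(output_dir)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer strictly from the provided experiment outputs."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Figures would need a separate vision-capable path; this sketch only covers text artifacts.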
Key Features of CORE-Bench
Scientific Papers
Tasks are based on actual published scientific papers, requiring agents to understand and reproduce real research.
Comprehensive Evaluation
Tests multiple capabilities: code understanding, dependency management, and scientific result interpretation.
Three Difficulty Levels
Tasks range from interpreting results (Easy) to full paper reproduction (Hard), allowing granular capability assessment.
CORE-Bench Easy Leaderboard
Rank | Agent / Models | Verified | Accuracy | Cost (USD) | Runs | Traces
---|---|---|---|---|---|---
1 | gpt-4o-2024-05-13 | ✓ | 58.52% (-0.74/+1.48) | $28.83 (-4.46/+3.83) | 3 | Download
2 | gpt-4o-mini-2024-07-18 | ✓ | 42.22% (-6.66/+6.67) | $2.00 (-1.44/+2.77) | 3 | Download
3 | gpt-4o-2024-05-13 | ✓ | 33.33% | $24.66 | 1 | Download
4 | gpt-4o-mini-2024-07-18 | ✓ | 6.67% | $0.33 | 1 | Download

Column notes:
- Verified: results have been reproduced by the HAL team.
- Accuracy: confidence intervals show the min-max values across runs for agents with multiple runs.
- Cost (USD): total API cost for running the agent on all tasks; confidence intervals likewise show min-max values across runs.
- Runs: number of runs for this agent submitted to the leaderboard. To submit multiple evaluations, rerun the same agent with the same agent name.
Accuracy vs. Cost Frontier for CORE-Bench
This plot shows the relationship between an agent's accuracy and its total API cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off between the two, and the error bars indicate min-max values across runs.
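As a rough sketch (not the HAL plotting code), the frontier can be computed by scanning agents from cheapest to most expensive and keeping only strict accuracy improvements. The example call uses the (cost, accuracy) pairs from the leaderboard table above.

```python
# Illustrative sketch of computing a cost-accuracy Pareto frontier (not the HAL implementation).
def pareto_frontier(points):
    """Return the points not dominated by a cheaper, at-least-as-accurate point."""
    frontier = []
    best_accuracy = float("-inf")
    for cost, accuracy in sorted(points):   # scan from cheapest to most expensive
        if accuracy > best_accuracy:        # keep only strict accuracy improvements
            frontier.append((cost, accuracy))
            best_accuracy = accuracy
    return frontier

# Using the leaderboard entries above as (cost in USD, accuracy in %):
# pareto_frontier([(0.33, 6.67), (2.00, 42.22), (24.66, 33.33), (28.83, 58.52)])
# -> [(0.33, 6.67), (2.00, 42.22), (28.83, 58.52)]
```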
Heatmap for CORE-Bench
The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of reruns of the same agent in which each task was solved. The "any agent" row indicates how close the benchmark is to saturation and gives a sense of overall progress.
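The two quantities shown in the heatmap can be made concrete with a short sketch. This is not the HAL plotting code; the `results[agent][task]` structure (a list of per-rerun booleans) is an assumption for illustration.

```python
# Illustrative sketch of the heatmap quantities (assumed data layout, not the HAL code).
def solve_fractions(results: dict[str, dict[str, list[bool]]]) -> dict[str, dict[str, float]]:
    """Fraction of reruns in which each agent solved each task (the heatmap color scale)."""
    return {
        agent: {task: sum(runs) / len(runs) for task, runs in tasks.items()}
        for agent, tasks in results.items()
    }

def any_agent_rate(results: dict[str, dict[str, list[bool]]]) -> float:
    """Share of tasks solved at least once by any agent: a rough measure of benchmark saturation."""
    all_tasks = {task for agent_tasks in results.values() for task in agent_tasks}
    solved = {
        task
        for agent_tasks in results.values()
        for task, runs in agent_tasks.items()
        if any(runs)
    }
    return len(solved) / len(all_tasks)
```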
Additional Resources
Getting Started
Want to evaluate your agent on CORE-Bench? Follow our comprehensive guide to get started:
View Documentation