USACO

The USACO benchmark evaluates AI agents on competitive programming problems from the USA Computing Olympiad. It consists of 307 problems, complete with exhaustive test cases, problem analyses, and difficulty labels.

Paper: Can Language Models Solve Olympiad Programming? (Shi et al., 2024)

307 Total Tasks · 13 Agents Evaluated

Key Features of USACO

Difficulty Levels

Tasks span Bronze to Platinum difficulty levels, requiring knowledge of data structures and algorithms.

Real Competition Tasks

Problems are sourced from actual USACO competitions and are challenging even for experienced human competitive programmers.
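Because the benchmark ships exhaustive test cases, scoring reduces to running a candidate program against every case and comparing outputs. A minimal sketch of such a judge is below; the command format and the `(input_text, expected_output)` test-case schema are assumptions for illustration, not the benchmark's actual harness.

```python
import subprocess

def judge(solution_cmd, test_cases, time_limit=2.0):
    """Run a candidate solution against USACO-style test cases.

    solution_cmd: argv list launching the solution, e.g. ["python3", "sol.py"]
    test_cases: list of (input_text, expected_output) pairs (assumed schema).
    A task counts as solved only if every test case passes.
    """
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(
                solution_cmd, input=stdin_text,
                capture_output=True, text=True, timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return False  # time limit exceeded
        if result.returncode != 0:
            return False  # runtime error
        if result.stdout.strip() != expected.strip():
            return False  # wrong answer
    return True
```

Real judges also sandbox the process and enforce memory limits; this sketch only captures the pass/fail comparison.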

USACO Leaderboard

| Rank | Agent (Primary Model) | Verified Accuracy | Cost (USD) | Runs |
|------|---------------------------------------|--------|---------|---|
| 1 | GPT-5 Medium (August 2025) | 69.71% | $64.13 | 1 |
| 2 | o4-mini High (April 2025) | 57.98% | $44.04 | 1 |
| 3 | Claude Opus 4.1 High (August 2025) | 51.47% | $267.72 | 1 |
| 4 | Claude Opus 4.1 (August 2025) | 48.21% | $276.19 | 1 |
| 5 | o3 Medium (April 2025) | 46.25% | $57.30 | 1 |
| 6 | GPT-4.1 (April 2025) | 44.95% | $28.10 | 1 |
| 7 | DeepSeek V3 | 39.09% | $2.78 | 1 |
| 8 | DeepSeek R1 | 38.11% | $8.18 | 1 |
| 9 | o4-mini Low (April 2025) | 30.94% | $21.14 | 1 |
| 10 | Claude-3.7 Sonnet (February 2025) | 29.32% | $38.70 | 1 |
| 11 | Gemini 2.0 Flash | 27.04% | $1.46 | 1 |
| 12 | Claude-3.7 Sonnet High (February 2025) | 26.71% | $56.43 | 1 |
| 13 | GPT-4.1 (April 2025) | 25.41% | $197.33 | 1 |

Accuracy vs. Cost Frontier for USACO

This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
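The dashed frontier in the plot can be computed directly from the leaderboard's (cost, accuracy) pairs: an agent is on the frontier if no other agent is both cheaper and at least as accurate. A minimal sketch, assuming a simple list-of-tuples input format:

```python
def pareto_frontier(agents):
    """Return the names of agents on the cost/accuracy Pareto frontier.

    agents: list of (name, cost_usd, accuracy) tuples (assumed format).
    Sweep in order of increasing cost; an agent joins the frontier only
    if it strictly beats the best accuracy seen among cheaper agents.
    """
    frontier = []
    best_acc = float("-inf")
    for name, cost, acc in sorted(agents, key=lambda a: (a[1], -a[2])):
        if acc > best_acc:
            frontier.append(name)
            best_acc = acc
    return frontier
```

With the leaderboard numbers above, Gemini 2.0 Flash, DeepSeek V3, o4-mini High, and GPT-5 Medium land on the frontier, while pricier agents with lower accuracy (e.g. Claude Opus 4.1) fall below it.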

Heatmap for USACO

The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of reruns of the same agent in which a task was solved. The "any agent" performance indicates how saturated the benchmark is and gives a sense of overall progress.
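The two aggregates the heatmap reports can be sketched as follows; the nested-dict schema (`success[agent][task]` holding the per-cell solve fraction) is an assumption for illustration:

```python
def solve_rates(success):
    """Summarize a success-rate heatmap.

    success: {agent: {task: fraction_of_reruns_solved}} (assumed schema).
    Returns (per_agent, any_agent), where per_agent is each agent's mean
    solve rate and any_agent is the fraction of tasks solved at least
    once by some agent, i.e. the benchmark's saturation level.
    """
    tasks = sorted({t for runs in success.values() for t in runs})
    per_agent = {a: sum(runs.values()) / len(runs) for a, runs in success.items()}
    any_agent = sum(
        any(success[a].get(t, 0) > 0 for a in success) for t in tasks
    ) / len(tasks)
    return per_agent, any_agent
```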

Total Completion Tokens Used per Agent

The bar chart shows the total completion tokens used by each agent; the height of each bar is the total number of completion tokens used across all tasks. Secondary models, which are used only for RAG or image processing, usually contribute a relatively small number of tokens by comparison.
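The per-bar totals are a straightforward aggregation over tasks. A sketch, assuming a per-task usage record that splits tokens by primary versus secondary model (hypothetical schema):

```python
def total_completion_tokens(task_usage):
    """Sum one agent's completion tokens for the stacked bar chart.

    task_usage: {task_id: {"primary": tokens, "secondary": tokens}}
    (assumed schema). Returns totals per role across all tasks.
    """
    totals = {"primary": 0, "secondary": 0}
    for usage in task_usage.values():
        for role, tokens in usage.items():
            totals[role] += tokens
    return totals
```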

Model Performance Over Time

Track how model accuracy has evolved over time, plotted against each model's release date. Each point represents the best performance achieved by that model on USACO.
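Reducing multiple evaluation runs to one point per model is a simple max over that model's runs. A sketch, assuming a flat list of run records (hypothetical format):

```python
def best_points(runs):
    """Collapse evaluation runs into one point per model for the plot.

    runs: list of (model, release_date, accuracy) tuples (assumed
    format). Returns {model: (release_date, best_accuracy)}, keeping
    each model's highest accuracy across its runs.
    """
    best = {}
    for model, released, acc in runs:
        if model not in best or acc > best[model][1]:
            best[model] = (released, acc)
    return best
```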

Token Pricing Configuration

Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.
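The recomputed cost is the usual linear formula: input tokens times the input price plus output tokens times the output price, each scaled per 1M tokens and summed over the models an agent used. A minimal sketch, with both dictionary schemas assumed for illustration:

```python
def run_cost(usage, prices):
    """Recompute a leaderboard cost entry from adjustable token prices.

    usage:  {model: {"input": tokens, "output": tokens}}   (assumed schema)
    prices: {model: {"input": usd_per_1m, "output": usd_per_1m}}
    """
    total = 0.0
    for model, tokens in usage.items():
        p = prices[model]
        total += tokens["input"] * p["input"] / 1e6    # input-token cost
        total += tokens["output"] * p["output"] / 1e6  # output-token cost
    return total
```

For example, 2M input tokens at $2.00/1M plus 0.5M output tokens at $8.00/1M yields $8.00 total.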

Each model can be toggled active and has two adjustable price fields, input and output, both in USD per 1M tokens: GPT-4.1 (April 2025), Claude Opus 4.1 (August 2025), Claude-3.7 Sonnet (February 2025), DeepSeek R1, DeepSeek V3, GPT-5 Medium (August 2025), Gemini 2.0 Flash, o3 Medium (April 2025), and o4-mini Medium (April 2025).

Additional Resources

Getting Started

Want to evaluate your agent on USACO? Follow our comprehensive guide to get started:

View Documentation

Task Details

Browse the complete list of USACO tasks, including difficulty levels and categories:

View Tasks