SWE-bench Verified Mini

SWE-bench Verified (Mini) is a random subset of 50 tasks from the original SWE-bench Verified. It is a lightweight version of SWE-bench Verified and is therefore much cheaper to evaluate.
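For reference, a subset like this can be drawn by shuffling the full Verified split with a fixed seed and keeping the first 50 instances. The sketch below assumes the Hugging Face dataset ID princeton-nlp/SWE-bench_Verified and an illustrative seed; it is not the exact procedure used to build this particular subset, which was fixed once and is not re-sampled per evaluation.

```python
# Minimal sketch: draw a fixed 50-task subset from SWE-bench Verified.
# The dataset ID and the seed are assumptions for illustration.
from datasets import load_dataset

verified = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")

# Shuffle with a fixed seed and keep the first 50 instances.
mini = verified.shuffle(seed=42).select(range(50))

print(mini["instance_id"][:5])
```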

Paper: SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (Jimenez et al., 2023)
OpenAI Blog: Introducing SWE-bench Verified

Note: This subset of the original SWE-bench Verified contains different tasks than the recently released version with the same name. We are working on reconciling the two versions.

50 Verified Tasks · 100% Human Validated · 27 Agents Evaluated

Key Features of SWE-bench Verified

Real-World Tasks

All tasks are sourced from actual GitHub issues, representing real software engineering problems.

Human Validation

Every task has been reviewed by software engineers and validated as well-specified and non-problematic.

Diverse Tasks

Tasks originate from pull requests across 12 open-source Python repositories covering various domains.

SWE-Bench Verified Mini Leaderboard

| Rank | Agent | Primary Model | Verified Accuracy | Cost (USD) | Runs | Traces |
|------|-------|---------------|-------------------|------------|------|--------|
| 1 | — | Claude Opus 4.1 (August 2025) | 54.00% | $1789.67 | 1 | Download |
| 2 | — | Claude-3.7 Sonnet High (February 2025) | 54.00% | $388.88 | 1 | Download |
| 3 | — | Claude Opus 4.1 High (August 2025) | 54.00% | $1599.90 | 1 | Download |
| 4 | SWE-Agent (Pareto optimal) | o4-mini Low (April 2025) | 54.00% | $259.20 | 1 | Download |
| 5 | — | o4-mini High (April 2025) | 50.00% | $248.46 | 1 | Download |
| 6 | — | Claude Opus 4 (May 2025) | 50.00% | $1330.90 | 1 | Download |
| 7 | — | Claude-3.7 Sonnet (February 2025) | 50.00% | $402.69 | 1 | Download |
| 8 | SWE-Agent (Pareto optimal) | GPT-5 Medium (August 2025) | 46.00% | $162.93 | 1 | Download |
| 9 | — | Claude Opus 4.1 High (August 2025) | 46.00% | $399.93 | 1 | Download |
| 10 | — | o3 Medium (April 2025) | 46.00% | $483.43 | 1 | Download |
| 11 | — | GPT-4.1 (April 2025) | 44.00% | $393.65 | 1 | Download |
| 12 | — | Claude Opus 4.1 (August 2025) | 42.00% | $477.65 | 1 | Download |
| 13 | — | Claude Opus 4 (May 2025) | 34.00% | $382.39 | 1 | Download |
| 14 | — | Claude Opus 4 High (May 2025) | 30.00% | $403.42 | 1 | Download |
| 15 | — | Claude-3.7 Sonnet (February 2025) | 26.00% | $117.43 | 1 | Download |
| 16 | — | Gemini 2.0 Flash | 24.00% | $4.72 | 1 | Download |
| 17 | — | Claude-3.7 Sonnet High (February 2025) | 24.00% | $72.98 | 1 | Download |
| 18 | SWE-Agent (Pareto optimal) | DeepSeek V3 | 24.00% | $2.10 | 1 | Download |
| 19 | — | GPT-5 Medium (August 2025) | 12.00% | $57.58 | 1 | Download |
| 20 | — | DeepSeek V3 | 10.00% | $5.13 | 1 | Download |
| 21 | — | o4-mini Low (April 2025) | 6.00% | $87.03 | 1 | Download |
| 22 | — | DeepSeek R1 | 6.00% | $10.32 | 1 | Download |
| 23 | — | GPT-4.1 (April 2025) | 2.00% | $51.80 | 1 | Download |
| 24 | — | Gemini 2.0 Flash | 2.00% | $7.33 | 1 | Download |
| 25 | — | o4-mini High (April 2025) | 2.00% | $32.02 | 1 | Download |
| 26 | — | o3 Medium (April 2025) | 0.00% | $585.71 | 1 | Download |
| 27 | SWE-Agent (Pareto optimal) | DeepSeek R1 | 0.00% | $0.41 | 1 | Download |

Accuracy vs. Cost Frontier for SWE-Bench Verified Mini

This plot shows the relationship between an agent's performance and its total token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. Error bars indicate the min-max range across runs.
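For illustration, such a frontier can be computed by keeping every run that is strictly more accurate than all cheaper runs. The sketch below shows that rule; it is an assumption about how the plot is built, not necessarily the exact code behind it, and the example points are taken from the leaderboard above.

```python
# Sketch of a Pareto frontier over (cost, accuracy) points: a run is on the
# frontier if no other run is both cheaper and at least as accurate.
def pareto_frontier(runs):
    """runs: list of (cost_usd, accuracy) tuples; returns the frontier sorted by cost."""
    frontier = []
    best_accuracy = float("-inf")
    for cost, accuracy in sorted(runs):   # cheapest first
        if accuracy > best_accuracy:      # strictly improves on everything cheaper
            frontier.append((cost, accuracy))
            best_accuracy = accuracy
    return frontier

# A few (cost, accuracy) pairs from the leaderboard above.
runs = [(259.20, 0.54), (1789.67, 0.54), (162.93, 0.46), (2.10, 0.24), (0.41, 0.00)]
print(pareto_frontier(runs))
# [(0.41, 0.0), (2.1, 0.24), (162.93, 0.46), (259.2, 0.54)]
```

These four frontier points match the rows tagged "Pareto optimal" in the table above.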

Heatmap for SWE-Bench Verified Mini

The heatmap visualizes success rates across tasks and agents. The color scale shows the fraction of reruns of the same agent in which a task was solved. The "any agent" performance indicates how saturated the benchmark is and gives a sense of overall progress.
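A minimal sketch of the underlying computation, assuming per-run results are available as (agent, task, resolved) records; the field names and example values below are illustrative, not the page's actual data model.

```python
# Sketch: per-cell fraction of reruns solved, plus an "any agent" summary.
import pandas as pd

results = pd.DataFrame([
    {"agent": "agent-a", "task_id": "task-001", "resolved": True},
    {"agent": "agent-a", "task_id": "task-001", "resolved": False},  # rerun
    {"agent": "agent-b", "task_id": "task-001", "resolved": False},
])

# Heatmap cells: fraction of reruns in which each agent solved each task.
heatmap = results.pivot_table(index="task_id", columns="agent",
                              values="resolved", aggfunc="mean")

# "Any agent": a task counts as solved if at least one agent ever solved it.
any_agent = results.groupby("task_id")["resolved"].max()
print(heatmap, any_agent, sep="\n")
```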

Total Completion Tokens Used per Agent

The bar chart shows the total completion tokens used by each agent; the height of each bar is the total number of completion tokens used across all tasks. Secondary models usually contribute a comparatively small number of tokens and are used only for RAG or image processing.
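A sketch of that aggregation, assuming per-task usage records tagged with whether the tokens came from the primary or a secondary model; field names and numbers are illustrative.

```python
# Sketch: total completion tokens per agent, split by primary vs. secondary model.
import pandas as pd

usage = pd.DataFrame([
    {"agent": "agent-a", "model_role": "primary",   "completion_tokens": 120_000},
    {"agent": "agent-a", "model_role": "secondary", "completion_tokens": 4_000},
    {"agent": "agent-b", "model_role": "primary",   "completion_tokens": 95_000},
])

totals = (usage.groupby(["agent", "model_role"])["completion_tokens"]
               .sum()
               .unstack(fill_value=0))
print(totals)
```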

Model Performance Over Time

Timeline showing model accuracy evolution over release dates. Each point represents the best performance achieved by that model on SWE-Bench Verified Mini.
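Roughly, each point can be derived by taking the maximum accuracy achieved per model and plotting it against the model's release date; the sketch below uses illustrative data, not the actual results.

```python
# Sketch: best accuracy per model, keyed by release date (illustrative values).
import pandas as pd

runs = pd.DataFrame([
    {"model": "model-a", "released": "2025-04", "accuracy": 0.54},
    {"model": "model-a", "released": "2025-04", "accuracy": 0.50},
    {"model": "model-b", "released": "2025-08", "accuracy": 0.46},
])

best = runs.groupby(["model", "released"])["accuracy"].max().reset_index()
print(best.sort_values("released"))
```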

Token Pricing Configuration

Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.

Input and output prices (USD per 1M tokens) can be configured for each of the following models: Claude Opus 4 (May 2025), Claude Opus 4.1 (August 2025), Claude-3.7 Sonnet (February 2025), GPT-4o (November 2024), DeepSeek R1, DeepSeek V3, GPT-4.1 (April 2025), GPT-5 Medium (August 2025), Gemini 2.0 Flash, o3 Medium (April 2025), and o4-mini Medium (April 2025).
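The recalculated cost is, presumably, prompt tokens times the input price plus completion tokens times the output price, both quoted per one million tokens. A minimal sketch of that recalculation, with all numbers illustrative rather than the prices actually used on this page:

```python
# Sketch: recompute a run's cost from token counts and per-1M-token prices.
def run_cost(prompt_tokens, completion_tokens, input_price_per_m, output_price_per_m):
    return (prompt_tokens * input_price_per_m
            + completion_tokens * output_price_per_m) / 1_000_000

# Example: 40M prompt tokens and 2M completion tokens at $3 / $15 per 1M tokens.
print(run_cost(40_000_000, 2_000_000, 3.0, 15.0))  # 150.0 USD
```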

Additional Resources

Getting Started

Want to evaluate your agent on SWE-bench? Follow our comprehensive guide to get started:

View Documentation

Task Details

Browse the complete list of SWE-bench tasks, including problem descriptions and test cases:

View Tasks