GAIA Benchmark

GAIA is a benchmark for General AI Assistants that requires a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and tool-use proficiency. It contains 450 questions with unambiguous answers that require different levels of tooling and autonomy to solve. The benchmark is divided into three levels: Level 1 should be breakable by very good LLMs, while Level 3 indicates a strong jump in model capabilities. We evaluate on the public validation set of 165 questions.
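Because GAIA answers are unambiguous strings, numbers, or comma-separated lists, agents are scored by (quasi-)exact match after light normalization. The following is a simplified sketch of that scoring idea, not the official GAIA scorer:

```python
def normalize(value: str) -> str:
    """Lowercase, trim whitespace, and drop a trailing period (simplified)."""
    return value.strip().strip(".").lower()

def quasi_exact_match(prediction: str, truth: str) -> bool:
    """Simplified GAIA-style scoring: numeric answers are compared as
    floats, comma-separated lists element-wise, and everything else as
    normalized strings. Illustrative only, not the official scorer."""
    # Comma-separated lists: compare element by element.
    if "," in truth:
        preds = [normalize(p) for p in prediction.split(",")]
        truths = [normalize(t) for t in truth.split(",")]
        return len(preds) == len(truths) and all(
            quasi_exact_match(p, t) for p, t in zip(preds, truths)
        )
    # Numeric answers: compare as floats (ignoring thousands separators).
    try:
        return float(prediction.replace(",", "")) == float(truth)
    except ValueError:
        pass
    return normalize(prediction) == normalize(truth)
```

For example, `quasi_exact_match("Paris", " paris. ")` and `quasi_exact_match("3.0", "3")` both hold, while a wrong number or a list of the wrong length does not.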

Paper: GAIA: a benchmark for General AI Assistants (Mialon et al., 2023)

450 Total Questions · 165 Questions in Public Validation Set · 23 Agents Evaluated

Key Features of GAIA

Multi-Level Evaluation

Tasks are organized into three difficulty levels, testing increasingly complex cognitive abilities.

Diverse Task Types

Covers a wide range of tasks from basic reasoning to complex problem-solving and creative generation.

GAIA Leaderboard

| Rank | Agent | Primary Model | Verified Accuracy | Level 1 | Level 2 | Level 3 | Cost (USD) | Runs | Traces |
|------|-------|---------------|-------------------|---------|---------|---------|------------|------|--------|
| 1 | HAL Generalist Agent (Pareto optimal) | Claude Opus 4 High (May 2025) | 64.85% | 71.70% | 67.44% | 42.31% | $665.89 | 1 | Download |
| 2 | HAL Generalist Agent (Pareto optimal) | Claude-3.7 Sonnet High (February 2025) | 64.24% | 67.92% | 63.95% | 57.69% | $122.49 | 1 | Download |
| 3 | | GPT-5 Medium (August 2025) | 62.80% | 73.58% | 62.79% | 38.46% | $359.83 | 1 | Download |
| 4 | HAL Generalist Agent (Pareto optimal) | o4-mini Low (April 2025) | 58.18% | 71.70% | 51.16% | 53.85% | $73.26 | 1 | Download |
| 5 | | Claude Opus 4 (May 2025) | 57.58% | 66.04% | 56.98% | 42.31% | $1686.07 | 1 | Download |
| 6 | | Claude-3.7 Sonnet (February 2025) | 56.36% | 62.26% | 55.81% | 46.15% | $130.68 | 1 | Download |
| 7 | | o4-mini High (April 2025) | 55.76% | 69.81% | 51.16% | 42.31% | $184.87 | 1 | Download |
| 8 | HAL Generalist Agent (Pareto optimal) | o4-mini High (April 2025) | 54.55% | 60.38% | 53.49% | 46.15% | $59.39 | 1 | Download |
| 9 | | GPT-4.1 (April 2025) | 50.30% | 58.49% | 50.00% | 34.62% | $109.88 | 1 | Download |
| 10 | | GPT-4.1 (April 2025) | 49.70% | 52.83% | 55.81% | 23.08% | $74.19 | 1 | Download |
| 11 | | o4-mini Low (April 2025) | 47.88% | 58.49% | 47.67% | 26.92% | $80.80 | 1 | Download |
| 12 | | Claude-3.7 Sonnet (February 2025) | 36.97% | 39.62% | 39.53% | 23.08% | $415.15 | 1 | Download |
| 13 | HAL Generalist Agent (Pareto optimal) | DeepSeek V3 | 36.36% | 50.94% | 38.37% | 0.00% | $4.97 | 1 | Download |
| 14 | | Claude-3.7 Sonnet High (February 2025) | 35.76% | 45.28% | 33.72% | 23.08% | $113.65 | 1 | Download |
| 15 | | o3 Medium (April 2025) | 32.73% | 39.62% | 31.40% | 23.08% | $136.39 | 1 | Download |
| 16 | | Gemini 2.0 Flash | 32.73% | 43.40% | 32.56% | 11.54% | $7.80 | 1 | Download |
| 17 | | Claude Opus 4 (May 2025) | 30.30% | 33.96% | 27.91% | 30.77% | $272.76 | 1 | Download |
| 18 | | DeepSeek R1 | 30.30% | 43.40% | 27.91% | 11.54% | $5.47 | 1 | Download |
| 19 | | Claude Opus 4.1 (August 2025) | 28.48% | 41.51% | 24.42% | 15.38% | $1306.85 | 1 | Download |
| 20 | | DeepSeek V3 | 28.48% | 35.85% | 30.23% | 7.69% | $13.19 | 1 | Download |
| 21 | | Claude Opus 4.1 High (August 2025) | 25.45% | 35.85% | 23.26% | 11.54% | $1473.64 | 1 | Download |
| 22 | | DeepSeek R1 | 24.85% | 30.19% | 24.42% | 15.38% | $11.10 | 1 | Download |
| 23 | | Gemini 2.0 Flash | 19.39% | 24.53% | 19.77% | 7.69% | $18.82 | 1 | Download |

Accuracy vs. Cost Frontier for GAIA

This plot shows the relationship between an agent's performance and its token cost. The Pareto frontier (dashed line) represents the current state-of-the-art trade-off. The error bars indicate min-max values across runs.
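The frontier itself can be computed with a single sweep: sort agents by cost and keep each point that strictly improves on the best accuracy seen so far. A minimal sketch, using a handful of (cost in USD, accuracy in %) pairs taken from the leaderboard as sample data:

```python
def pareto_frontier(points):
    """Given (cost, accuracy) pairs, return the subset for which no
    other agent is both cheaper and more accurate, sorted by cost."""
    frontier = []
    best_acc = float("-inf")
    for cost, acc in sorted(points):   # ascending cost
        if acc > best_acc:             # strictly improves accuracy
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# Sample points from the table above.
agents = [(665.89, 64.85), (122.49, 64.24), (359.83, 62.80),
          (73.26, 58.18), (4.97, 36.36)]
print(pareto_frontier(agents))
# GPT-5 Medium at (359.83, 62.80) is dominated: it costs more than the
# Claude-3.7 Sonnet run yet scores lower, so it drops off the frontier.
```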

Heatmap for GAIA

The heatmap visualizes success rates across tasks and agents. Colorscale shows the fraction of times a task was solved across reruns of the same agent. The "any agent" performance indicates the level of saturation of the benchmark and gives a sense of overall progress.
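The two quantities behind the heatmap are easy to state precisely: a per-cell solve fraction over reruns, and an "any agent" rate over tasks. A sketch, assuming run logs flattened to (agent, task, solved) triples (a hypothetical format, not HAL's actual trace schema):

```python
from collections import defaultdict

def solve_fractions(runs):
    """runs: iterable of (agent, task, solved) triples, possibly with
    repeated reruns of the same (agent, task) pair.
    Returns a dict mapping (agent, task) to its solve fraction, plus
    the 'any agent' rate: the share of tasks solved at least once."""
    counts = defaultdict(lambda: [0, 0])        # (agent, task) -> [solved, total]
    for agent, task, solved in runs:
        counts[(agent, task)][0] += int(solved)
        counts[(agent, task)][1] += 1
    fractions = {key: s / n for key, (s, n) in counts.items()}
    tasks = {task for _, task, _ in runs}
    solved_any = {task for _, task, ok in runs if ok}
    return fractions, len(solved_any) / len(tasks)
```

The "any agent" rate is what saturation refers to: once it approaches 1.0, every task has been solved by at least one agent and the benchmark no longer separates the field at the task level.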

Total Completion Tokens Used per Agent

The bar chart shows the total completion tokens used by each agent; the height of each bar represents the total number of completion tokens used across all tasks. Secondary models, which are used only for RAG or image processing, usually contribute a comparatively small number of tokens.

Model Performance Over Time

Track how model accuracy has evolved over time since their release dates. Each point represents the best performance achieved by that model on GAIA.

Token Pricing Configuration

Adjust token prices to see how they affect the total cost calculations in the leaderboard and plots.

Input and output prices (USD per 1M tokens) are adjustable for each of the following models:

- Claude Opus 4 (May 2025)
- GPT-4o (November 2024)
- Claude-3.7 Sonnet (February 2025)
- DeepSeek R1
- DeepSeek V3
- GPT-4.1 (April 2025)
- Gemini 2.0 Flash
- o4-mini Medium (April 2025)
- Claude Opus 4.1 (August 2025)
- GPT-5 Medium (August 2025)
- o3 Medium (April 2025)
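The recalculation follows the standard per-token pricing model: each run's cost is its prompt and completion token counts weighted by the respective per-1M-token prices. A minimal sketch (the token counts and prices below are placeholders, not values from this page):

```python
def run_cost(input_tokens: int, output_tokens: int,
             input_price: float, output_price: float) -> float:
    """Cost of one run in USD, given prices in USD per 1M tokens.
    This is what changes when the pricing sliders are adjusted."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Placeholder example: 2M prompt tokens at $3/1M plus 0.5M completion
# tokens at $15/1M.
print(run_cost(2_000_000, 500_000, 3.0, 15.0))  # -> 13.5
```

Because cost is linear in both prices, the leaderboard and the accuracy-vs-cost plot can be recomputed instantly as the sliders move, without re-running any agent.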

Additional Resources

Getting Started

Want to evaluate your agent on GAIA? Follow our comprehensive guide to get started:

View Documentation

Task Details

Browse the complete list of GAIA tasks and their requirements:

View Tasks