xAI

grok-4-heavy

Proprietary · Multimodal
Benchmark Average (0–100): 79.7%
Average of non-null benchmark scores across all evaluated tasks.
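
A minimal sketch of that computation in Python, assuming the headline figure is the plain unweighted mean of the non-null scores (the three values are copied from the Benchmark Scores section below):

    # Sketch: the benchmark average as the unweighted mean of non-null scores.
    # A value of None would mark a benchmark the model was not evaluated on.
    scores = {"GPQA": 88.4, "AIME 2025": 100.0, "HLE": 50.7}

    non_null = [s for s in scores.values() if s is not None]
    average = sum(non_null) / len(non_null)
    print(f"Benchmark Average: {average:.1f}%")  # -> 79.7%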

BENCHMARK SCORES

GPQA: 88.4%

Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.
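
As an illustration of accuracy scoring on multiple-choice items, a hypothetical sketch (the answer letters are invented, not GPQA data):

    # Hypothetical sketch: accuracy is the fraction of answers that match
    # the gold labels exactly.
    predictions = ["B", "C", "A", "D"]  # model's chosen options (invented)
    gold        = ["B", "C", "D", "D"]  # reference answers (invented)
    accuracy = sum(p == g for p, g in zip(predictions, gold)) / len(gold)
    print(f"accuracy = {accuracy:.1%}")  # -> 75.0%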

AIME 2025: 100.0%

AIME 2025 is a mathematics competition benchmark based on the 2025 American Invitational Mathematics Examination, testing advanced problem-solving.

HLE: 50.7%

Humanity's Last Exam (HLE) is a multi-modal benchmark testing frontier knowledge across mathematics, humanities, and natural sciences with 2,500 expert-level questions.

MODEL INFO

Organization: xAI
Context Window: (not listed)
Release Date: (not listed)
Knowledge Cutoff: 2024-12-31
Parameters: (not listed)
License: Proprietary

METADATA

Announcement Date: 2025-07-09
Organization Country: US
Canonical ID: