OpenAI
gpt-5.4
Proprietary · Multimodal
Benchmark Average (0–100): 74.1%
Average of non-null benchmark scores across all evaluated tasks.
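As a concrete sketch of that rule: averaging the four headline benchmark scores listed below reproduces the displayed 74.1%, which suggests the headline average is taken over those benchmarks rather than the "other benchmarks" section. A minimal Python illustration, with a hypothetical unscored benchmark included to show the non-null filtering (names and the None entry are illustrative, not the site's actual data pipeline):

    # Minimal sketch: average the non-null benchmark scores, skipping any
    # benchmark the model was not evaluated on (represented as None).
    scores = {
        "GPQA": 92.8,
        "BrowseComp": 82.7,
        "HLE": 39.8,
        "MMMU-Pro": 81.2,
        "SomeUnrunBenchmark": None,  # hypothetical task with no score
    }

    evaluated = [s for s in scores.values() if s is not None]
    benchmark_average = sum(evaluated) / len(evaluated)
    print(f"Benchmark Average: {benchmark_average:.1f}%")  # -> 74.1%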
BENCHMARK SCORES
GPQA: 92.8%
Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.
BrowseComp: 82.7%
BrowseComp tests web browsing comprehension, measuring a model's ability to find and synthesize information from websites.
HLE: 39.8%
Humanity's Last Exam (HLE) is a multi-modal benchmark testing frontier knowledge across mathematics, humanities, and natural sciences with 2,500 expert-level questions.
MMMU-Pro: 81.2%
MMMU-Pro is a more challenging variant of MMMU, with expanded answer choices and questions filtered to require genuine multimodal understanding.
OTHER BENCHMARKS
Toolathlon: 54.6%
ARC-AGI v2: 73.3%
MCP Atlas: 67.2%
FrontierMath: 47.6%
MODEL INFO
Organization: OpenAI
Context Window: 1,000,000 tokens
Release Date: 2026-03-05
Knowledge Cutoff: —
Parameters: —
License: proprietary
Input Price ($/M): $2.50
Output Price ($/M): $15.00 (see the cost sketch below)
METADATA
Announcement Date: 2026-03-05
Organization Country: US
Canonical ID: —