OpenAI

gpt-5.2-2025-12-11

Proprietary · Multimodal
Benchmark Average (0–100)
77.4%
Average of non-null benchmark scores across all evaluated tasks.
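For concreteness, a minimal sketch (not the site's actual aggregation code) of averaging non-null scores, assuming they are stored in a mapping with None marking tasks a model was not evaluated on; the exact task set behind the 77.4% figure above is not specified here:

```python
from typing import Optional

def benchmark_average(scores: dict[str, Optional[float]]) -> Optional[float]:
    """Average of the non-null scores (0-100 scale); None marks an unevaluated task."""
    evaluated = [s for s in scores.values() if s is not None]
    return sum(evaluated) / len(evaluated) if evaluated else None

# Hypothetical subset of the scores listed on this page.
scores = {
    "GPQA": 92.4,
    "AIME 2025": 100.0,
    "SWE-bench Verified": 80.0,
    "HypotheticalBench": None,  # not evaluated, so excluded from the average
}
print(round(benchmark_average(scores), 1))  # 90.8
```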

BENCHMARK SCORES

GPQA: 92.4%

Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.

AIME 2025: 100.0%

AIME 2025 (American Invitational Mathematics Examination) is a mathematics competition benchmark testing advanced problem-solving; scored by answer accuracy.

SWE-bench Verified: 80.0%

SWE-bench Verified, a human-validated subset of SWE-bench, evaluates models on real-world software engineering tasks drawn from GitHub issues.

MMMLU: 89.6%

Multilingual MMLU tests knowledge across many languages and subject areas.

BrowseComp: 65.8%

BrowseComp tests agentic web browsing, measuring the ability to locate and synthesize hard-to-find information from websites.

HLE: 34.5%

Humanity's Last Exam (HLE) is a multimodal benchmark testing frontier knowledge across mathematics, the humanities, and the natural sciences, with 2,500 expert-level questions.

MMMU-Pro: 79.5%

MMMU-Pro is a more robust version of MMMU, with augmented answer choices and a vision-only input setting that make its multimodal understanding tasks harder.

OTHER BENCHMARKS

Toolathlon: 46.3%
ARC-AGI v2: 52.9%
CharXiv (Reasoning): 82.1%
ScreenSpot-Pro: 86.3%
MCP Atlas: 60.6%
FrontierMath: 40.3%

MODEL INFO

Organization
OpenAI
Context Window
400,000 tokens
Release Date
2025-12-11
Knowledge Cutoff
2025-08-25
Parameters
License
proprietary
Input Price ($/M tokens)
$1.75
Output Price ($/M tokens)
$14.00
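
As a worked example of these list prices (a sketch with illustrative token counts; real billing may apply caching or batch discounts not shown here):

```python
INPUT_PER_M = 1.75    # $ per 1M input tokens (from the table above)
OUTPUT_PER_M = 14.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for a single request at the listed per-million-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Illustrative request: 20k prompt tokens, 2k completion tokens.
print(f"${request_cost(20_000, 2_000):.4f}")  # $0.0630
```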

METADATA

Announcement Date
2025-12-11
Organization Country
US
Canonical ID