OpenAI
gpt-5-mini-2025-08-07
Proprietary · Multimodal
Benchmark Average (0–100)
63.4%
Average of non-null benchmark scores across all evaluated tasks; see the sketch after the benchmark list below.
BENCHMARK SCORES
GPQA: 82.3%
Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.
AIME 2025: 91.1%
AIME 2025 draws on problems from the 2025 American Invitational Mathematics Examination, a mathematics competition benchmark testing advanced problem-solving.
HLE: 16.7%
Humanity's Last Exam (HLE) is a multi-modal benchmark testing frontier knowledge across mathematics, humanities, and natural sciences with 2,500 expert-level questions.
OTHER BENCHMARKS
FrontierMath: 22.1%
FrontierMath is a benchmark of original, expert-crafted research-level mathematics problems.
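The headline average is straightforward to compute from per-task scores. A minimal sketch in Python, assuming the card's scores are stored as optional floats keyed by benchmark name (the MMLU entry is a hypothetical placeholder for an unevaluated task). Note that the four scores listed above average 53.05 on their own, so the 63.4% headline evidently also reflects evaluated tasks not broken out on this card:

```python
from typing import Optional

def benchmark_average(scores: dict[str, Optional[float]]) -> Optional[float]:
    """Mean of the non-null scores; tasks without a score are skipped."""
    vals = [v for v in scores.values() if v is not None]
    return round(sum(vals) / len(vals), 2) if vals else None

# Scores shown on this card; other evaluated tasks (not listed here)
# also feed the 63.4% headline figure.
scores = {
    "GPQA": 82.3,
    "AIME 2025": 91.1,
    "HLE": 16.7,
    "FrontierMath": 22.1,
    "MMLU": None,  # hypothetical placeholder: not evaluated, so excluded
}
print(benchmark_average(scores))  # 53.05 from the listed subset alone
```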
MODEL INFO
Organization
OpenAI
Context Window
400,000 tokens
Release Date
2025-08-07
Knowledge Cutoff
2024-05-30
Parameters
—
License
proprietary
Input Price ($/M tokens)
$0.2500
Output Price ($/M tokens)
$2.0000
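At these rates, per-request cost scales linearly with token counts. A minimal sketch using the prices listed above; the request sizes in the example are hypothetical:

```python
INPUT_PRICE = 0.25   # $ per 1M input tokens, from the card above
OUTPUT_PRICE = 2.00  # $ per 1M output tokens, from the card above

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    return input_tokens / 1e6 * INPUT_PRICE + output_tokens / 1e6 * OUTPUT_PRICE

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0065
```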
METADATA
Announcement Date
2025-08-07
Organization Country
US
Canonical ID
—