Introducing GPT‑5.2‑Codex: agentic coding tuned for long‑horizon work and defensive security
On December 18, 2025, OpenAI released GPT‑5.2‑Codex, a version of GPT‑5.2 further optimized for agentic coding in Codex. The model is available today across Codex surfaces for paid ChatGPT users, with API access planned in the coming weeks. OpenAI is also piloting an invite‑only trusted access program for vetted security professionals and organizations focused on defensive cybersecurity work (express interest).
What’s new for engineering workflows
GPT‑5.2‑Codex builds on prior GPT‑5.x releases and Codex agentic capabilities to target longer, more complex development tasks. Key technical improvements highlighted by OpenAI include:
- Long‑horizon context compaction that preserves project context over extended sessions, enabling continuity across iterative attempts and shifting plans.
- Improved handling of large code changes such as refactors, migrations, and multi‑step feature builds.
- Stronger native Windows support, with more reliable terminal and environment interactions on Windows hosts.
- Vision integration that interprets screenshots, diagrams, charts, and UI mocks to accelerate prototype generation and translation to working code.
- Improved factuality and tool calling, designed to make agentic workflows in terminal environments more dependable and token‑efficient (a minimal API sketch follows this list).
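To make the vision and tool‑calling points above concrete, here is a minimal sketch of how such a request might look through the OpenAI Python SDK's Responses API once API access opens. The model identifier `gpt-5.2-codex`, the screenshot URL, and the `run_shell` tool are illustrative assumptions, not confirmed details from OpenAI's announcement.

```python
# Hypothetical sketch: asking the model to turn a UI mock into code while
# exposing a shell tool it could call as part of an agentic loop.
# Assumptions: the "gpt-5.2-codex" model name, the image URL, and the
# run_shell tool are placeholders; API access is not yet live.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5.2-codex",  # assumed identifier, not yet published
    tools=[{
        "type": "function",
        "name": "run_shell",
        "description": "Run a shell command in the workspace and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    }],
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text",
             "text": "Implement this settings page as a React component, then run the test suite."},
            {"type": "input_image",
             "image_url": "https://example.com/settings-mock.png"},  # placeholder screenshot
        ],
    }],
)

# The output may mix text and tool calls; a real agent loop would execute each
# requested command and feed the result back to the model before continuing.
for item in response.output:
    if item.type == "function_call":
        print("model requested:", item.name, item.arguments)
    elif item.type == "message":
        print(item.content[0].text)
```

In practice, Codex surfaces handle this loop for you; the sketch only illustrates the shape of a vision‑plus‑tools request under the stated assumptions.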
OpenAI reports that GPT‑5.2‑Codex achieved state‑of‑the‑art results on SWE‑Bench Pro and Terminal‑Bench 2.0, benchmarks focused on agentic performance across realistic terminal tasks such as compiling code, training models, and setting up servers.
Cybersecurity capabilities and a defensive use case
GPT‑5.2‑Codex exhibits notably stronger cybersecurity performance compared with earlier Codex models. OpenAI points to gains across evaluations including the Professional Capture‑the‑Flag (CTF) metric and broader cyber capability tests. While the model has not reached a ‘High’ level of cyber capability under OpenAI’s Preparedness Framework, the company is planning deployments as if future models could reach that threshold and has added additional safeguards described in the system card.
A concrete defensive example cited by OpenAI involved a security researcher using earlier Codex tooling to reproduce, probe, and ultimately help disclose multiple vulnerabilities affecting React Server Components. That effort combined expert guidance, environment setup, fuzzing, and verification to produce responsible disclosures; the React blog post and related CVE are linked in OpenAI’s writeup (React disclosure, earlier React2Shell writeup, CVE‑2025‑55182).
Deployment, access, and safeguards
OpenAI is rolling GPT‑5.2‑Codex out gradually and pairing access with product safeguards and access controls. Paid ChatGPT users have access now; API access is expected in the coming weeks. In parallel, an invite‑only pilot will grant vetted security professionals and qualifying organizations access to more permissive capabilities for defensive uses such as authorized red‑teaming and vulnerability research; interested parties can apply via the pilot form linked above.
Context for practitioners
For teams focused on large repositories, long‑running projects, or defensive security workflows, GPT‑5.2‑Codex aims to reduce friction in multi‑step engineering and analysis tasks through sustained context, improved tool reliability, and vision‑enabled inputs. OpenAI emphasizes pairing capability advances with stronger safeguards and community collaboration as the company prepares for continued capability growth.
Original source: https://openai.com/index/introducing-gpt-5-2-codex/
