Overview
OpenAI describes Codex as a coding agent that works alongside an IDE and can also take on larger workloads in the cloud. The product spans local workflows inside VS Code and remote execution for bigger jobs, while maintaining continuity between the two.
IDE integration and context
Codex can be added as a panel in VS Code to chat, edit, and preview changes. With context from opened files and selected code, prompts can be shorter, and outputs can be more relevant to the current selection. The panel placement is configurable within the editor.
On macOS, the OpenAI Codex extension supports connecting the ChatGPT macOS app to VS Code. When enabled via “Work with VS Code” in the ChatGPT app, ChatGPT can answer questions or make simple edits inside the editor.
For IDE coverage beyond VS Code, the Codex CLI can run in an IDE’s terminal, and the VS Code extension is compatible with most VS Code forks.
Cloud delegation and continuity
Larger tasks can be delegated to Codex in the cloud. Progress can be tracked and results reviewed without leaving the IDE. For finishing touches, cloud tasks can be opened locally, and Codex keeps the context consistent between cloud and local execution.
Access, clients, and setup
Codex works with ChatGPT Plus, Pro, Team, Edu, and Enterprise plans. Sign-in requires an active ChatGPT subscription, and upgrades are managed in ChatGPT account settings. OpenAI notes that signing in to the Codex CLI and Codex IDE extension with ChatGPT Enterprise or Edu accounts is still rolling out and is not yet available.
Codex is available through multiple entry points:
- Codex CLI: https://developers.openai.com/codex/cli
- Codex IDE extension: https://developers.openai.com/codex/ide
- Codex web: https://chatgpt.com/codex (requires connecting ChatGPT to a GitHub account)
Documentation and setup instructions are published at https://developers.openai.com/codex/ and https://developers.openai.com/codex/ide. Enterprise setup guidance is available at https://developers.openai.com/codex/enterprise.
For users who previously accessed the Codex CLI with an API key, OpenAI instructs updating the package, then running “codex logout” followed by “codex” to switch to subscription-based access.
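The migration path above can be sketched as a short shell session. The npm package name @openai/codex is an assumption here; adjust the update command to match how the CLI was originally installed:

```shell
# Update to the latest Codex CLI release
# (assumes a global npm install of @openai/codex)
npm install -g @openai/codex

# Drop the stored API-key credentials, then relaunch;
# the CLI then prompts for ChatGPT subscription sign-in
codex logout
codex
```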
Plans and usage limits
OpenAI states that Codex usage limits vary by plan and by where tasks run (local vs. cloud). The number of messages per window also depends on the size and complexity of work.
For Plus, Team, Enterprise, and Edu plans:
- Local tasks: Average users can send 30–150 messages every 5 hours, with a weekly limit.
- Cloud tasks: Described as “generous limits for a limited time.”
- The plan description labels these tiers as “best for developers looking to power a few focused coding sessions each week.”
For Pro:
- Local tasks: Average users can send 300–1,500 messages every 5 hours, with a weekly limit.
- Cloud tasks: Also listed as “generous limits for a limited time.”
- The plan description states “best for developers looking to power their full workday across multiple projects.”
For Enterprise and Edu plans using flexible pricing, Codex usage draws down from a workspace’s shared credit pool. OpenAI notes that for a limited time, Codex credits are waived for flexible Enterprise and Edu pricing plans.
When a usage limit is reached, Codex access pauses until the usage window resets. For additional capacity, an API key can be used to run further local tasks, with the extra usage billed as API credits.
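One plausible way to supply that API key is the OPENAI_API_KEY environment variable, the usual convention across OpenAI tooling; the exact authentication flow may differ by CLI version, so treat this as a sketch and confirm against the CLI docs:

```shell
# Run extra local tasks against API credits instead of the
# subscription quota (assumes OPENAI_API_KEY is honored here;
# replace the placeholder with a real key)
export OPENAI_API_KEY="sk-..."
codex
```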
Enterprise controls and compliance
OpenAI publishes an Enterprise Admin Guide at https://developers.openai.com/codex/enterprise. RBAC is supported, allowing access to be granted to specific user roles; setup guidance is available at https://help.openai.com/en/articles/11750701-rbac.
For compliance and reporting, Codex usage on the web or when delegated to the cloud is available in the Compliance API at https://chatgpt.com/admin/api-reference#tag/Codex-Tasks. Usage in local environments is not available through this API. OpenAI states that Codex is compliant with Data Retention & Residency policies.
Data use and privacy
OpenAI’s FAQ states the following:
- For Team, Enterprise, and Edu: By default, OpenAI does not use inputs or outputs from ChatGPT Team, ChatGPT Enterprise, or the API to improve models. API organization owners can opt in to share API data. This setting is not available to certain organizations, including Enterprise and customers with Zero Data Retention enabled. Details: https://help.openai.com/en/articles/10306912-sharing-feedback-evaluation-and-fine-tuning-data-and-api-inputs-and-outputs-with-openai.
- For Pro and Plus: Conversations may be used to improve models unless training is turned off in ChatGPT data controls at https://chatgpt.com/#settings/DataControls. Additional information: https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance.
Models and customization
The Codex CLI and IDE extension automatically select a default model. The FAQ currently recommends GPT-5 with medium or high reasoning, and this choice can be customized in the app.
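In the CLI, that customization is typically expressed in a configuration file. The snippet below assumes the ~/.codex/config.toml location and the key names used by recent CLI releases; verify both against the CLI docs for your version:

```toml
# ~/.codex/config.toml — assumed location and key names
model = "gpt-5"                   # override the automatically selected model
model_reasoning_effort = "high"   # "medium" or "high", per the FAQ recommendation
```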
Documentation
Further product and setup information is available at:
- Codex IDE docs: https://developers.openai.com/codex/ide
- Codex CLI docs: https://developers.openai.com/codex/cli
- Codex documentation hub: https://developers.openai.com/codex/
TL;DR
- VS Code panel for chat, edits, and previews with file/selection context
- Cloud delegation with progress tracking and context continuity when opened locally
- Works with ChatGPT Plus, Pro, Team, Edu, Enterprise; Enterprise/Edu CLI/IDE sign-in is still rolling out
- Codex web requires GitHub connection; clients available via CLI, IDE extension, and web
- Usage limits vary by plan; local ranges from 30–150 (Plus/Team/Enterprise/Edu) to 300–1,500 (Pro) messages every 5 hours; cloud listed as “generous limits for a limited time”
- Compliance API covers web/cloud usage; local usage is not exposed there
- Business tiers default to no training on data; Pro/Plus may contribute unless disabled
- Model selection is automatic; FAQ recommends GPT-5 with medium or high reasoning