
Installation and Configuration Tips

You installed Codex, ran your first prompt, and got decent results. But your colleague who has been using it for a month somehow gets answers twice as fast, never hits approval prompts for basic commands, and always has the right MCP servers loaded. The difference is not talent — it is a well-tuned config.toml and a few environment tweaks you have not made yet.

This guide covers:

  • A production-ready config.toml you can adapt to your workflow
  • Authentication patterns for personal use, CI, and team deployments
  • Environment preparation tips that prevent the agent from wasting tokens on setup
  • Profile configurations for switching between review, development, and automation modes

Codex is most powerful when you use multiple surfaces together. Install all three:

# CLI (requires Node.js 22+)
npm install -g @openai/codex
# App -- download from codex.openai.com
# IDE Extension -- install from your editor's marketplace (VS Code, Cursor, Windsurf)

The App and IDE Extension sync automatically when both are open in the same project. You get auto-context from your editor (open files, cursor position) in the App’s composer without any additional configuration.

# Check CLI version and auth status
codex --version
codex login status

If codex login status exits with code 0, you are authenticated. If not, run codex login to open the browser OAuth flow.
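In scripts, you can branch on that exit code directly. A minimal sketch of the pattern, shown with `true` as a runnable stand-in for `codex login status`:

```shell
# Branch on an exit code; swap check() for the real command.
check() { true; }   # stand-in for: codex login status >/dev/null 2>&1
if check; then
  status="authenticated"
else
  status="logged-out"   # a script might run `codex login` here instead
fi
echo "$status"   # authenticated
```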

This is the config most individual developers should start with:

~/.codex/config.toml
model = "gpt-5.3-codex"
approval_policy = "on-failure"
sandbox_mode = "workspace-write"
file_opener = "cursor" # or vscode, windsurf
# Speed up repeated commands
[features]
shell_snapshot = true
# Clickable notifications when tasks complete
[tui]
notifications = ["agent-turn-complete", "approval-requested"]

For CI/CD and scripting, you want minimal friction and structured output:

# ~/.codex/config.toml -- CI profile
[profiles.ci]
model = "gpt-5.1-codex-mini"
approval_policy = "never"
sandbox_mode = "workspace-write"
hide_agent_reasoning = true
web_search = "disabled"

Use it with: codex exec --profile ci "Fix the failing test"

Add this line to the top of your config.toml to get autocompletion and validation in VS Code or Cursor with the Even Better TOML extension:

#:schema https://developers.openai.com/codex/config-schema.json

Now your editor highlights invalid keys and suggests valid values.

The default browser OAuth flow is the simplest:

codex login

This opens your browser, authenticates with ChatGPT, and stores credentials in your OS keychain.

For headless environments, pipe an API key:

printenv OPENAI_API_KEY | codex login --with-api-key

Or use device auth when you have a terminal but no browser:

codex login --device-auth

Choose where credentials are stored with cli_auth_credentials_store in config.toml:

~/.codex/config.toml
# Store credentials in a file instead of the keychain
cli_auth_credentials_store = "file"
# Or explicitly use the OS keychain
cli_auth_credentials_store = "keyring"

Use "file" in containers and CI runners where keychain access is unavailable.

Codex inherits your shell environment. Set up your development environment before launching Codex so it does not spend tokens probing what to activate:

# Activate your Python venv BEFORE launching Codex
source .venv/bin/activate
# Start required daemons
docker compose up -d postgres redis
# Export variables Codex will need
export DATABASE_URL="postgresql://localhost:5432/myapp"
# Now launch Codex
codex
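Those setup steps can be wrapped in a hypothetical pre-flight script that fails fast when something is missing, rather than letting the agent discover it mid-task. The tool names are placeholders; the sketch checks `sh` and `env` so it runs anywhere:

```shell
# Verify required tools and variables exist before launching Codex.
missing=""
for cmd in sh env; do              # in practice: docker, node, psql, ...
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -z "$missing" ]; then
  ready="yes"
  echo "environment ready"        # safe to launch: codex
else
  ready="no"
  echo "missing tools:$missing" >&2
fi
```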

Control which environment variables Codex can see to avoid leaking secrets:

[shell_environment_policy]
inherit = "core"
exclude = ["AWS_*", "AZURE_*", "GITHUB_TOKEN"]
set = { NODE_ENV = "development" }

This keeps PATH and HOME but strips cloud credentials. The set table injects variables into every subprocess Codex spawns.
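The effect can be approximated with plain POSIX tools: env -i starts a subprocess from an empty environment and re-adds only what you pass, which is roughly what inherit = "core" plus an exclude list does. This is an illustration only, not Codex itself:

```shell
# The "leaked" secret is stripped because env -i only forwards PATH, HOME,
# and the explicitly injected NODE_ENV.
result=$(AWS_SECRET_ACCESS_KEY=abc \
  env -i PATH="$PATH" HOME="$HOME" NODE_ENV=development \
  sh -c 'echo "AWS=${AWS_SECRET_ACCESS_KEY:-unset} NODE_ENV=$NODE_ENV"')
echo "$result"   # AWS=unset NODE_ENV=development
```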

Tab-completion for all Codex commands and flags:

# Zsh (add to ~/.zshrc, after compinit)
eval "$(codex completion zsh)"
# Bash (add to ~/.bashrc)
eval "$(codex completion bash)"
# Fish
codex completion fish | source

Now codex ex<TAB> expands to codex exec and all flags autocomplete.

Profiles let you switch between review, development, and automation setups without editing config each time. Define them in config.toml:

~/.codex/config.toml
model = "gpt-5.3-codex"
[profiles.review]
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
approval_policy = "never"
review_model = "gpt-5.3-codex"
[profiles.quick]
model = "gpt-5.1-codex-mini"
approval_policy = "on-request"
[profiles.oss]
model_provider = "ollama"

Switch on the fly:

codex --profile review # Deep analysis mode
codex --profile quick # Fast iteration mode
codex --oss # Local model via Ollama

To make a profile the default, set it at the top level of config.toml:

~/.codex/config.toml
profile = "review" # Always starts in review mode unless overridden

Toggle optional features from the CLI:
# Enable shell snapshots for faster repeated commands
codex features enable shell_snapshot
# Enable unified exec for better PTY handling
codex features enable unified_exec
# List all available feature flags
codex features list

Feature flag changes persist to ~/.codex/config.toml. When using profiles, the change is stored in the active profile.
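For example, running `codex features enable shell_snapshot` with no profile active persists the same keys shown in the starter config above:

```toml
# ~/.codex/config.toml after the command runs
[features]
shell_snapshot = true
```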

Override any config value for a single run without editing files:

# Use a different model for one task
codex --model gpt-5.1-codex-mini "Quick question about this function"
# Override nested config values
codex -c sandbox_workspace_write.network_access=true "Install this npm package"
# Enable live web search for one run
codex --search "What's the latest React 19 API for this pattern?"
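The -c flag takes the same dotted keys as config.toml, so any one-off override can later be made permanent. The network override above, persisted:

```toml
# ~/.codex/config.toml -- persistent form of the -c override
[sandbox_workspace_write]
network_access = true
```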
Troubleshooting:

  • Shell completions not working: The eval line must come after compinit in your shell config. For Zsh, add autoload -Uz compinit && compinit before the eval line if you see command not found: compdef.
  • Config not loading: Run codex features list to see which features are active. Check for syntax errors in your TOML file using the schema validation.
  • Credentials lost after restart: Switch from keyring to file credential storage if your OS keychain resets on login. Check cli_auth_credentials_store in config.
  • Wrong profile applied: Profile names are case-sensitive. Verify with codex --profile <name> features list to confirm the profile loads correctly.
  • Environment variables leaking: Audit your shell_environment_policy settings. Run Codex and ask it to echo $SECRET_VAR to verify exclusions work.