# Advanced Codex Tips
You have mastered the basics of each Codex surface. You know how worktrees work, your config is tuned, and your AGENTS.md is structured. Now you want to push further: running Codex as an MCP server consumed by other agents, pointing it at local models through Ollama, piping structured output into your build system, and tracking every API call through OpenTelemetry. These are the techniques that turn Codex from a coding assistant into a development infrastructure component.
## What You'll Walk Away With

- Multi-surface orchestration patterns that combine App, CLI, and Cloud
- Custom model provider configurations for proxies, local models, and Azure
- SDK integration patterns for building custom tooling
- Observability setup with OpenTelemetry for tracking Codex usage
- Advanced sandbox tuning for security-sensitive environments
## Multi-Surface Orchestration

### The Surface Handoff Pattern

Each surface has a sweet spot. Chain them for maximum effect:
1. Start in the CLI for quick diagnosis:

   ```shell
   codex "What's causing the TypeScript errors in src/services/?"
   ```

2. Move to the App for parallel implementation:

   - Open a Worktree thread for the fix
   - Open another Worktree thread for tests
   - Review diffs visually in the App's diff pane

3. Submit to Cloud for verification:

   ```shell
   codex cloud exec --env staging --attempts 2 \
     "Run the full integration test suite against these changes"
   ```

4. Back to the CLI for the final commit:

   ```shell
   codex exec --full-auto "Create a PR with a summary of all changes"
   ```
### Cross-Surface Context

The App and CLI share config, AGENTS.md, and skills, but not thread history. To pass context between surfaces:
- Use the App’s integrated terminal to run CLI commands
- Copy relevant summaries from App threads into CLI prompts
- Use `codex resume` to continue an App session from the CLI
When the IDE Extension and App are synced, they share thread visibility and auto-context. This is the smoothest cross-surface flow.
## Custom Model Providers

### Route Through a Proxy

```toml
model = "gpt-5.1"
model_provider = "proxy"

[model_providers.proxy]
name = "OpenAI via internal proxy"
base_url = "http://proxy.internal.company.com"
env_key = "OPENAI_API_KEY"
```

### Use Ollama for Local Models

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"

oss_provider = "ollama"
```

Then run:

```shell
codex --oss "Explain this function"
```
### Azure OpenAI

```toml
[model_providers.azure]
name = "Azure"
base_url = "https://YOUR_PROJECT.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```

### Provider-Specific Tuning

```toml
[model_providers.openai]
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
```

Increase `stream_idle_timeout_ms` if you see timeout errors on long-running tasks. Increase the retry counts for unreliable network conditions.
### Quick Endpoint Override

If you just need to point the built-in OpenAI provider at a different endpoint (e.g., for data residency):

```shell
export OPENAI_BASE_URL="https://us.api.openai.com/v1"
codex
```

No config changes needed.
## Model Reasoning and Output Control

### Adjust Reasoning Effort

```toml
model_reasoning_effort = "high"  # For complex architectural decisions
# or
model_reasoning_effort = "low"   # For simple, fast tasks
```

Options: `minimal`, `low`, `medium`, `high`, and `xhigh` (model-dependent).
### Control Verbosity

```toml
model_verbosity = "low"              # Shorter responses, less explanation
model_reasoning_summary = "concise"  # Brief reasoning summaries
```

For CI logs, suppress reasoning entirely:

```toml
hide_agent_reasoning = true
```

### Context Window Tuning

```toml
model_context_window = 128000
model_auto_compact_token_limit = 100000  # Compact earlier to leave headroom
```

Set `model_auto_compact_token_limit` lower than the context window to trigger compaction before the window fills completely.
## Codex as an MCP Server

Run Codex itself as an MCP server so other agents can consume it:

```shell
codex mcp-server
```

This runs Codex over stdio, allowing another tool or agent to connect and use Codex as a tool. Useful for building multi-agent systems where a coordinator dispatches tasks to Codex.
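Registration on the consuming side depends on the client. As a sketch, assuming a client that follows the common `mcpServers` JSON convention (a `command`/`args` pair per server; the key names here are that convention's, not a Codex requirement):

```json
{
  "mcpServers": {
    "codex": {
      "command": "codex",
      "args": ["mcp-server"]
    }
  }
}
```

With this in place, the client spawns `codex mcp-server` as a child process and speaks MCP over its stdin/stdout.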
## Sandbox Tuning

### Workspace Write with Network

```toml
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = true
writable_roots = ["/Users/me/.pyenv/shims", "/tmp"]
exclude_tmpdir_env_var = false
exclude_slash_tmp = false
```

### Grant Write to Additional Directories
Use `--add-dir` instead of broadening the sandbox:

```shell
codex --cd apps/frontend --add-dir ../backend --add-dir ../shared \
  "Coordinate API changes between frontend and backend"
```

This grants scoped write access without opening `danger-full-access`.
### Test Sandbox Behavior

Use the `codex sandbox` command to test what a command can do under your current settings:

```shell
codex sandbox -- ls /etc
codex sandbox -- cat /etc/passwd
codex sandbox --full-auto -- npm test
```

This runs the command under the same sandbox Codex uses internally, so you can verify policies before the agent encounters them.
## Observability with OpenTelemetry

### Enable OTel Export

```toml
[otel]
environment = "production"
log_user_prompt = false  # Don't export raw prompts

[otel.exporter.otlp-http]
endpoint = "https://otel-collector.internal.company.com/v1/logs"
protocol = "binary"
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
```

### What Gets Exported
Section titled “What Gets Exported”Codex emits structured log events for:
- `codex.conversation_starts`: model, settings, sandbox policy
- `codex.api_request`: status, duration, error details
- `codex.tool_decision`: approved/denied, by config vs. user
- `codex.tool_result`: duration, success, output snippet
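If your collector writes these records out as newline-delimited JSON, a few lines of Python are enough to tally event volume. This is a sketch: the flat record shape and field names (`event`, `status`, `duration_ms`) are illustrative stand-ins, not the exact exported schema, which follows the OTel log-record structure.

```python
import json
from collections import Counter

# Illustrative NDJSON dump; only the event names come from the Codex docs,
# the rest of each record is hypothetical.
SAMPLE_LOGS = """\
{"event": "codex.conversation_starts", "model": "gpt-5.1"}
{"event": "codex.api_request", "status": "ok", "duration_ms": 812}
{"event": "codex.api_request", "status": "error", "duration_ms": 120}
{"event": "codex.tool_decision", "decision": "approved"}"""


def tally_events(ndjson: str) -> Counter:
    """Count occurrences of each Codex event name in an NDJSON dump."""
    return Counter(json.loads(line)["event"] for line in ndjson.splitlines())


counts = tally_events(SAMPLE_LOGS)
print(counts["codex.api_request"])  # 2
```

The same pattern extends to, say, computing error rates by filtering on a status field before counting.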
### Disable Anonymous Metrics

Codex sends anonymous usage data by default. Disable it:

```toml
[analytics]
enabled = false
```

This is separate from OTel export: analytics goes to OpenAI, OTel goes to your infrastructure.
## Advanced Notification Patterns

### Custom Notification Script

```toml
notify = ["python3", "/path/to/notify.py"]
```

The script receives a JSON argument with event details:

```python
#!/usr/bin/env python3
import json
import subprocess
import sys


def main():
    notification = json.loads(sys.argv[1])
    if notification.get("type") != "agent-turn-complete":
        return 0
    title = f"Codex: {notification.get('last-assistant-message', 'Done!')}"
    subprocess.run([
        "terminal-notifier",
        "-title", title,
        "-message", " ".join(notification.get("input-messages", [])),
        "-group", "codex-" + notification.get("thread-id", ""),
    ])
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

### TUI Notification Filtering

```toml
[tui]
notifications = ["agent-turn-complete", "approval-requested"]
notification_method = "osc9"  # Desktop notifications via OSC 9 escape sequence
```

## Feature Flags Worth Knowing
Section titled “Feature Flags Worth Knowing”| Flag | Status | What It Does |
|---|---|---|
| `shell_snapshot` | Beta | Snapshots shell environment for faster repeated commands |
| `unified_exec` | Beta | Uses PTY-backed exec for better terminal handling |
| `remote_compaction` | Experimental | Offloads context compaction to the server |
| `request_rule` | Stable | Smart approval suggestions based on command patterns |
Enable with:
```shell
codex features enable shell_snapshot
codex features enable unified_exec
```

## Prompt Editor for Long Instructions

For complex, multi-paragraph prompts, press `Ctrl + G` in the TUI to open your configured editor. Set the editor:

```shell
export VISUAL=code  # Or vim, nvim, nano, etc.
```

Write the full prompt in your editor, save and close, and Codex sends it. This is far more ergonomic than typing long instructions in the composer.
## When This Breaks

- **Custom provider authentication fails:** Verify the `env_key` environment variable is set and exported. Use `codex login status` to check auth.
- **OTel events not appearing:** Check that `exporter` is set to `otlp-http` or `otlp-grpc`, not `none`. Verify the endpoint is reachable from your machine.
- **Sandbox too restrictive for your workflow:** Use `codex sandbox` to test specific commands. Add `writable_roots` for directories the agent needs.
- **MCP server mode disconnects:** Codex exits when the downstream client closes the connection. Ensure your client maintains the stdio pipe.
- **Feature flags disappear after restart:** Feature flags are persisted to `config.toml`, but profile-scoped flags only apply when that profile is active.
## What's Next

- Team Collaboration: share these advanced configurations across your team
- Setup and Configuration: foundation config that supports these techniques
- AGENTS.md Optimization: layer instructions on top of advanced config