
Scheduled Task Automation with Codex

It is Monday morning. Over the weekend, three dependency vulnerabilities were published, someone merged a commit with a console.log statement, and the test coverage on the payments module dropped from 85% to 72%. You find out about these during standup. But if you had set up Codex automations on Friday, you would have arrived to a prioritized inbox with each issue already identified, analyzed, and — in some cases — fixed.

This guide covers:

  • Ready-to-use automation prompts for the most common recurring development tasks
  • A workflow for testing automations safely before scheduling them
  • Techniques for combining automations with skills for complex multi-step workflows
  • Security and sandbox configuration for unattended background tasks

Automations run locally in the Codex App on a schedule you define. Each run:

  1. Creates a new worktree (for Git repositories) so it never touches your main checkout
  2. Executes the prompt with your default sandbox settings
  3. Reports findings to your Automations inbox in the Codex App sidebar
  4. Archives the run automatically if there is nothing to report
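Under the hood, this isolation is ordinary git worktree machinery. A throwaway sketch of the equivalent commands (Codex manages this automatically; the branch and directory names here are hypothetical):

```shell
# Simulate per-run isolation in a temporary repository.
set -e
cd "$(mktemp -d)"
mkdir project && cd project
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"

# Each run gets one fresh worktree on its own branch, so the
# main checkout is never touched:
git worktree add -q ../automation-run-1 -b automation/run-1
git worktree list   # main checkout plus the run's isolated copy
```

Changes made inside the run's worktree stay there until you choose to sync them back or turn them into a PR.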

You need the Codex App running and the project available on disk for automations to execute.

Never schedule an automation blindly. Test the prompt manually in a regular thread first.

  1. Open the Codex App and create a new thread in the same project.
  2. Choose Worktree mode (since automations use worktrees).
  3. Paste your automation prompt and run it.
  4. Review the results: Did it find the right things? Did it make appropriate changes? Did it stay within scope?

Iterate on the prompt until you are satisfied, then create the automation with the tested prompt.

Here are the automations most teams should run from day one:
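The most common starting point is a daily review of recent commits. One illustrative prompt (the wording is yours to adapt, not an official template):

```text
Review the commits pushed to this repository in the last 24 hours.
Flag bugs, leftover debug statements (for example console.log), and
obvious security issues. Fix anything small and unambiguous in the
worktree; report anything that needs human judgment.
```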

This mirrors the $recent-code-bugfix skill pattern from the official Codex docs. You can also create a skill and invoke it from the automation prompt:

Check my commits from the last 24h and submit a $recent-code-bugfix.

Another staple is a weekly architecture check:

Weekly architecture check:
1. Pull the latest from origin/main
2. Compare the current codebase structure against docs/ARCHITECTURE.md
3. Check for:
- New directories or modules not documented
- Documented modules that no longer exist
- New cross-module dependencies that violate documented boundaries
- New external service integrations not mentioned in the architecture doc
4. If drift is detected, update docs/ARCHITECTURE.md to match reality
Report what changed and whether it represents intentional evolution or accidental drift.

Skills let you encapsulate complex workflows that automations can invoke. Create a skill, save it to your personal or repo skills directory, and reference it with $skill-name in the automation prompt.

Example: Create a $test-coverage-report skill that knows how to run coverage, compare to baselines, and format the output. Then the automation becomes:

Run $test-coverage-report on the latest main branch. If coverage dropped on any module, investigate and propose tests to restore it.

Skills make automations more maintainable: update the skill once, and every automation that uses it gets the improvement.
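As a sketch, a $test-coverage-report skill might be a SKILL.md file in your skills directory along these lines (the frontmatter fields and the baseline file name are assumptions; check the Codex skills docs for the exact layout):

```markdown
---
name: test-coverage-report
description: Run the test suite with coverage, compare per-module numbers
  against the committed baseline, and summarize any regressions.
---

1. Run the project's coverage command (for example `npm run coverage`).
2. Compare per-module results against coverage-baseline.json.
3. Output a table of modules whose coverage dropped, with the delta.
```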

Step 4: Configure Security for Background Tasks


Automations run with your default sandbox settings. Since they run unattended, security configuration matters.

Recommended sandbox settings for automations:

  • workspace-write (default recommendation) — lets Codex read and modify files in the project. Tool calls that need network or access outside the workspace fail unless explicitly allowed.
  • Use rules to selectively allow specific commands (for example, allow npm audit which needs network access).
# Example rule allowing npm audit in automations
[[rules]]
match = "npm audit"
action = "allow"

Approval policy:

Automations use approval_policy = "never" when your organization allows it, so they can run without waiting for manual approval. If your admin has restricted this, automations fall back to your selected approval mode — which may mean they pause and wait for approval, defeating the purpose. Check your organization’s requirements.
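To check or set this explicitly, the relevant keys live in Codex's config.toml (a sketch; confirm the file location and accepted values for your install):

```toml
# ~/.codex/config.toml
approval_policy = "never"            # do not pause unattended runs for approval
sandbox_mode    = "workspace-write"  # the default recommendation from above
```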

The Automations pane in the Codex App sidebar is your inbox. It has a Triage section where automation runs with findings appear. You can:

  • Filter to show All runs or only Unread ones
  • Click into a run to see the full conversation and any diffs
  • Archive runs you have addressed
  • Pin runs you want to keep (this also prevents the worktree from being cleaned up)

Build a routine: every morning, check your automation inbox before standup. Address findings, sync fixes to local, and archive completed items.

Automation runs while you are not looking and makes unintended changes. If sandbox is set to full access and the prompt is vague, Codex might modify files in unexpected ways. Use workspace-write mode and review the first few runs of any new automation before trusting it. The worktree isolation protects your main checkout, but the worktree changes can still be synced or turned into PRs.

Worktree buildup from frequent automations. Daily automations create one worktree per run. Over a week, that is 7 worktrees just for one automation. Codex cleans up worktrees older than 4 days (when you have more than 10), but frequent automations can still accumulate. Archive runs promptly and do not pin them unless you need to keep the worktree.
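If you want to inspect and clean up accumulated worktrees by hand, plain git commands work. A sketch in a throwaway repository (the directory and branch names are hypothetical):

```shell
set -e
cd "$(mktemp -d)"
mkdir project && cd project
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
git worktree add -q ../stale-run -b automation/stale

git worktree list                  # main checkout plus the stale run
git worktree remove ../stale-run   # delete a run you no longer need
git worktree prune                 # clear metadata for deleted worktrees
git worktree list                  # back to just the main checkout
```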

Automation reports the same findings every day. If a linting error has been in the codebase for weeks and nobody fixes it, the daily automation keeps reporting it. Add an ignore mechanism: “Skip issues listed in .automation-ignore.json. Only report new findings since the last run.”
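There is no built-in format for such a file; a hypothetical shape for .automation-ignore.json that a prompt like the one above could reference:

```json
{
  "ignored": [
    {
      "finding": "no-console in src/legacy/logger.js",
      "reason": "legacy module scheduled for deletion"
    }
  ]
}
```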

The Codex App must be running. Automations are local — they run on your machine in the Codex App. If you close the App or your machine is asleep, automations do not run. Enable “Prevent sleep while running” in the App’s settings if you want overnight automations to complete. For truly unattended automation, consider the Codex SDK or GitHub Action for server-side execution.