
Bulk Changes with Cloud and Worktrees

Your team just adopted a new logging library and you need to replace the old one across 47 files in 12 packages. A regex find-and-replace handles the simple cases but mangles the complex ones. Running Codex on each file sequentially takes all day. The batch operations pattern lets you distribute the work across parallel agents, each handling a slice of the codebase, and merge the results cleanly.

This guide covers:

  • A workflow for distributing large-scale changes across multiple parallel Codex agents
  • Patterns for both worktree-based (local) and cloud-based batch operations
  • Strategies for consistency checking when multiple agents make similar changes independently
  • Scripts that orchestrate batch codex exec calls with progress tracking
The workflow has five steps:

  1. Define the change — Write a precise description of what needs to change and how.
  2. Partition the work — Split by file, directory, package, or service.
  3. Execute in parallel — Each partition runs as a separate Codex thread (worktree or cloud).
  4. Verify consistency — Run a unifying check (linter, type checker, test suite) across all results.
  5. Merge — Combine the results into a single branch or PR.

For changes confined to a single repository, use worktree threads in the app: each thread gets its own checkout, so parallel agents never clobber each other's working files.

For scripted batch operations, use codex exec with a shell loop:

batch-migrate.sh
#!/bin/bash
PACKAGES=("api" "web" "worker" "shared" "cli")
BRANCH="chore/migrate-logger"

for pkg in "${PACKAGES[@]}"; do
  echo "Processing $pkg..."
  codex exec --full-auto --cd "packages/$pkg" \
    "Replace all imports of old-logger with @company/logger.
     Follow the migration guide in docs/logging-migration.md.
     Run the package-specific tests to verify." &
done
wait
echo "All packages processed. Run the full test suite to verify."

For machine-readable output, add --json and pipe results:

Terminal window
codex exec --json --full-auto --cd "packages/$pkg" \
  "Migrate logging" 2>/dev/null | jq -r '.item.text // empty' | tail -1

When a batch is heavy enough to tie up your machine, submit cloud tasks instead:

Terminal window
for pkg in api web worker shared; do
  codex cloud exec --env monorepo-env \
    "In the $pkg package, replace old-logger with @company/logger.
     Follow the patterns in docs/logging-migration.md. Run tests."
done

Cloud tasks run in isolated containers, so they never contend for local state; any conflicts surface only when you merge the results.

After merging results from multiple agents, run unifying checks:

Terminal window
# Type check across the entire monorepo
pnpm run type-check
# Lint for consistent style
pnpm run lint
# Full test suite
pnpm run test
# Verify no old imports remain
grep -r "from 'old-logger'" packages/ && echo "MIGRATION INCOMPLETE" || echo "MIGRATION COMPLETE"

If any check fails, open a new Codex thread to fix the inconsistencies:

The logging migration left some inconsistencies. Run pnpm lint and fix
all warnings. Then run pnpm test and fix any failures. The old-logger
package should not be imported anywhere.
  • Agents make inconsistent choices: Each agent operates independently. When multiple approaches are valid, they may choose differently. Add explicit constraints to the prompt: “Use the pattern shown in packages/api/src/logger.ts as the reference.”
  • Merge conflicts between batches: Apply batches in a consistent order (alphabetical by package) and resolve conflicts after each merge.
  • One package blocks the rest: If one package has unique constraints, handle it in a separate thread with specialized instructions.
  • Token costs explode: Batch operations multiply costs linearly. Use GPT-5.1-Codex-Mini for straightforward migrations and reserve GPT-5.3-Codex for packages with complex logic.