
Unit Test Strategies and Generation

You ask the AI to “write unit tests for UserService” and it generates 40 tests. They all pass. Coverage is 95%. You feel good until a customer reports that users with plus signs in their email addresses cannot register. None of those 40 tests covered that case because the AI tested the happy path in 40 slightly different ways. Volume is not coverage. The right tests at the right boundaries catch bugs — and that requires specific, intentional prompting.

  • Prompt patterns that generate tests focused on behavior and edge cases, not just line coverage
  • Mocking strategies that keep tests fast and reliable without over-mocking
  • Techniques for testing error handling, concurrency, and boundary conditions
  • Workflows for maintaining tests as code evolves
  • Mutation testing integration to verify your tests actually catch bugs

The most effective AI-generated tests describe what the code should do, not how it does it internally.

Open the file to test and use Agent mode:

@src/services/order.service.ts
Generate unit tests for OrderService.calculateTotal. Focus on BEHAVIOR:
1. "should apply percentage discount correctly" (10% off $100 = $90)
2. "should apply fixed discount correctly" ($15 off $100 = $85)
3. "should not allow total below zero" (discount > subtotal)
4. "should calculate tax after discount" (tax on discounted amount, not original)
5. "should handle empty cart" (zero items, no discount)
6. "should round to 2 decimal places" (avoid floating point weirdness)
7. "should reject negative quantities" (throw ValidationError)
8. "should handle mixed currency items" (throw CurrencyMismatchError)
Test through the public API only. Do not mock internal methods.
Mock only the database and external service dependencies.
Follow patterns in @src/services/__tests__/payment.service.test.ts

The Boundary Rule: Mock at the Boundary, Not Internally


Stop hardcoding test data. Use factories that generate realistic data with overridable defaults.

Error handling is where most AI-generated tests are weakest. Prompt specifically for error scenarios.

AI tools can generate concurrency tests that most developers skip.

Coverage metrics lie. A test that executes a line does not necessarily verify its behavior. Mutation testing introduces small changes (mutations) to your code and checks if tests catch them.

  1. Install Stryker Mutator

    npm install --save-dev @stryker-mutator/core @stryker-mutator/jest-runner @stryker-mutator/typescript-checker
  2. Configure for your project

    npx stryker init
    # Select Jest runner, TypeScript checker
    # Set mutate to: src/services/**/*.ts (start small)
  3. Run and analyze results

    npx stryker run
    # Review the HTML report - focus on surviving mutations
  4. Use AI to kill surviving mutants

    Feed the surviving mutations back to the AI for targeted test generation.

When your implementation changes, don't regenerate the suite from scratch. Give the AI the diff or the changed function and ask it to update only the affected tests, preserving existing test names and structure so the review stays tractable.

“AI generates tests that all look the same.” You gave a generic prompt. Be specific about the scenarios you want tested. List the edge cases, error conditions, and boundary values explicitly.

“Tests pass locally but fail in CI.” Check for test isolation issues. AI-generated tests sometimes share mutable state between tests. Ensure beforeEach resets all mocks and test state. Add --runInBand to CI if tests have hidden parallelism issues.

“Mutation score is low despite high coverage.” Your tests are executing code without verifying its output. Focus on adding assertions for return values, side effects, and error conditions. Coverage without assertions is meaningless.

“Test suite is slow after AI generated hundreds of tests.” Review for redundancy. Ask the AI: “Analyze these tests and identify sets that cover the same code paths. Which tests can be removed without reducing mutation coverage?”