Testing is where AI assistance truly shines. Instead of manually writing repetitive test cases or struggling to achieve coverage goals, Cursor’s AI can generate comprehensive test suites, debug failures, and even perform end-to-end testing through browser automation.
The traditional TDD cycle (Red → Green → Refactor) becomes supercharged with AI assistance. Here’s the enhanced workflow:
1. Write Tests First
AI generates comprehensive test cases from requirements
2. Run Tests (Red)
Tests fail initially - this validates they’re testing real behavior
3. Implement Code (Green)
AI writes code to make tests pass, iterating automatically
4. Refactor (Blue)
AI optimizes code while ensuring tests still pass
Enable YOLO mode to unlock automated test execution:
Settings → General → YOLO Mode → Enable
Recommended prompt: any kind of tests are always allowed like vitest, npm test, nr test, etc. also basic build commands like build, tsc, etc. creating files and making directories (like touch, mkdir, etc) is always ok too
With YOLO mode enabled, Agent can run tests, execute basic build commands, and create files and directories on its own, without pausing for your approval at each step.
Let’s implement a complex function using AI-powered TDD:
Agent: Create a function that converts markdown to HTML. Write tests first, then the code, then run the tests and update the code until tests pass.
Watch as Agent:
Creates a comprehensive test file
describe('markdownToHtml', () => {
  it('should convert headers', () => {
    expect(markdownToHtml('# Title')).toBe('<h1>Title</h1>');
    expect(markdownToHtml('## Subtitle')).toBe('<h2>Subtitle</h2>');
  });

  it('should convert bold text', () => {
    expect(markdownToHtml('**bold**')).toBe('<strong>bold</strong>');
  });

  it('should handle code blocks', () => {
    expect(markdownToHtml('`code`')).toBe('<code>code</code>');
  });

  // AI generates many more edge cases...
});
Implements the function
function markdownToHtml(markdown) {
  // Initial implementation
}
Runs tests and sees failures
✗ 6 tests failed
Iteratively fixes the code
Updating implementation to handle headers...
Running tests again...
✗ 3 tests failed

Adding bold text support...
Running tests again...
✓ All tests passed!
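For reference, a minimal implementation that would satisfy the tests shown above might look like this (a sketch only, not the Agent's actual output; a production converter would cover far more of the markdown spec):

function markdownToHtml(markdown) {
  return markdown
    // Headers: '## Subtitle' -> <h2>, '# Title' -> <h1> (check ## before #)
    .replace(/^## (.+)$/gm, '<h2>$1</h2>')
    .replace(/^# (.+)$/gm, '<h1>$1</h1>')
    // Bold: **bold** -> <strong>bold</strong>
    .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
    // Inline code: `code` -> <code>code</code>
    .replace(/`(.+?)`/g, '<code>$1</code>');
}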
Agent: Create unit tests for this UserProfile component. Test all props, user interactions, and edge cases. Use React Testing Library.
Agent generates:
describe('UserProfile', () => {
  it('renders user name and avatar', () => {
    const user = { name: 'John', avatar: 'url' };
    render(<UserProfile user={user} />);
    expect(screen.getByText('John')).toBeInTheDocument();
    expect(screen.getByRole('img')).toHaveAttribute('src', 'url');
  });

  it('handles missing avatar gracefully', () => {
    // Edge case testing
  });

  it('calls onEdit when edit button clicked', () => {
    // Interaction testing
  });
});
Agent: Write comprehensive tests for this API endpoint. Include success cases, error handling, and edge cases.
Agent creates:
describe('POST /api/users', () => {
  it('creates user with valid data', async () => {
    const response = await request(app)
      .post('/api/users')
      .send({ name: 'Test', email: 'test@example.com' });

    expect(response.status).toBe(201);
    expect(response.body).toHaveProperty('id');
  });

  it('validates email format', async () => {
    const response = await request(app)
      .post('/api/users')
      .send({ name: 'Test', email: 'invalid' });

    expect(response.status).toBe(400);
    expect(response.body.error).toContain('email');
  });

  // More test cases...
});
Agent: Create tests for this async data fetching function. Include loading states, success, errors, and retries.
AI handles complex async scenarios:
describe('fetchUserData', () => {
  it('returns user data on success', async () => {
    const mockUser = { id: 1, name: 'Test' };
    fetch.mockResolvedValueOnce({ ok: true, json: async () => mockUser });

    const result = await fetchUserData(1);
    expect(result).toEqual(mockUser);
  });

  it('retries on network failure', async () => {
    fetch.mockRejectedValueOnce(new Error('Network error'));
    fetch.mockResolvedValueOnce({ ok: true, json: async () => ({ id: 1 }) });

    const result = await fetchUserData(1);
    expect(fetch).toHaveBeenCalledTimes(2);
    expect(result).toBeDefined();
  });
});
Leverage Puppeteer/Playwright MCP for comprehensive E2E testing:
Agent: Set up E2E tests for our login flow using Puppeteer. Test successful login, invalid credentials, and forgot password.
Agent creates comprehensive E2E tests:
describe('Login Flow E2E', () => {
  it('successful login redirects to dashboard', async () => {
    // Agent uses Puppeteer MCP
    await page.goto('http://localhost:3000/login');
    await page.type('#email', 'user@example.com');
    await page.type('#password', 'correct-password');
    await page.click('button[type="submit"]');

    await page.waitForNavigation();
    expect(page.url()).toContain('/dashboard');

    // Take screenshot for visual verification
    await page.screenshot({ path: 'login-success.png' });
  });

  it('shows error for invalid credentials', async () => {
    await page.goto('http://localhost:3000/login');
    await page.type('#email', 'user@example.com');
    await page.type('#password', 'wrong-password');
    await page.click('button[type="submit"]');

    const error = await page.waitForSelector('.error-message');
    const text = await error.evaluate(el => el.textContent);
    expect(text).toContain('Invalid credentials');
  });
});
Agent: Add visual regression tests for our key pages. Compare screenshots and flag any visual changes.
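A sketch of what such a check could produce, assuming the jest-image-snapshot package handles the comparison (the Agent's actual setup may differ):

const { toMatchImageSnapshot } = require('jest-image-snapshot');
expect.extend({ toMatchImageSnapshot });

describe('Visual regression', () => {
  it('homepage matches the stored baseline', async () => {
    await page.goto('http://localhost:3000');
    const screenshot = await page.screenshot({ fullPage: true });

    // Fails if the page drifts more than 1% from the committed baseline image
    expect(screenshot).toMatchImageSnapshot({
      failureThreshold: 0.01,
      failureThresholdType: 'percent',
    });
  });
});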
When tests fail, AI becomes your debugging partner:
1. Analyze Failure
Agent: This test is failing. Add console.logs to understand what's happening.
2. Run with Logs
Agent runs the test with added logging and captures the output for analysis
3. Identify Issue
Based on logs: "The issue is the date format doesn't match expected pattern"
4. Fix and Verify
Agent fixes the code and reruns tests to confirm resolution
User: The user authentication test is failing intermittently
Agent: I'll debug this intermittent failure. Let me add logging and timing information to understand the issue.

[Adds strategic console.logs and performance marks]
[Runs test multiple times]

Found it! The test fails when the database connection pool is exhausted. The previous test isn't cleaning up connections. Let me fix this by adding proper cleanup...

[Adds afterEach cleanup hook]
[Runs test 10 times - all pass]
The intermittent failure is now resolved.
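The cleanup the Agent describes could be as simple as the following hooks (a sketch that assumes a node-postgres-style pool exported from a shared db module; adapt to your database client):

const { pool } = require('../src/db'); // hypothetical shared pool module

let client;

beforeEach(async () => {
  // Each test checks out its own connection
  client = await pool.connect();
});

afterEach(() => {
  // The missing step: return the connection to the pool after every test
  client.release();
});

afterAll(async () => {
  // Shut the pool down once the whole suite is done
  await pool.end();
});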
Agent: Analyze our test coverage and write tests to reach 90% coverage for the auth module.
Agent will run the coverage report, identify the files with the weakest coverage, and write tests that target the gaps:
File             | % Stmts | % Branch | % Funcs | % Lines
-----------------|---------|----------|---------|--------
auth/login.js    |   75.00 |    66.67 |  100.00 |   75.00
auth/register.js |   90.00 |    85.00 |  100.00 |   88.89
auth/reset.js    |   45.00 |    33.33 |   66.67 |   45.00
Agent: I'll focus on auth/reset.js, which has low coverage. Creating tests for password reset edge cases...
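The generated tests for auth/reset.js might resemble the sketch below (requestPasswordReset and resetPassword are hypothetical exports used purely for illustration):

const { requestPasswordReset, resetPassword } = require('../auth/reset'); // hypothetical API

describe('password reset edge cases', () => {
  it('responds generically for unknown emails (no user enumeration)', async () => {
    const result = await requestPasswordReset('nobody@example.com');
    expect(result.ok).toBe(true);
  });

  it('rejects an expired reset token', async () => {
    await expect(resetPassword('expired-token', 'NewPass1!')).rejects.toThrow(/expired/i);
  });

  it('rejects a weak replacement password', async () => {
    await expect(resetPassword('valid-token', '123')).rejects.toThrow(/password/i);
  });
});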
Agent: Mock all external API calls in our payment tests. Include both success and failure scenarios.
// AI generates comprehensive mocks
jest.mock('@/services/stripe', () => ({
  createPaymentIntent: jest.fn(),
  confirmPayment: jest.fn(),
  handleWebhook: jest.fn()
}));

describe('Payment Processing', () => {
  beforeEach(() => {
    // Reset mocks between tests
    jest.clearAllMocks();
  });

  it('handles Stripe API errors gracefully', async () => {
    const stripeError = new Error('Card declined');
    stripeError.code = 'card_declined';

    createPaymentIntent.mockRejectedValueOnce(stripeError);

    // Test error handling...
  });
});
Agent: Create data-driven tests for our validation logic. Test with various input combinations.
describe('Input Validation', () => {
  const testCases = [
    { input: 'valid@email.com', expected: true, description: 'valid email' },
    { input: 'invalid.email', expected: false, description: 'missing @' },
    { input: '@example.com', expected: false, description: 'missing local part' },
    { input: 'test@', expected: false, description: 'missing domain' },
    // AI generates many more cases...
  ];

  testCases.forEach(({ input, expected, description }) => {
    it(`validates ${description}`, () => {
      expect(isValidEmail(input)).toBe(expected);
    });
  });
});
Agent: Create performance tests for our API endpoints. Measure response times and identify bottlenecks.
describe('API Performance', () => {
  it('responds within 200ms for user list', async () => {
    const start = performance.now();
    const response = await request(app).get('/api/users');
    const duration = performance.now() - start;

    expect(response.status).toBe(200);
    expect(duration).toBeLessThan(200);
  });

  it('handles 100 concurrent requests', async () => {
    const requests = Array(100).fill(null).map(() =>
      request(app).get('/api/users')
    );

    const responses = await Promise.all(requests);
    const successCount = responses.filter(r => r.status === 200).length;

    expect(successCount).toBeGreaterThan(95); // 95% success rate
  });
});
Agent: Set up quality gates that prevent merging if:
- Test coverage drops below 80%
- Any tests fail
- Performance benchmarks aren't met
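The coverage gate, for example, can be enforced locally with Jest's coverageThreshold option; the failing-test and performance gates would live in your CI pipeline. A minimal sketch:

// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    // Jest exits with an error if any of these global minimums aren't met
    global: { statements: 80, branches: 80, functions: 80, lines: 80 },
  },
};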
Test Behavior, Not Implementation
Focus on what the code does, not how it does it (see the sketch after this list)
Maintain Test Independence
Each test should run in isolation without dependencies
Use Descriptive Names
Test names should clearly describe what they verify
Keep Tests Simple
One assertion per test when possible
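To illustrate the first practice, a contrived sketch using the UserProfile component from earlier: the behavior-focused test asserts on what the user sees and survives refactors, while the pattern described in the trailing comment couples the test to internals and breaks on any restructuring.

import { render, screen } from '@testing-library/react';
import UserProfile from './UserProfile';

// Behavior-focused: asserts what the user actually sees
it('greets the user by name', () => {
  render(<UserProfile user={{ name: 'John', avatar: 'url' }} />);
  expect(screen.getByText('John')).toBeInTheDocument();
});

// Implementation-focused (avoid): reaching into component internals,
// e.g. asserting on private state or that a specific helper was called,
// ties the test to details that legitimately change during refactoring.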
Issue: Tests pass sometimes, fail others
Solution: Agent: Debug this flaky test by running it 20 times and identifying what varies between runs.
Issue: Test suite takes too long
Solution: Agent: Profile our test suite and identify the slowest tests. Suggest optimizations.
Issue: Tests pass but feature is broken
Solution: Agent: Review these tests and identify why they're not catching the actual bug. Strengthen assertions.
Your testing game is now transformed with AI assistance: comprehensive test suites generated from requirements, an automated red-green-refactor loop, AI-driven debugging of failures, and end-to-end coverage through browser automation.