Transform debugging from hours of frustration into minutes of systematic problem-solving. Learn AI-powered debugging workflows that help you identify, isolate, and fix bugs faster than ever before.
Traditional debugging is often slow, manual, and driven by trial and error. AI-powered debugging is systematic, context-aware, and fast, helping you find root causes instead of chasing symptoms.
Impact: Reduce debugging time by 40-70% while finding root causes more reliably.
Strategic Logging
AI places optimal debug logs automatically
Root Cause Analysis
Systematic approach to finding bug origins
Pattern Recognition
AI identifies similar bugs and solutions
Fix Validation
Ensure fixes don’t introduce new bugs
Describe the Symptom
Bug: User authentication fails intermittently
Error: "Token validation failed"
Frequency: ~30% of login attempts
No clear pattern identified
Let AI Add Strategic Logs
// Prompt to Cursor
@file auth/login.js
@file auth/token.js

Add debug logging to trace the authentication flow.
Focus on:
- Token generation and validation
- Timing issues
- State changes
- External API calls
AI Generates Targeted Logging
// AI adds strategic console.logs
async function validateToken(token) {
  console.log('[AUTH] Starting token validation', {
    tokenLength: token?.length,
    timestamp: Date.now(),
    tokenPrefix: token?.substring(0, 10)
  });

  try {
    console.log('[AUTH] Decoding token...');
    const decoded = jwt.verify(token, SECRET);
    console.log('[AUTH] Token decoded successfully', {
      userId: decoded.userId,
      exp: decoded.exp,
      // exp is in seconds, Date.now() is in milliseconds
      timeUntilExpiry: decoded.exp - Math.floor(Date.now() / 1000)
    });

    // More strategic logs...
    return decoded;
  } catch (error) {
    console.error('[AUTH] Token validation failed', {
      error: error.message,
      token: token?.substring(0, 20),
      stack: error.stack
    });
    throw error;
  }
}
Run and Collect Logs
# Use YOLO mode to run automatically
npm test -- --grep "authentication"
# or
npm run dev > debug.log 2>&1
Analyze with AI
@file debug.log
Analyze these logs and identify:
1. The root cause of intermittent failures
2. Any patterns in the failures
3. Suggested fixes
## Bug Investigation Template
### 1. Symptom Documentation
- **What happens**: [Exact behavior]
- **Expected behavior**: [What should happen]
- **Error messages**: [Copy exact errors]
- **Steps to reproduce**: [Minimal steps]

### 2. Context Gathering
@file [error location]
@code [specific function]
@recent-changes
@linter-errors

### 3. AI Analysis Request
Based on the above context:
1. What are the most likely causes?
2. What additional information would help?
3. Suggest diagnostic code to add
// Step 1: Ask AI to add memory profiling
/**
 * @cursor Add memory leak detection:
 * - Heap snapshots at intervals
 * - Object allocation tracking
 * - Event listener counting
 * - Memory usage logging
 */

// AI generates:
const heapUsed = process.memoryUsage().heapUsed;
console.log(`Memory: ${Math.round(heapUsed / 1024 / 1024)}MB`);

// Track object counts
const objectCounts = {};
setInterval(() => {
  const snapshot = {};
  // AI adds sophisticated tracking
}, 5000);
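The snippet above is only a skeleton. As a minimal sketch of what the interval-based tracking might look like (the 10-sample window and the "always growing" heuristic are assumptions, not part of the original):

// Sample heap usage on an interval and flag sustained growth as a possible leak
const heapSamples = [];

setInterval(() => {
  const heapUsedMB = process.memoryUsage().heapUsed / 1024 / 1024;
  heapSamples.push(heapUsedMB);
  if (heapSamples.length > 10) heapSamples.shift();

  // Possible leak if the heap grew across every recent sample
  const growing = heapSamples.length === 10 &&
    heapSamples.every((value, i) => i === 0 || value >= heapSamples[i - 1]);

  console.log(`[LEAK-CHECK] Heap: ${heapUsedMB.toFixed(1)}MB`, { growing });
  if (growing) {
    console.warn('[LEAK-CHECK] Heap grew for 10 consecutive samples; possible leak');
  }
}, 5000);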
// Step 1: Ask AI to detect React-specific leaks
/**
 * @cursor Add React memory leak detection:
 * - Track component mount/unmount
 * - Monitor effect cleanup
 * - Check event listener removal
 * - Detect subscription leaks
 */

// AI generates:
useEffect(() => {
  console.log(`[LEAK-CHECK] ${componentName} mounted`);
  const activeSubscriptions = [];

  // Track all subscriptions
  const originalSubscribe = EventEmitter.prototype.on;
  EventEmitter.prototype.on = function (...args) {
    activeSubscriptions.push({
      event: args[0],
      listener: args[1],
      stack: new Error().stack
    });
    return originalSubscribe.apply(this, args);
  };

  return () => {
    // Restore the patched method and report anything still subscribed
    EventEmitter.prototype.on = originalSubscribe;
    console.log(`[LEAK-CHECK] ${componentName} unmounting`, {
      activeSubscriptions: activeSubscriptions.length
    });
  };
}, []);
# Step 1: Use AI to create a git bisect script
@cursor Create a script that:
1. Uses git bisect to find when the bug was introduced
2. Runs our test automatically
3. Identifies the breaking commit
# AI generates:
#!/bin/bash
# bisect-bug.sh

echo "Starting automated git bisect..."

# Define the test command
test_command="npm test -- --grep 'user authentication'"

# Start bisect
git bisect start
git bisect bad HEAD
git bisect good v1.2.0  # Last known good version

# Run automated bisect; git bisect run uses the command's exit code
# (0 = good, non-zero = bad), so run the test directly
git bisect run bash -c "$test_command"

# Get the result: refs/bisect/bad points at the first bad commit
breaking_commit=$(git log --oneline -1 refs/bisect/bad)
echo "Bug introduced in: $breaking_commit"

# Reset
git bisect reset
@git commit [breaking_commit_hash]
@file [changed_files_in_commit]

Analyze this commit and explain:
1. What changed that could cause [bug description]
2. Why this change breaks [feature]
3. Suggest a fix that preserves the intent
// Ask AI to add Redux debugging
/**
 * @cursor Add comprehensive Redux debugging:
 * - Log all actions with payload
 * - Track state changes
 * - Identify which component triggered actions
 * - Detect state mutations
 */

// AI creates middleware:
const debugMiddleware = store => next => action => {
  console.group(`[REDUX] ${action.type}`);
  console.log('Payload:', action.payload);
  console.log('Previous State:', store.getState());
  // Stack frame index is approximate; adjust it for your bundler and browser
  console.log('Triggered by:', new Error().stack.split('\n')[7]);

  const result = next(action);

  console.log('New State:', store.getState());
  console.groupEnd();

  return result;
};
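To activate it, register the middleware when the store is created. A minimal sketch with plain Redux (adapt it if you use Redux Toolkit; the ./reducers path is an assumption):

import { createStore, applyMiddleware } from 'redux';
import rootReducer from './reducers'; // assumed path to your root reducer

// Every dispatched action now flows through debugMiddleware
const store = createStore(rootReducer, applyMiddleware(debugMiddleware));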
// Debug React Context issues
/**
 * @cursor Create Context debugger that:
 * - Tracks all context updates
 * - Shows which component triggered updates
 * - Identifies unnecessary re-renders
 * - Logs context value changes
 */

// AI generates:
const Context = createContext(null);

export const DebugProvider = ({ children, name, initialValue }) => {
  const [value, setValue] = useState(initialValue);
  const updateCount = useRef(0);

  const debugSetValue = useCallback((newValue) => {
    updateCount.current++;
    console.log(`[CONTEXT-${name}] Update #${updateCount.current}`, {
      from: value,
      to: newValue,
      triggeredBy: new Error().stack,
      timestamp: Date.now()
    });
    setValue(newValue);
  }, [value, name]);

  return (
    <Context.Provider value={{ value, setValue: debugSetValue }}>
      {children}
    </Context.Provider>
  );
};
// Step 1: Ask AI to add performance tracking
/**
 * @cursor Add performance profiling to identify bottlenecks:
 * - Function execution timing
 * - Database query duration
 * - API call latency
 * - Rendering performance
 * - Memory allocation
 */

// AI generates comprehensive profiling:
const performanceTracker = {
  marks: new Map(),

  start(label) {
    this.marks.set(label, {
      start: performance.now(),
      memory: process.memoryUsage().heapUsed
    });
  },

  end(label) {
    const mark = this.marks.get(label);
    if (!mark) return;

    const duration = performance.now() - mark.start;
    const memoryDelta = process.memoryUsage().heapUsed - mark.memory;

    console.log(`[PERF] ${label}`, {
      duration: `${duration.toFixed(2)}ms`,
      memory: `${(memoryDelta / 1024).toFixed(2)}KB`,
      slow: duration > 100
    });

    if (duration > 100) {
      console.warn(`[PERF] Slow operation detected: ${label}`);
    }
  }
};

// Usage throughout code
performanceTracker.start('database-query');
const results = await db.query(sql);
performanceTracker.end('database-query');
# When dealing with data processing bugs
@cursor Help me implement binary search debugging:

1. The bug occurs when processing large datasets
2. Works fine with small data (< 100 items)
3. Fails with large data (> 10000 items)

Create a binary search approach to find the exact data size
where it starts failing.
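What the AI produces depends on your pipeline; here is a minimal sketch of such a harness, where processData and generateDataset are hypothetical stand-ins for your real processing function and test-data builder:

// Binary-search the smallest dataset size that fails, starting from the
// known-good (100) and known-bad (10000) sizes described above.
async function findFailureThreshold(processData, generateDataset) {
  let good = 100;
  let bad = 10000;

  while (bad - good > 1) {
    const mid = Math.floor((good + bad) / 2);
    try {
      await processData(generateDataset(mid));
      good = mid; // still works at this size
    } catch (error) {
      bad = mid;  // fails at this size
      console.log(`[BISECT-DATA] Failure at size ${mid}: ${error.message}`);
    }
  }

  console.log(`[BISECT-DATA] Smallest failing size: ${bad}`);
  return bad;
}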
// Compare working vs broken scenarios
/**
 * @cursor Create a comparison debugger:
 *
 * Working scenario: User A can login
 * Broken scenario: User B cannot login
 *
 * Add logging that captures and compares:
 * - All function parameters
 * - Return values
 * - State at each step
 * - Timing differences
 *
 * Output a diff showing where paths diverge
 */
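Again, the generated code will be specific to your app; a minimal sketch of the underlying idea, with recordStep and the scenario labels as assumptions:

// Record labeled steps for each scenario, then report where the traces diverge.
const traces = { working: [], broken: [] };

function recordStep(scenario, step, data) {
  traces[scenario].push({ step, data: JSON.stringify(data), time: Date.now() });
}

function diffTraces() {
  const length = Math.max(traces.working.length, traces.broken.length);
  for (let i = 0; i < length; i++) {
    const a = traces.working[i];
    const b = traces.broken[i];
    if (!a || !b || a.step !== b.step || a.data !== b.data) {
      console.log('[COMPARE] Paths diverge at step', i, { working: a, broken: b });
      return;
    }
  }
  console.log('[COMPARE] No divergence found');
}

Instrument both login paths with the same recordStep labels, run each scenario once, then call diffTraces() to see the first step where they differ.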
// State history tracking
/**
 * @cursor Create time-travel debugging for our app:
 * - Store last 50 state changes
 * - Allow replay of state transitions
 * - Identify which transition caused the bug
 * - Export state history for analysis
 */

const stateHistory = {
  states: [],
  maxHistory: 50,

  record(state, action) {
    this.states.push({
      state: JSON.parse(JSON.stringify(state)),
      action,
      timestamp: Date.now(),
      stack: new Error().stack
    });

    if (this.states.length > this.maxHistory) {
      this.states.shift();
    }
  },

  replay(fromIndex = 0) {
    // Replay state transitions in order, logging each one
    this.states.slice(fromIndex).forEach((entry, i) => {
      console.log(`[REPLAY] #${fromIndex + i}`, entry.action, entry.state);
    });
  },

  export() {
    // Export for analysis
    return JSON.stringify(this.states, null, 2);
  }
};
#!/bin/bash
# auto-debug.sh - AI-powered debugging automation

echo "🔍 Starting AI-powered debugging session..."

# Step 1: Collect error information
echo "📝 Collecting error logs..."
npm test 2>&1 | tee test-output.log

# Step 2: Open logs in Cursor for AI analysis
echo "🤖 Opening in Cursor for analysis..."
cursor test-output.log

# Step 3: Use Cursor's AI to analyze and fix
# In Cursor, you can:
# - Use @test-output.log to reference the error log
# - Ask AI to analyze the errors and suggest fixes
# - Use AI to generate patches for the identified issues

# Step 4: Run verification
echo "✅ Verifying fixes..."
npm test
echo "📊 Debug session complete!"
| Metric | Traditional | AI-Powered | Improvement |
|---|---|---|---|
| Average bug fix time | 4.5 hours | 1.2 hours | 73% faster |
| Root cause accuracy | 60% | 95% | 58% better |
| Regression rate | 15% | 3% | 80% fewer |
| Developer frustration | High | Low | Priceless |
Master these debugging workflows to cut fix times, pinpoint root causes more accurately, and ship fixes with fewer regressions.

Continue with the next guides in this series.
Remember: The best debugger is the one that helps you understand not just what’s broken, but why it broke and how to prevent it from breaking again.