
Performance Optimization - Cursor IDE

Your social media analytics dashboard is struggling. The app takes 8 seconds to load, search operations time out, the dashboard freezes when updating real-time data, and memory usage grows to 2GB after an hour. With 100,000 daily active users complaining, you need to achieve sub-second load times and 60fps interactions.

By completing this lesson, you’ll master:

  • AI-assisted performance profiling
  • Identifying bottlenecks systematically
  • Optimizing React rendering performance
  • Database query optimization
  • Memory leak detection and fixes
  • Building performance monitoring systems

Before you start, you should have:

  • An understanding of web performance basics
  • Chrome DevTools experience
  • Completed the previous lessons
  • Basic knowledge of profiling tools

Transform a slow application by:

  • Reducing initial load from 8s to under 2s
  • Achieving 60fps UI interactions
  • Cutting memory usage by 75%
  • Optimizing database queries to under 50ms
  • Implementing real-time updates efficiently
  • Building performance monitoring

  1. Initial Performance Analysis

Start with comprehensive profiling:

"Analyze this Chrome Performance profile and Lighthouse report:
- Identify the top 5 performance bottlenecks
- Categorize issues (rendering, scripting, network)
- Suggest priority order for fixes
- Estimate impact of each optimization
@performance-profile.json @lighthouse-report.json"

  2. Create Performance Baseline
"Create a performance testing suite that measures:
- Time to Interactive (TTI)
- First Contentful Paint (FCP)
- Cumulative Layout Shift (CLS)
- Memory usage over time
- API response times
- React component render times
Set up automated performance tracking"

Example baseline script:

performance/baseline.ts
import { chromium } from 'playwright';

export async function measurePerformance(url: string) {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();

  // Enable performance tracking before any page script runs
  await context.addInitScript(() => {
    (window as any).performanceMarks = {};
    // Wrap React's Profiler (if the app exposes React globally) to record render times
    if ((window as any).React?.Profiler) {
      const originalProfiler = (window as any).React.Profiler;
      (window as any).React.Profiler = function (props: any) {
        const onRender = (id: string, phase: string, actualDuration: number) => {
          (window as any).performanceMarks[id] = {
            phase,
            duration: actualDuration,
            timestamp: performance.now()
          };
          props.onRender?.(id, phase, actualDuration);
        };
        return originalProfiler({ ...props, onRender });
      };
    }
  });

  // Measure page metrics
  await page.goto(url, { waitUntil: 'networkidle' });

  const metrics = await page.evaluate(() => {
    const navigation = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
    const paint = performance.getEntriesByType('paint');
    return {
      // Navigation timings
      ttfb: navigation.responseStart - navigation.requestStart,
      domContentLoaded: navigation.domContentLoadedEventEnd - navigation.domContentLoadedEventStart,
      loadComplete: navigation.loadEventEnd - navigation.loadEventStart,
      // Paint timings
      fcp: paint.find(p => p.name === 'first-contentful-paint')?.startTime,
      lcp: performance.getEntriesByType('largest-contentful-paint')[0]?.startTime,
      // React profiling
      componentRenders: (window as any).performanceMarks,
      // Memory (non-standard, Chrome-only API)
      memory: (performance as any).memory ? {
        usedJSHeapSize: (performance as any).memory.usedJSHeapSize,
        totalJSHeapSize: (performance as any).memory.totalJSHeapSize,
        jsHeapSizeLimit: (performance as any).memory.jsHeapSizeLimit
      } : null
    };
  });

  await browser.close();
  return metrics;
}

  3. Profile Database Queries

Switch to Agent mode:

@src/api
"Add query profiling to all database operations:
- Log query execution time
- Identify N+1 queries
- Find missing indexes
- Detect slow queries (>100ms)
- Track query frequency
Create a query performance dashboard"
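
A minimal sketch of the kind of instrumentation the agent might add, assuming a Prisma client like the `db` used elsewhere in this lesson. It uses Prisma's `$use` middleware (newer Prisma versions favor client extensions instead), and the 100ms threshold plus console output are placeholders for your own logger or dashboard:

import { PrismaClient } from '@prisma/client';

const SLOW_QUERY_MS = 100;
const queryCounts = new Map<string, number>();

export const db = new PrismaClient();

// Times every query, tracks how often each one runs, and flags slow ones
db.$use(async (params, next) => {
  const label = `${params.model}.${params.action}`;
  queryCounts.set(label, (queryCounts.get(label) ?? 0) + 1);

  const start = performance.now();
  const result = await next(params);
  const duration = performance.now() - start;

  if (duration > SLOW_QUERY_MS) {
    console.warn(`[slow query] ${label} took ${duration.toFixed(1)}ms (seen ${queryCounts.get(label)} times)`);
  }
  return result;
});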

  4. React Rendering Optimization
@src/components/Dashboard.tsx
"Optimize this React component for performance:
- Add proper memoization
- Implement virtualization for lists
- Use React.lazy for code splitting
- Optimize re-renders with useMemo/useCallback
- Fix performance anti-patterns
Show before/after comparison"

Example optimization:

import { memo, useCallback, useMemo, useState } from 'react';
import { AutoSizer, List } from 'react-virtualized';

// Before: slow dashboard with unnecessary re-renders
const Dashboard = () => {
  const [data, setData] = useState([]);
  const [filter, setFilter] = useState('');

  // Bad: creates a new array on every render
  const filteredData = data.filter(item =>
    item.name.includes(filter)
  );

  // Bad: inline function causes child re-renders
  return (
    <div>
      {filteredData.map(item => (
        <ExpensiveComponent
          key={item.id}
          item={item}
          onClick={() => handleClick(item.id)}
        />
      ))}
    </div>
  );
};

// After: optimized with memoization and virtualization
const Dashboard = () => {
  const [data, setData] = useState([]);
  const [filter, setFilter] = useState('');

  // Good: memoized filtered data
  const filteredData = useMemo(() =>
    data.filter(item =>
      item.name.toLowerCase().includes(filter.toLowerCase())
    ),
    [data, filter]
  );

  // Good: stable callback reference
  const handleClick = useCallback((id: string) => {
    console.log('Clicked:', id);
  }, []);

  // Good: virtualized list renders only the visible rows
  const rowRenderer = useCallback(({ index, style }) => {
    const item = filteredData[index];
    return (
      <div style={style}>
        <MemoizedExpensiveComponent
          item={item}
          onClick={handleClick}
        />
      </div>
    );
  }, [filteredData, handleClick]);

  return (
    <AutoSizer>
      {({ height, width }) => (
        <List
          height={height}
          width={width}
          rowCount={filteredData.length}
          rowHeight={80}
          rowRenderer={rowRenderer}
          overscanRowCount={5}
        />
      )}
    </AutoSizer>
  );
};

// Memoized child component: re-renders only when the item actually changes
const MemoizedExpensiveComponent = memo(ExpensiveComponent,
  (prevProps, nextProps) => {
    return prevProps.item.id === nextProps.item.id &&
           prevProps.item.updated === nextProps.item.updated;
  }
);

  5. Bundle Size Optimization
"Analyze and optimize the JavaScript bundle:
- Identify large dependencies
- Suggest tree-shaking opportunities
- Implement code splitting strategy
- Lazy load heavy components
- Optimize webpack configuration
Target: Reduce bundle by 60%"
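
A sketch of the kind of splitting the AI might propose for a webpack project; the cache-group names, the charting-library regex, and the idea of routing heavy libraries into their own chunk are illustrative assumptions, not fixed recommendations:

webpack.config.ts
import type { Configuration } from 'webpack';

const config: Configuration = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        // Rarely-changing vendor code gets its own long-cached chunk
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: -10,
        },
        // Heavy charting libraries load only with the routes that import them
        charts: {
          test: /[\\/]node_modules[\\/](recharts|d3)[\\/]/,
          name: 'charts',
          priority: 10,
        },
      },
    },
  },
};

export default config;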

  6. Image and Asset Optimization
@public/assets
"Optimize all images and assets:
- Convert images to WebP
- Implement responsive images
- Add lazy loading
- Use CDN for static assets
- Implement service worker caching"
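
A sketch of the markup side of this, assuming images are served through an image-resizing CDN; the cdn.example.com base URL, query parameters, and breakpoint widths are made up for illustration:

const CDN = 'https://cdn.example.com';

export function ResponsiveImage({ src, alt }: { src: string; alt: string }) {
  return (
    <picture>
      {/* Serve WebP to browsers that support it */}
      <source
        type="image/webp"
        srcSet={`${CDN}/${src}?w=480&fmt=webp 480w, ${CDN}/${src}?w=1080&fmt=webp 1080w`}
      />
      {/* Fallback with native lazy loading and async decoding */}
      <img
        src={`${CDN}/${src}?w=1080`}
        srcSet={`${CDN}/${src}?w=480 480w, ${CDN}/${src}?w=1080 1080w`}
        sizes="(max-width: 600px) 480px, 1080px"
        loading="lazy"
        decoding="async"
        alt={alt}
      />
    </picture>
  );
}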

  7. API Response Optimization
@src/api/routes
"Optimize API endpoints for performance:
- Add response caching with Redis
- Implement pagination properly
- Use database projections
- Add field filtering
- Compress responses
Each endpoint should respond in under 100ms"

Example optimized endpoint:

api/posts/optimized.ts
import { Redis } from 'ioredis';
import { gzip } from 'zlib';
import { promisify } from 'util';
import type { Request, Response } from 'express';

const redis = new Redis();
const gzipAsync = promisify(gzip);

export async function getOptimizedPosts(req: Request, res: Response) {
  const {
    page = 1,
    limit = 20,
    fields = ['id', 'title', 'summary', 'author', 'createdAt'],
    sort = '-createdAt'
  } = req.query as any;

  // Check the cache first
  const cacheKey = `posts:${page}:${limit}:${fields.join(',')}:${sort}`;
  const cached = await redis.get(cacheKey);

  if (cached) {
    res.setHeader('X-Cache', 'HIT');
    res.setHeader('Content-Type', 'application/json');
    res.setHeader('Content-Encoding', 'gzip');
    return res.send(Buffer.from(cached, 'base64'));
  }

  // Optimized database query: project only the requested fields
  const posts = await db.post.findMany({
    select: fields.reduce((acc, field) => {
      acc[field] = true;
      return acc;
    }, {}),
    skip: (page - 1) * limit,
    take: limit,
    orderBy: parseSort(sort), // app helper: turns '-createdAt' into { createdAt: 'desc' }
    // Use database-level filtering
    where: {
      status: 'published',
      deletedAt: null
    }
  });

  // Include pagination metadata
  const total = await db.post.count({
    where: { status: 'published', deletedAt: null }
  });

  const response = {
    data: posts,
    meta: {
      page,
      limit,
      total,
      pages: Math.ceil(total / limit)
    }
  };

  // Compress the response
  const json = JSON.stringify(response);
  const compressed = await gzipAsync(json);

  // Cache for 5 minutes
  await redis.setex(cacheKey, 300, compressed.toString('base64'));

  res.setHeader('X-Cache', 'MISS');
  res.setHeader('Content-Type', 'application/json');
  res.setHeader('Content-Encoding', 'gzip');
  res.send(compressed);
}

  8. Database Query Optimization
@src/queries
"Optimize these slow database queries:
- Add appropriate indexes
- Rewrite inefficient queries
- Implement query result caching
- Use materialized views where appropriate
- Add query execution plans"
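
The most common win here is removing N+1 patterns. A sketch of one such rewrite; the post and author model names follow the examples used earlier in this lesson and are assumptions about your schema:

// Before (N+1): one query for posts, then one extra query per post for its author
async function loadFeedSlow() {
  const posts = await db.post.findMany({ where: { status: 'published' } });
  return Promise.all(
    posts.map(async (post) => ({
      ...post,
      author: await db.user.findUnique({ where: { id: post.authorId } }),
    }))
  );
}

// After: one joined query that selects only what the UI renders
async function loadFeedFast() {
  return db.post.findMany({
    where: { status: 'published' },
    select: {
      id: true,
      title: true,
      createdAt: true,
      author: { select: { id: true, name: true } },
    },
  });
}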

  9. Real-time Updates Optimization
"Optimize WebSocket/real-time updates:
- Implement intelligent throttling
- Use differential updates
- Add message queuing
- Implement backpressure handling
- Optimize serialization format"
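
A sketch of the throttling and coalescing idea on the client side; the 250ms window and the message shape are assumptions, not a fixed protocol:

type Update = { id: string; changes: Record<string, unknown> };

export function createUpdateBuffer(apply: (updates: Update[]) => void, windowMs = 250) {
  const pending = new Map<string, Update>();
  let timer: ReturnType<typeof setTimeout> | null = null;

  return (update: Update) => {
    // Coalesce multiple updates to the same entity: keep only the latest fields
    const existing = pending.get(update.id);
    pending.set(update.id, {
      id: update.id,
      changes: { ...existing?.changes, ...update.changes },
    });

    // Flush at most once per window instead of re-rendering on every message
    if (!timer) {
      timer = setTimeout(() => {
        apply([...pending.values()]);
        pending.clear();
        timer = null;
      }, windowMs);
    }
  };
}

// Usage sketch: socket.onmessage = (e) => bufferUpdate(JSON.parse(e.data));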

  10. Memory Leak Detection
"Analyze this heap snapshot and find memory leaks:
- Identify retained objects
- Find event listener leaks
- Detect closure memory issues
- Locate DOM node leaks
- Track growing arrays/maps
@heap-snapshot.heapsnapshot"

Example memory leak fix:

// Before: memory leak from event listeners
class DataManager {
  constructor() {
    this.data = [];
    this.handlers = new Map();
  }

  subscribe(event, handler) {
    // Leak: handlers are never removed
    if (!this.handlers.has(event)) {
      this.handlers.set(event, []);
    }
    this.handlers.get(event).push(handler);

    // Leak: adds a new anonymous resize listener on every subscribe
    window.addEventListener('resize', () => {
      this.recalculate();
    });
  }

  recalculate() {
    // Leak: array grows without bound
    this.data.push(new Array(1000000));
  }
}

// After: proper cleanup and memory management
class DataManager {
  constructor() {
    this.data = [];
    this.handlers = new Map();
    this.resizeHandler = this.recalculate.bind(this);
    this.maxDataSize = 100;
  }

  subscribe(event, handler) {
    if (!this.handlers.has(event)) {
      this.handlers.set(event, new Set());
    }
    this.handlers.get(event).add(handler);

    // Add the window listener only once
    if (this.handlers.get(event).size === 1) {
      window.addEventListener('resize', this.resizeHandler);
    }

    // Return an unsubscribe function
    return () => {
      const handlers = this.handlers.get(event);
      handlers.delete(handler);
      if (handlers.size === 0) {
        window.removeEventListener('resize', this.resizeHandler);
        this.handlers.delete(event);
      }
    };
  }

  recalculate() {
    // Bounded buffer: drop the oldest entry once the cap is reached
    if (this.data.length >= this.maxDataSize) {
      this.data.shift();
    }
    this.data.push(this.calculateNewData());
  }

  destroy() {
    // Clean up everything
    window.removeEventListener('resize', this.resizeHandler);
    this.handlers.clear();
    this.data = [];
  }
}

  11. Implement Memory Management
"Create memory management system:
- Object pooling for frequent allocations
- Weak references for caches
- Automatic cleanup routines
- Memory usage monitoring
- Alert on memory spikes"
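
A sketch of two of the requested pieces, a simple object pool and a WeakRef-backed cache; the class names and sizing are illustrative:

class ObjectPool<T> {
  private free: T[] = [];
  constructor(private create: () => T, private reset: (obj: T) => void) {}

  acquire(): T {
    return this.free.pop() ?? this.create();
  }

  release(obj: T) {
    this.reset(obj);      // wipe state so stale data cannot leak between uses
    this.free.push(obj);  // reuse instead of re-allocating on the hot path
  }
}

// Cache whose entries can be garbage-collected under memory pressure
class WeakCache<V extends object> {
  private map = new Map<string, WeakRef<V>>();

  set(key: string, value: V) {
    this.map.set(key, new WeakRef(value));
  }

  get(key: string): V | undefined {
    const value = this.map.get(key)?.deref();
    if (!value) this.map.delete(key); // entry was collected; drop the stale ref
    return value;
  }
}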

  12. Optimize Data Structures
"Optimize data structures for memory efficiency:
- Use typed arrays where appropriate
- Implement virtual scrolling
- Use IndexedDB for large datasets
- Implement data pagination
- Add garbage collection hints"
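
A sketch of the typed-array idea applied to a metric time series; the two-array layout and the 100,000-point cap are assumptions for illustration:

const MAX_POINTS = 100_000;

// Two Float64Arrays (~1.6 MB total) instead of 100k {timestamp, value} objects
const timestamps = new Float64Array(MAX_POINTS);
const values = new Float64Array(MAX_POINTS);
let count = 0;

export function pushPoint(timestamp: number, value: number) {
  const i = count % MAX_POINTS; // ring buffer: oldest points are overwritten
  timestamps[i] = timestamp;
  values[i] = value;
  count++;
}

export function averageValue(): number {
  const n = Math.min(count, MAX_POINTS);
  let sum = 0;
  for (let i = 0; i < n; i++) sum += values[i];
  return n ? sum / n : 0;
}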

  13. Build Performance Dashboard
"Create real-time performance monitoring:
- Track Core Web Vitals
- Monitor API response times
- Track memory usage
- Alert on performance regression
- Create performance budgets"

Example monitoring setup:

monitoring/performance-monitor.ts
// Shape of a recorded sample (declared here so the snippet is self-contained)
interface PerformanceMetric {
  value: number;
  timestamp: number;
  metadata?: any;
}

export class PerformanceMonitor {
  private metrics: Map<string, PerformanceMetric[]> = new Map();
  private observers: PerformanceObserver[] = [];

  initialize() {
    // Monitor Core Web Vitals
    this.observeWebVitals();
    // Monitor React performance
    this.monitorReactPerformance();
    // Monitor API calls
    this.interceptFetch();
    // Monitor memory
    this.monitorMemory();
    // Send metrics every 10 seconds
    setInterval(() => this.sendMetrics(), 10000);
  }

  private observeWebVitals() {
    // LCP Observer
    const lcpObserver = new PerformanceObserver((list) => {
      const entries = list.getEntries();
      const lastEntry = entries[entries.length - 1];
      this.recordMetric('lcp', lastEntry.startTime);
    });
    lcpObserver.observe({ entryTypes: ['largest-contentful-paint'] });

    // FID Observer
    const fidObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as any[]) {
        this.recordMetric('fid', entry.processingStart - entry.startTime);
      }
    });
    fidObserver.observe({ entryTypes: ['first-input'] });

    // CLS Observer
    let clsValue = 0;
    const clsObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as any[]) {
        if (!entry.hadRecentInput) {
          clsValue += entry.value;
          this.recordMetric('cls', clsValue);
        }
      }
    });
    clsObserver.observe({ entryTypes: ['layout-shift'] });

    this.observers.push(lcpObserver, fidObserver, clsObserver);
  }

  private monitorReactPerformance() {
    const win = window as any;
    if (win.React && win.React.Profiler) {
      // Wrap React.Profiler to capture all renders
      const originalProfiler = win.React.Profiler;
      win.React.Profiler = (props: any) => {
        const onRender = (id: string, phase: string, actualDuration: number, baseDuration: number) => {
          this.recordMetric(`react-render-${id}`, actualDuration, {
            phase,
            baseDuration,
            timestamp: performance.now()
          });
          // Alert on slow renders (anything over one 60fps frame)
          if (actualDuration > 16) {
            console.warn(`Slow render detected: ${id} took ${actualDuration}ms`);
          }
          props.onRender?.(id, phase, actualDuration, baseDuration);
        };
        return originalProfiler({ ...props, onRender });
      };
    }
  }

  private interceptFetch() {
    const originalFetch = window.fetch;
    window.fetch = async (...args) => {
      const startTime = performance.now();
      const url = args[0].toString();
      try {
        const response = await originalFetch(...args);
        const duration = performance.now() - startTime;
        this.recordMetric('api-call', duration, {
          url,
          status: response.status,
          method: args[1]?.method || 'GET'
        });
        return response;
      } catch (error) {
        const duration = performance.now() - startTime;
        this.recordMetric('api-error', duration, {
          url,
          error: (error as Error).message
        });
        throw error;
      }
    };
  }

  private monitorMemory() {
    // performance.memory is non-standard (Chrome only)
    if (!(performance as any).memory) return;
    setInterval(() => {
      const memoryInfo = (performance as any).memory;
      this.recordMetric('memory', memoryInfo.usedJSHeapSize, {
        totalJSHeapSize: memoryInfo.totalJSHeapSize,
        jsHeapSizeLimit: memoryInfo.jsHeapSizeLimit
      });
      // Alert on memory spikes
      const usage = memoryInfo.usedJSHeapSize / memoryInfo.jsHeapSizeLimit;
      if (usage > 0.9) {
        console.error('Memory usage critical:', `${(usage * 100).toFixed(1)}%`);
      }
    }, 5000);
  }

  private recordMetric(name: string, value: number, metadata?: any) {
    if (!this.metrics.has(name)) {
      this.metrics.set(name, []);
    }
    this.metrics.get(name)!.push({
      value,
      timestamp: Date.now(),
      metadata
    });
  }

  private async sendMetrics() {
    const metricsToSend: Record<string, PerformanceMetric[]> = {};
    for (const [name, values] of this.metrics.entries()) {
      if (values.length > 0) {
        metricsToSend[name] = values;
        this.metrics.set(name, []); // Clear sent metrics
      }
    }

    if (Object.keys(metricsToSend).length > 0) {
      try {
        await fetch('/api/metrics', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(metricsToSend)
        });
      } catch (error) {
        console.error('Failed to send metrics:', error);
      }
    }
  }
}

  14. Set Performance Budgets
"Create performance budget configuration:
- Maximum bundle sizes
- Loading time thresholds
- Memory usage limits
- API response time SLAs
- Frame rate requirements
Integrate with CI/CD pipeline"
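
A sketch of what such a budget file could look like; every threshold here is an example, not a recommendation for every app:

export const performanceBudgets = {
  bundles: {
    'main.js': 200 * 1024,      // bytes, gzipped
    'vendors.js': 250 * 1024,
  },
  timings: {
    firstContentfulPaint: 1500,  // ms
    timeToInteractive: 2000,
    largestContentfulPaint: 2500,
  },
  api: {
    p95ResponseTime: 100,        // ms
  },
  memory: {
    maxHeapGrowthPerHour: 50 * 1024 * 1024, // bytes
  },
} as const;

// CI can import this and fail the build when a measured value exceeds its budget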

  15. Automate Performance Testing
"Set up automated performance testing:
- Run Lighthouse on every PR
- Load test critical user paths
- Memory leak detection tests
- Bundle size tracking
- Performance regression alerts"
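
A sketch of one such CI check: comparing gzipped bundle sizes against the budget file from the previous step. The dist/ output path and the import path for the budgets are assumptions about your build layout:

import { readFileSync, readdirSync } from 'fs';
import { gzipSync } from 'zlib';
import { join } from 'path';
import { performanceBudgets } from '../performance/budgets';

let failed = false;
for (const [name, limit] of Object.entries(performanceBudgets.bundles)) {
  const base = name.replace('.js', '');
  // Built files are usually content-hashed, e.g. main.3f2a1c.js
  const file = readdirSync('dist').find((f) => f.startsWith(base) && f.endsWith('.js'));
  if (!file) continue;

  const size = gzipSync(readFileSync(join('dist', file))).length;
  console.log(`${file}: ${(size / 1024).toFixed(1)} KB gzipped (budget ${(limit / 1024).toFixed(0)} KB)`);
  if (size > limit) failed = true;
}

if (failed) process.exit(1); // fail the PR check on a budget regression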

Profile before optimizing:

// Profile React components
const ProfiledComponent = () => {
  return (
    <Profiler id="Dashboard" onRender={onRenderCallback}>
      <Dashboard />
    </Profiler>
  );
};

function onRenderCallback(id, phase, actualDuration) {
  console.log(`${id} (${phase}) took ${actualDuration}ms`);
}

Optimize incrementally:

// Step 1: Basic lazy loading
const Dashboard = lazy(() => import('./Dashboard'));

// Step 2: Route-based splitting
const routes = [
  {
    path: '/dashboard',
    component: lazy(() => import('./Dashboard')),
    preload: true
  }
];

// Step 3: Predictive prefetching
// (assumes a lazy-with-preload wrapper that exposes a .preload() method)
const prefetchComponent = (path) => {
  const route = routes.find(r => r.path === path);
  if (route?.preload) {
    route.component.preload();
  }
};

Common optimization patterns:

// Debounce expensive operations
const debouncedSearch = useMemo(
  () => debounce((query) => {
    performSearch(query);
  }, 300),
  []
);

// Use Web Workers for heavy computation
const worker = new Worker('/workers/data-processor.js');
worker.postMessage({ cmd: 'process', data: largeDataset });

// Implement virtual scrolling
<VirtualList
  items={millionItems}
  itemHeight={50}
  renderItem={renderItem}
  buffer={5}
/>

Problem: Janky scrolling and interactions

Solution:

/* Use CSS containment */
.item {
  contain: layout style paint;
}

// Batch DOM updates
requestAnimationFrame(() => {
  // All DOM updates here
});

// Use transform instead of position
transform: translateX(${x}px);

# .cursorignore for better indexing
# Exclude generated files
dist/
build/
*.min.js
*.bundle.js
coverage/
node_modules/
.next/
out/

# Exclude large data files
*.csv
*.sql
data/
fixtures/

Ask the AI to suggest more exclusions:

"Analyze the project and suggest .cursorignore entries
for optimal indexing performance"

Benefits:

  • 70% faster indexing
  • Reduced memory usage
  • More relevant search results
  • Better AI context

AI-Driven Performance Analysis

// Regular performance audits
"Analyze this application for performance issues:
- Check for N+1 queries
- Find unnecessary re-renders
- Identify memory leaks
- Detect blocking operations
- Review bundle sizes"
// AI provides:
// 1. Ranked list of issues
// 2. Impact assessment
// 3. Fix suggestions
// 4. Implementation plan

Push performance further:

  1. Edge Computing

    • Deploy compute to edge locations
    • Implement regional caching
    • Use service workers effectively
    • Optimize for global users
  2. Advanced Techniques

    • WebAssembly for computation
    • GPU acceleration with WebGL
    • Shared ArrayBuffers
    • OffscreenCanvas rendering
  3. Real User Monitoring

    • Track performance by user segment
    • A/B test optimizations
    • Predictive performance alerts
    • User journey optimization
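
For the real-user-monitoring idea above, a tiny sketch of shipping field metrics when the page is hidden; the /api/rum endpoint, the payload shape, and the network-type segmentation are assumptions:

export function flushMetricsOnHide(getMetrics: () => Record<string, unknown>) {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState !== 'hidden') return;

    const payload = JSON.stringify({
      url: location.pathname,
      connection: (navigator as any).connection?.effectiveType, // segment by network type (Chrome-only API)
      metrics: getMetrics(),
    });

    // sendBeacon survives page unload, unlike a normal fetch
    navigator.sendBeacon('/api/rum', payload);
  });
}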

Your optimization succeeds when:

  • ✅ Page loads in under 2 seconds
  • ✅ 60fps interactions achieved
  • ✅ Memory usage stable over time
  • ✅ Core Web Vitals all green
  • ✅ API responses under 100ms p95
  • ✅ Bundle size reduced by 60%
  • ✅ No memory leaks detected
  • ✅ Performance budget maintained

Teams applying these techniques typically report:

  • 80% reduction in load times
  • 90% fewer performance complaints
  • 50% lower infrastructure costs
  • 2x improvement in conversion rates

Before deploying optimizations:

  • Baseline metrics recorded
  • All changes measured
  • No regressions introduced
  • Memory leaks tested
  • Bundle size checked
  • Performance budget passing
  • Monitoring configured
  • Alerts set up
  1. Measure First: Never optimize without data
  2. User-Centric: Focus on perceived performance
  3. Incremental: Small improvements compound
  4. Automate: Make performance part of CI/CD
  5. Monitor Always: Performance degrades over time
  • Initial audit: 2 hours
  • Frontend optimization: 4 hours
  • Backend optimization: 3 hours
  • Memory fixes: 2 hours
  • Monitoring setup: 2 hours
  • Total: ~13 hours (saves hundreds of hours of user time)

You’ve mastered performance optimization. Ready for more?

  • WebAssembly: use WASM for compute-intensive tasks
  • Edge Computing: deploy globally for minimal latency
  • Performance Culture: build performance-first team practices

Continue to Security Audit & Fixes →