Model Context Protocol (MCP) servers are the bridge between Cursor and your external tools. While Cursor comes with many pre-built MCP servers, the real power lies in creating custom servers tailored to your organization’s unique needs.
MCP follows a client-server model: Cursor acts as the MCP client, your server exposes the tools and resources it can call, and the two communicate over a transport such as stdio (for local servers) or SSE (for remote ones).
Let’s build a simple MCP server that integrates with a hypothetical internal API.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "internal-api-server",
  version: "1.0.0",
  description: "Connects Cursor to our internal systems"
});

// Define a tool
server.tool(
  "get_user_info",
  "Fetch user information from internal API",
  { userId: z.string().describe("The user's ID") },
  async ({ userId }) => {
    // Your API call here
    const response = await fetch(`https://api.internal.com/users/${userId}`);
    const data = await response.json();

    return {
      content: [{ type: "text", text: JSON.stringify(data, null, 2) }]
    };
  }
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);
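To run the TypeScript server above, install the SDK and zod, then build and start it over stdio. The commands assume a standard npm + tsc setup and a placeholder output path; adjust them to your project.

# Install dependencies for the server above
npm install @modelcontextprotocol/sdk zod

# Compile (assuming a standard tsconfig) and run over stdio
npx tsc
node ./dist/server.js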
The same server in Python, using the Python SDK's FastMCP helper:

# Python MCP server using the SDK's high-level FastMCP API
from mcp.server.fastmcp import FastMCP
import aiohttp
import json

# Create server instance
mcp = FastMCP("internal-api-server")

# Define a tool
@mcp.tool()
async def get_user_info(user_id: str) -> str:
    """Fetch user information from internal API"""
    async with aiohttp.ClientSession() as session:
        async with session.get(f"https://api.internal.com/users/{user_id}") as response:
            data = await response.json()
            return json.dumps(data, indent=2)

# Start the server over stdio
if __name__ == "__main__":
    mcp.run()
Add to your ~/.cursor/mcp.json:
{ "mcpServers": { "internal-api": { "command": "node", "args": ["/path/to/your/server.js"], "env": { "API_KEY": "your-api-key-here" } } }}
Create servers that expose multiple related tools:
// Database analysis MCP server
const dbServer = new McpServer({
  name: "database-analyzer",
  version: "1.0.0"
});

// Tool 1: Schema inspector
dbServer.tool(
  "inspect_schema",
  "Get database schema information",
  { table: z.string() },
  async ({ table }) => {
    const schema = await db.getTableSchema(table);
    return formatSchemaResponse(schema);
  }
);

// Tool 2: Query analyzer
dbServer.tool(
  "analyze_query",
  "Analyze SQL query performance",
  { query: z.string() },
  async ({ query }) => {
    const plan = await db.explainQuery(query);
    return formatQueryPlan(plan);
  }
);

// Tool 3: Data profiler
dbServer.tool(
  "profile_data",
  "Profile data distribution in a table",
  { table: z.string(), column: z.string().optional() },
  async ({ table, column }) => {
    const profile = await db.profileData(table, column);
    return formatDataProfile(profile);
  }
);
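The db client and the format helpers are assumed here; whatever they do internally, each helper has to return the same MCP result shape as the first example. A minimal sketch of one of them:

// Hypothetical helper: wraps schema data in the MCP content shape
function formatSchemaResponse(schema: unknown) {
  return {
    content: [{
      type: "text" as const,
      text: JSON.stringify(schema, null, 2)
    }]
  };
}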
Build servers that maintain state across requests:
class StatefulMcpServer {
  private sessions: Map<string, SessionData> = new Map();
  private server: McpServer;

  constructor() {
    this.server = new McpServer({
      name: "stateful-workflow",
      version: "1.0.0"
    });

    this.setupTools();
  }

  private setupTools() {
    // Start a session
    this.server.tool(
      "start_session",
      "Initialize a new workflow session",
      { workflowType: z.string() },
      async ({ workflowType }) => {
        const sessionId = generateId();
        this.sessions.set(sessionId, {
          type: workflowType,
          state: 'initialized',
          data: {}
        });

        return {
          content: [{ type: "text", text: `Session started: ${sessionId}` }]
        };
      }
    );

    // Execute workflow steps
    this.server.tool(
      "execute_step",
      "Execute next step in workflow",
      { sessionId: z.string(), input: z.any() },
      async ({ sessionId, input }) => {
        const session = this.sessions.get(sessionId);
        if (!session) throw new Error("Session not found");

        // Process based on current state
        const result = await this.processWorkflowStep(session, input);

        return {
          content: [{ type: "text", text: JSON.stringify(result) }]
        };
      }
    );
  }
}
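SessionData, generateId, and processWorkflowStep are left undefined above. A rough sketch of the session type and the step processor, treating the workflow as a simple linear state machine, might look like this:

// Hypothetical session shape and step processor for a simple linear workflow
interface SessionData {
  type: string;
  state: 'initialized' | 'collecting' | 'done';
  data: Record<string, unknown>;
}

async function processWorkflowStep(session: SessionData, input: unknown) {
  switch (session.state) {
    case 'initialized':
      // First step: record the input and move to the next state
      session.data.input = input;
      session.state = 'collecting';
      return { status: 'collecting', message: 'Input recorded' };
    case 'collecting':
      // Second step: finish up and return everything gathered so far
      session.data.finalInput = input;
      session.state = 'done';
      return { status: 'done', result: session.data };
    default:
      return { status: 'done', result: session.data };
  }
}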
Expose dynamic resources that Cursor can browse:
// Documentation server that provides browseable resources
server.resource(
  "api_docs",
  "Browse API documentation",
  async (uri) => {
    // Parse URI to determine what doc to show
    const path = uri.replace("api-docs://", "");

    if (path === "/") {
      // Return list of available APIs
      return {
        content: [{ type: "text", text: await getApiIndex() }]
      };
    }

    // Return specific API documentation
    const doc = await getApiDoc(path);
    return {
      content: [{ type: "text", text: doc }]
    };
  }
);

// Resources can be hierarchical
server.listResources = async () => {
  return [
    { uri: "api-docs://users", name: "User API" },
    { uri: "api-docs://payments", name: "Payment API" },
    { uri: "api-docs://analytics", name: "Analytics API" }
  ];
};
A security team built an MCP server for automated security analysis:
// Main security scanner implementation
class SecurityScannerMcp {
  private scanners = [
    new DependencyScanner(),
    new SecretScanner(),
    new VulnerabilityScanner()
  ];

  async setupTools() {
    // Comprehensive security scan
    this.server.tool(
      "security_scan",
      "Run comprehensive security analysis",
      {
        directory: z.string(),
        scanTypes: z.array(z.enum(['deps', 'secrets', 'vulns'])).optional()
      },
      async ({ directory, scanTypes }) => {
        const results = [];

        for (const scanner of this.scanners) {
          if (!scanTypes || scanTypes.includes(scanner.type)) {
            const findings = await scanner.scan(directory);
            results.push(...findings);
          }
        }

        // Generate report
        const report = new SecurityReport(results);
        return {
          content: [{ type: "text", text: report.toMarkdown() }]
        };
      }
    );

    // Check specific file
    this.server.tool(
      "check_file_security",
      "Security check a specific file",
      { filePath: z.string() },
      async ({ filePath }) => {
        const issues = await this.quickScan(filePath);
        return formatIssues(issues);
      }
    );
  }
}
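The individual scanners are the team's own classes; the loop above only relies on each exposing a type tag and a scan method, so a shared contract between them might look like this sketch:

// Hypothetical shared contract for the individual scanners
interface Finding {
  severity: 'low' | 'medium' | 'high';
  file: string;
  message: string;
}

interface Scanner {
  type: 'deps' | 'secrets' | 'vulns';
  scan(directory: string): Promise<Finding[]>;
}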
// Performance monitoring MCP that connects to APM tools
class PerformanceMcp {
  private apmClient: APMClient;

  constructor() {
    this.apmClient = new APMClient({
      endpoint: process.env.APM_ENDPOINT,
      apiKey: process.env.APM_API_KEY
    });
  }

  setupTools() {
    // Get performance metrics
    this.server.tool(
      "get_performance_metrics",
      "Fetch current performance metrics",
      {
        service: z.string(),
        timeRange: z.enum(['1h', '24h', '7d']).default('1h')
      },
      async ({ service, timeRange }) => {
        const metrics = await this.apmClient.getMetrics({
          service,
          from: getTimeFromRange(timeRange),
          to: new Date()
        });

        return {
          content: [{ type: "text", text: this.formatMetrics(metrics) }]
        };
      }
    );

    // Analyze slow endpoints
    this.server.tool(
      "analyze_slow_endpoints",
      "Find and analyze slow API endpoints",
      {
        threshold: z.number().default(1000),
        limit: z.number().default(10)
      },
      async ({ threshold, limit }) => {
        const slowEndpoints = await this.apmClient.getSlowTransactions({
          threshold,
          limit
        });

        const analysis = await this.analyzeEndpoints(slowEndpoints);
        return {
          content: [{ type: "text", text: analysis }]
        };
      }
    );
  }
}
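getTimeFromRange is assumed above; a small helper that turns the range label into a start timestamp could look like:

// Hypothetical helper: convert a time-range label into a start Date
function getTimeFromRange(range: '1h' | '24h' | '7d'): Date {
  const hours = { '1h': 1, '24h': 24, '7d': 24 * 7 }[range];
  return new Date(Date.now() - hours * 60 * 60 * 1000);
}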
// In Cursor chat:
"Check performance metrics for the payment service"

// MCP server responds with:
/*
Payment Service Performance (Last 1h):
- Response Time: p50=145ms, p95=623ms, p99=1823ms
- Throughput: 1,234 req/min
- Error Rate: 0.23%
- CPU Usage: 67%
- Memory: 2.3GB / 4GB

Top Slow Endpoints:
1. POST /api/payment/process - 1823ms (p99)
2. GET /api/payment/history - 967ms (p99)
*/

"Analyze why the payment processing endpoint is slow"

// Detailed analysis with traces and suggestions
// Deployment automation MCP
class DeploymentMcp {
  setupTools() {
    // Deploy to environment
    this.server.tool(
      "deploy",
      "Deploy application to specified environment",
      {
        environment: z.enum(['dev', 'staging', 'prod']),
        version: z.string().optional(),
        dryRun: z.boolean().default(false)
      },
      async ({ environment, version, dryRun }) => {
        // Pre-deployment checks
        const checks = await this.runPreDeployChecks(environment);
        if (!checks.passed) {
          return {
            content: [{
              type: "text",
              text: `Pre-deploy checks failed:\n${checks.failures.join('\n')}`
            }]
          };
        }

        // Execute deployment
        if (!dryRun) {
          const result = await this.deploy(environment, version);
          await this.notifyTeam(environment, result);

          return {
            content: [{ type: "text", text: this.formatDeployResult(result) }]
          };
        }

        return {
          content: [{ type: "text", text: "Dry run completed successfully" }]
        };
      }
    );
  }
}
// Implement auth in your MCP server
class SecureMcpServer {
  private async authenticate(headers: Headers): Promise<boolean> {
    const token = headers.get('Authorization')?.replace('Bearer ', '');
    if (!token) return false;

    // Verify token with your auth service
    return await this.authService.verifyToken(token);
  }

  async handleRequest(request: Request) {
    if (!await this.authenticate(request.headers)) {
      throw new Error('Unauthorized');
    }

    // Process authenticated request
  }
}
// Always validate and sanitize inputs
server.tool(
  "execute_query",
  "Run a database query",
  {
    query: z.string()
      .min(1)
      .max(1000)
      .refine(
        (q) => !q.match(/DROP|DELETE|TRUNCATE/i),
        "Destructive operations not allowed"
      )
  },
  async ({ query }) => {
    // Additional validation
    const sanitized = sanitizeSQL(query);
    const result = await db.query(sanitized);
    return formatResult(result);
  }
);
// Use environment variables for secrets
const config = {
  apiKey: process.env.API_KEY,
  dbPassword: process.env.DB_PASSWORD,
  // Never hardcode secrets!
};

// Validate required secrets on startup
function validateEnvironment() {
  const required = ['API_KEY', 'DB_PASSWORD'];
  const missing = required.filter(key => !process.env[key]);

  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
}
import { MockMcpClient } from '@mcp/testing';

describe('SecurityScannerMcp', () => {
  let server: SecurityScannerMcp;
  let client: MockMcpClient;

  beforeEach(() => {
    server = new SecurityScannerMcp();
    client = new MockMcpClient(server);
  });

  test('security_scan finds vulnerabilities', async () => {
    const result = await client.callTool('security_scan', {
      directory: './test-project',
      scanTypes: ['vulns']
    });

    expect(result.content[0].text).toContain('Found 3 vulnerabilities');
  });
});
// Test with actual Cursor connection
import { spawn } from 'child_process';

async function testWithCursor() {
  // Start your MCP server
  const serverProcess = spawn('node', ['./dist/index.js']);

  // Configure test Cursor instance
  const testConfig = {
    mcpServers: {
      "test-server": {
        command: "node",
        args: ["./dist/index.js"]
      }
    }
  };

  // Run test scenarios
  // ...
}
# Development setup
npm run build:watch

# Configure your local MCP server in Cursor settings
# Settings > Features > MCP > Add Server
{
  "mcpServers": {
    "my-server-dev": {
      "command": "node",
      "args": ["./dist/index.js"],
      "env": {
        "NODE_ENV": "development",
        "DEBUG": "mcp:*"
      }
    }
  }
}
{ "name": "@company/mcp-internal-tools", "version": "1.0.0", "bin": { "mcp-internal": "./dist/index.js" }, "publishConfig": { "registry": "https://npm.company.com" }}
// Team installs globallynpm install -g @company/mcp-internal-tools
// Configure in Cursor{ "mcpServers": { "internal-tools": { "command": "mcp-internal" } }}
// Deploy as HTTP server
const server = new McpHttpServer({
  port: 3000,
  auth: new OAuthProvider()
});

// Team configures remote URL
{
  "mcpServers": {
    "internal-tools": {
      "transport": "sse",
      "url": "https://mcp.company.com/sse"
    }
  }
}
For large responses, use streaming:
server.tool( "analyze_large_dataset", "Analyze a large dataset", { datasetId: z.string() }, async function* ({ datasetId }) { const dataset = await loadDataset(datasetId);
// Stream results as they're processed for await (const batch of processBatches(dataset)) { yield { content: [{ type: "text", text: `Processed batch: ${JSON.stringify(batch.summary)}\n` }] }; } });
import { LRUCache } from 'lru-cache';

class CachedMcpServer {
  private cache = new LRUCache<string, any>({
    max: 100,
    ttl: 1000 * 60 * 5 // 5 minutes
  });

  async handleExpensiveOperation(params: any) {
    const cacheKey = JSON.stringify(params);

    // Check cache first
    const cached = this.cache.get(cacheKey);
    if (cached) return cached;

    // Perform expensive operation
    const result = await this.expensiveOperation(params);

    // Cache result
    this.cache.set(cacheKey, result);
    return result;
  }
}
// Use debug package for detailed logging
import debug from 'debug';
const log = debug('mcp:my-server');

server.on('tool_call', (tool, params) => {
  log('Tool called:', tool, params);
});

// Run with DEBUG=mcp:* to see logs
# Run the MCP Inspector against your server
npx @modelcontextprotocol/inspector node ./my-server.js

# The inspector gives you an interactive testing interface where you can
# list the server's tools and call them, e.g. get_user_info with {"userId": "123"}
Timeout Issues
Problem: Long-running operations time out.
Solution: Report progress or break the work into smaller operations.
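A sketch of the second approach: split the slow job into a kick-off tool that returns immediately and a polling tool that reports status. The job store and runSlowScan below are hypothetical.

// Hypothetical example: split a long-running scan into start + status tools
const jobs = new Map<string, { done: boolean; summary?: string }>();

server.tool(
  "start_long_scan",
  "Kick off a long-running scan and return a job ID immediately",
  { directory: z.string() },
  async ({ directory }) => {
    const jobId = Math.random().toString(36).slice(2);
    jobs.set(jobId, { done: false });

    // Run the slow work in the background; the tool call returns right away
    runSlowScan(directory)
      .then(summary => jobs.set(jobId, { done: true, summary }))
      .catch(() => jobs.set(jobId, { done: true, summary: "Scan failed" }));

    return { content: [{ type: "text", text: `Scan started: ${jobId}` }] };
  }
);

server.tool(
  "get_scan_status",
  "Check whether a scan has finished and fetch its summary",
  { jobId: z.string() },
  async ({ jobId }) => {
    const job = jobs.get(jobId);
    const text = !job ? "Unknown job" : job.done ? job.summary ?? "Done" : "Still running";
    return { content: [{ type: "text", text }] };
  }
);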
Memory Leaks
Problem: Server memory usage grows over time.
Solution: Clean up resources when they're no longer needed and keep long-lived caches and session maps bounded (WeakMaps or TTL-based eviction both work).
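For the stateful workflow server shown earlier, one simple guard is to timestamp each session and sweep stale ones on a timer. This sketch assumes each session records a lastTouchedAt field (ms since epoch) when it is used; the interval and TTL are arbitrary.

// Sketch: sweep stale sessions out of the stateful server's Map on a timer
declare const sessions: Map<string, { lastTouchedAt: number }>;
const SESSION_TTL_MS = 30 * 60 * 1000; // 30 minutes

setInterval(() => {
  const now = Date.now();
  for (const [id, session] of sessions) {
    if (now - session.lastTouchedAt > SESSION_TTL_MS) {
      sessions.delete(id); // stale sessions can't accumulate forever
    }
  }
}, 60 * 1000);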
Error Handling
Problem: Unhandled errors crash the server.
Solution: Wrap every tool handler in try/catch and return a friendly error message instead of letting the exception propagate.
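One way to apply this everywhere is a small wrapper around handlers. withErrorHandling and fetchUser below are illustrative, not part of the SDK.

// Hypothetical wrapper: catch handler errors and return them as friendly text
function withErrorHandling<Args>(
  handler: (args: Args) => Promise<{ content: { type: "text"; text: string }[] }>
) {
  return async (args: Args) => {
    try {
      return await handler(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return { content: [{ type: "text" as const, text: `Something went wrong: ${message}` }] };
    }
  };
}

// Usage: wrap each handler when registering the tool
server.tool(
  "get_user_info",
  "Fetch user information from internal API",
  { userId: z.string() },
  withErrorHandling(async ({ userId }: { userId: string }) => {
    const data = await fetchUser(userId); // fetchUser is a placeholder
    return { content: [{ type: "text" as const, text: JSON.stringify(data) }] };
  })
);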
Now that you can build custom MCP servers, the next step is to put one to work on your own team's tooling.
Remember: The best MCP servers solve real problems. Start with your biggest pain point and build from there. Each custom integration makes Cursor more powerful for your specific workflow.