System architecture forms the foundation of successful software projects. This lesson demonstrates how Cursor IDE’s AI capabilities transform architectural design from an abstract, experience-dependent process into a systematic, guided approach that incorporates best practices and patterns.
Traditional architecture design requires years of experience and deep knowledge of patterns, trade-offs, and emerging technologies. AI assistance democratizes this expertise, helping developers make informed architectural decisions and avoid common pitfalls.
Pattern Selection
AI recommends appropriate architectural patterns for your use case
Scalability Design
AI helps design systems that scale horizontally and vertically
Technology Selection
AI suggests optimal technology stacks based on requirements
Trade-off Analysis
AI explains pros, cons, and implications of architectural decisions
Domain Analysis
// Ask AI to analyze domain and suggest boundaries
"Analyze this e-commerce system and suggest microservice boundaries:
- User management and authentication
- Product catalog and inventory
- Shopping cart and checkout
- Order processing and fulfillment
- Payment processing
- Notification system
- Analytics and reporting

Consider: data ownership, scaling needs, team boundaries, and deployment independence"
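It can help to pin the AI's boundary suggestions down in a reviewable artifact before committing. Here is a hypothetical sketch of such a bounded-context map; the service names, fields, and team assignments are illustrative, not actual AI output:

// Hypothetical bounded-context map; names are illustrative only.
interface BoundedContext {
  service: string;
  ownsData: string[];        // tables/collections only this service writes
  scaling: 'read-heavy' | 'write-heavy' | 'bursty';
  team: string;
}

const contexts: BoundedContext[] = [
  { service: 'catalog', ownsData: ['products', 'inventory'], scaling: 'read-heavy', team: 'commerce' },
  { service: 'checkout', ownsData: ['carts', 'orders'], scaling: 'bursty', team: 'orders' },
  { service: 'payments', ownsData: ['transactions'], scaling: 'write-heavy', team: 'payments' }
];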
Service Design
// AI designs service architecture
"Design microservices architecture with:
- Service communication patterns (sync/async)
- Data consistency strategies
- Service discovery mechanism
- API gateway design
- Distributed tracing
- Circuit breaker patterns"
// AI generates architecture
export interface ServiceArchitecture {
  services: {
    name: string;
    responsibilities: string[];
    api: APIDefinition;
    database: DatabaseConfig;
    dependencies: string[];
    scalingStrategy: ScalingConfig;
  }[];

  communication: {
    synchronous: RestAPIConfig;
    asynchronous: MessageQueueConfig;
    graphql?: GraphQLGatewayConfig;
  };

  infrastructure: {
    serviceDiscovery: 'consul' | 'eureka' | 'kubernetes';
    loadBalancing: LoadBalancerConfig;
    monitoring: MonitoringStack;
  };
}
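For concreteness, one hypothetical entry in `services` might look like the following. The placeholder types below stand in for the project's real config types referenced by the interface:

// Illustrative placeholder types, stand-ins for the config types above.
type APIDefinition = { style: 'REST' | 'gRPC'; basePath: string };
type DatabaseConfig = { engine: string; schema: string };
type ScalingConfig = { min: number; max: number; metric: 'cpu' | 'rps' };

// One hypothetical entry in ServiceArchitecture.services
const orderService = {
  name: 'order-service',
  responsibilities: ['order lifecycle', 'fulfillment tracking'],
  api: { style: 'REST', basePath: '/orders' } satisfies APIDefinition,
  database: { engine: 'postgres', schema: 'orders' } satisfies DatabaseConfig,
  dependencies: ['payment-service', 'inventory-service'],
  scalingStrategy: { min: 2, max: 20, metric: 'rps' } satisfies ScalingConfig
};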
Implementation Blueprint
# AI creates implementation plan
"Generate Kubernetes manifests for microservices:
- Service definitions with proper labels
- Deployment configurations with resource limits
- ConfigMaps for environment-specific settings
- Secrets management
- Network policies for service communication
- Ingress configuration"
# AI generates service manifest
apiVersion: v1
kind: Service
metadata:
  name: user-service
  labels:
    app: user-service
    tier: backend
spec:
  selector:
    app: user-service
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
        version: v1
    spec:
      containers:
        - name: user-service
          image: user-service:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: user-db-secret
                  key: connection-string
            - name: KAFKA_BROKERS
              value: "kafka-0.kafka:9092,kafka-1.kafka:9092"
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
// AI implements event sourcing
"Design event sourcing system for order management:
- Event store design
- Event versioning strategy
- Projection handling
- Snapshot optimization
- CQRS implementation"
// AI generates event sourcing framework
interface Event {
  id: string;
  aggregateId: string;
  type: string;
  version: number;
  timestamp: Date;
  data: any;
  metadata: EventMetadata;
}
abstract class AggregateRoot {
  private uncommittedEvents: Event[] = [];
  protected version: number = 0;

  constructor(public readonly id: string) {}

  protected applyEvent(event: Event): void {
    this.handleEvent(event);
    this.version = event.version;
  }

  protected raiseEvent(eventData: any): void {
    const event: Event = {
      id: generateId(),
      aggregateId: this.id,
      type: eventData.constructor.name,
      version: this.version + 1,
      timestamp: new Date(),
      data: eventData,
      metadata: this.getEventMetadata()
    };

    this.applyEvent(event);
    this.uncommittedEvents.push(event);
  }

  getUncommittedEvents(): Event[] {
    return this.uncommittedEvents;
  }

  markEventsAsCommitted(): void {
    this.uncommittedEvents = [];
  }

  abstract handleEvent(event: Event): void;

  private getEventMetadata(): EventMetadata {
    return {
      userId: getCurrentUser(),
      correlationId: getCorrelationId(),
      causationId: getCausationId()
    };
  }
}
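The generated class relies on a few helpers it never defines (`EventMetadata`, `generateId`, and the ambient-context accessors). A minimal sketch of those pieces, assuming Node's built-in crypto module and trivial static stand-ins:

import { randomUUID } from 'crypto';

// Minimal sketch of the helpers the class above assumes; a real service
// would populate user/correlation context from middleware or
// AsyncLocalStorage rather than these static stand-ins.
interface EventMetadata {
  userId: string;
  correlationId: string;
  causationId: string;
}

const generateId = (): string => randomUUID();
const getCurrentUser = (): string => 'system';        // placeholder
const getCorrelationId = (): string => randomUUID();  // placeholder
const getCausationId = (): string => randomUUID();    // placeholder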
// Order aggregate implementation
class Order extends AggregateRoot {
  private items: OrderItem[] = [];
  private status: OrderStatus = 'pending';
  private customerId: string;

  static create(customerId: string): Order {
    const order = new Order(generateId());
    order.raiseEvent(new OrderCreated(customerId));
    return order;
  }

  addItem(productId: string, quantity: number, price: number): void {
    if (this.status !== 'pending') {
      throw new Error('Cannot add items to non-pending order');
    }

    this.raiseEvent(new ItemAdded(productId, quantity, price));
  }

  submit(): void {
    if (this.items.length === 0) {
      throw new Error('Cannot submit empty order');
    }

    this.raiseEvent(new OrderSubmitted());
  }

  handleEvent(event: Event): void {
    switch (event.type) {
      case 'OrderCreated':
        this.customerId = event.data.customerId;
        break;
      case 'ItemAdded':
        this.items.push({
          productId: event.data.productId,
          quantity: event.data.quantity,
          price: event.data.price
        });
        break;
      case 'OrderSubmitted':
        this.status = 'submitted';
        break;
    }
  }
}
// Event store implementation
class EventStore {
  async saveEvents(events: Event[]): Promise<void> {
    const transaction = await this.db.transaction();

    try {
      for (const event of events) {
        await transaction.insert('events', {
          event_id: event.id,
          aggregate_id: event.aggregateId,
          event_type: event.type,
          event_version: event.version,
          event_data: JSON.stringify(event.data),
          event_metadata: JSON.stringify(event.metadata),
          created_at: event.timestamp
        });
      }

      await transaction.commit();

      // Publish events to message bus
      await this.publishEvents(events);
    } catch (error) {
      await transaction.rollback();
      throw error;
    }
  }

  async getEvents(
    aggregateId: string,
    fromVersion?: number
  ): Promise<Event[]> {
    const query = this.db
      .select('*')
      .from('events')
      .where('aggregate_id', aggregateId)
      .orderBy('event_version');

    if (fromVersion) {
      query.where('event_version', '>', fromVersion);
    }

    const rows = await query;

    return rows.map(row => ({
      id: row.event_id,
      aggregateId: row.aggregate_id,
      type: row.event_type,
      version: row.event_version,
      timestamp: row.created_at,
      data: JSON.parse(row.event_data),
      metadata: JSON.parse(row.event_metadata)
    }));
  }
}
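The store persists and streams events, but rehydration (rebuilding an aggregate by replaying its history) is left implicit. A hedged sketch of a generic repository on top of the EventStore above; the `factory` parameter is an assumption, not part of the generated code:

// Hedged sketch: generic repository that replays history to rebuild state.
class EventSourcedRepository<T extends AggregateRoot> {
  constructor(
    private store: EventStore,
    private factory: (id: string) => T   // creates an empty aggregate
  ) {}

  async load(id: string): Promise<T> {
    const aggregate = this.factory(id);
    for (const event of await this.store.getEvents(id)) {
      aggregate.handleEvent(event);   // replay only; raises no new events
    }
    return aggregate;
  }

  async save(aggregate: T): Promise<void> {
    await this.store.saveEvents(aggregate.getUncommittedEvents());
    aggregate.markEventsAsCommitted();
  }
}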
// AI designs message-driven architecture
"Create message-driven architecture with:
- Message broker selection (Kafka/RabbitMQ/SQS)
- Message schemas and versioning
- Retry and DLQ strategies
- Idempotency handling
- Message ordering guarantees"
// AI implements messaging framework
interface MessageBus {
  publish<T>(topic: string, message: Message<T>): Promise<void>;
  subscribe<T>(
    topic: string,
    handler: MessageHandler<T>,
    options?: SubscriptionOptions
  ): Promise<Subscription>;
}
class KafkaMessageBus implements MessageBus {
  private producer: KafkaProducer;
  private consumers: Map<string, KafkaConsumer> = new Map();

  async publish<T>(topic: string, message: Message<T>): Promise<void> {
    const record = {
      topic,
      key: message.key,
      value: JSON.stringify(message),
      headers: {
        'message-id': message.id,
        'correlation-id': message.correlationId,
        'schema-version': message.schemaVersion,
        'content-type': 'application/json'
      },
      partition: message.partition
    };

    await this.producer.send({
      topic,
      messages: [record]
    });

    // Emit metrics
    this.metrics.increment('messages.published', {
      topic,
      messageType: message.type
    });
  }

  async subscribe<T>(
    topic: string,
    handler: MessageHandler<T>,
    options: SubscriptionOptions = {}
  ): Promise<Subscription> {
    const consumer = new KafkaConsumer({
      groupId: options.consumerGroup || `${topic}-consumer`,
      ...this.config
    });

    await consumer.connect();
    await consumer.subscribe({ topic, fromBeginning: options.fromBeginning });

    const wrappedHandler = this.wrapHandler(handler, options);

    consumer.run({
      eachMessage: async ({ message, partition }) => {
        const parsedMessage = this.parseMessage<T>(message);

        try {
          await wrappedHandler(parsedMessage);

          // Commit offset after successful processing
          await consumer.commitOffsets([{
            topic,
            partition,
            offset: (parseInt(message.offset) + 1).toString()
          }]);
        } catch (error) {
          await this.handleError(error, parsedMessage, options);
        }
      }
    });

    this.consumers.set(topic, consumer);

    return {
      unsubscribe: () => this.unsubscribe(topic)
    };
  }

  private wrapHandler<T>(
    handler: MessageHandler<T>,
    options: SubscriptionOptions
  ): MessageHandler<T> {
    return async (message: Message<T>) => {
      // Idempotency check
      if (options.idempotent) {
        const processed = await this.checkIdempotency(message.id);
        if (processed) {
          this.logger.info('Message already processed', {
            messageId: message.id
          });
          return;
        }
      }

      // Process message
      await handler(message);

      // Mark as processed
      if (options.idempotent) {
        await this.markProcessed(message.id);
      }
    };
  }

  private async handleError(
    error: Error,
    message: Message<any>,
    options: SubscriptionOptions
  ): Promise<void> {
    this.logger.error('Message processing failed', {
      error: error.message,
      messageId: message.id,
      topic: message.topic
    });

    // Retry logic (header values arrive as strings, so coerce to a number)
    if (options.retry) {
      const retryCount = Number(message.headers?.['retry-count'] ?? 0);

      if (retryCount < options.retry.maxAttempts) {
        // Publish to retry topic with delay
        await this.publishRetry(message, retryCount + 1);
      } else {
        // Send to DLQ
        await this.publishToDLQ(message, error);
      }
    }
  }
}
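`checkIdempotency` and `markProcessed` are called above but never implemented. One common approach, sketched here assuming an ioredis-style client, is a keyed processed-marker with a deduplication TTL:

import Redis from 'ioredis';

// Hypothetical idempotency store; the key prefix and 24h dedup window are
// illustrative choices, not part of the generated framework.
class RedisIdempotencyStore {
  constructor(private redis: Redis) {}

  async checkIdempotency(messageId: string): Promise<boolean> {
    return (await this.redis.exists(`processed:${messageId}`)) === 1;
  }

  async markProcessed(messageId: string, ttlSeconds = 86400): Promise<void> {
    // NX + EX: set only if absent, expire after the dedup window
    await this.redis.set(`processed:${messageId}`, '1', 'EX', ttlSeconds, 'NX');
  }
}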
// AI implements saga pattern
"Implement distributed saga for order processing:
- Orchestration vs choreography
- Compensation logic
- State persistence
- Timeout handling
- Monitoring and debugging"
// AI creates saga orchestrator
abstract class Saga<TState> {
  protected state: TState;
  private steps: SagaStep<TState>[] = [];
  private currentStep: number = 0;

  constructor(
    private sagaId: string,
    private repository: SagaRepository
  ) {}

  protected addStep(step: SagaStep<TState>): void {
    this.steps.push(step);
  }

  async execute(): Promise<void> {
    try {
      // Load existing state if resuming
      await this.loadState();

      // Execute remaining steps
      while (this.currentStep < this.steps.length) {
        const step = this.steps[this.currentStep];

        try {
          await this.executeStep(step);
          this.currentStep++;
          await this.saveState();
        } catch (error) {
          // Compensate and rethrow
          await this.compensate();
          throw error;
        }
      }

      // Mark saga as completed
      await this.complete();
    } catch (error) {
      await this.fail(error);
      throw error;
    }
  }

  private async executeStep(step: SagaStep<TState>): Promise<void> {
    this.logger.info(`Executing step: ${step.name}`, {
      sagaId: this.sagaId,
      step: this.currentStep
    });

    // Race the step against a timeout; throwing inside a bare setTimeout
    // callback would never reach this promise chain.
    const timeoutMs = step.timeout || 30000;
    this.state = await Promise.race([
      step.execute(this.state),
      new Promise<never>((_, reject) =>
        setTimeout(
          () => reject(new Error(`Step ${step.name} timed out`)),
          timeoutMs
        )
      )
    ]);
  }

  private async compensate(): Promise<void> {
    this.logger.info('Starting compensation', { sagaId: this.sagaId });

    // Execute compensation in reverse order
    for (let i = this.currentStep - 1; i >= 0; i--) {
      const step = this.steps[i];

      if (step.compensate) {
        try {
          await step.compensate(this.state);
        } catch (error) {
          this.logger.error('Compensation failed', {
            sagaId: this.sagaId,
            step: step.name,
            error: error.message
          });
        }
      }
    }
  }

  private async saveState(): Promise<void> {
    await this.repository.save({
      sagaId: this.sagaId,
      type: this.constructor.name,
      state: this.state,
      currentStep: this.currentStep,
      status: 'running',
      updatedAt: new Date()
    });
  }
}
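`SagaStep` and `SagaRepository` are used but never declared. The shapes implied by the calls above are roughly the following; anything beyond the fields actually referenced is an assumption:

// Inferred from how Saga<TState> uses these types; the extra status values
// are assumptions based on complete()/fail() above.
interface SagaStep<TState> {
  name: string;
  execute: (state: TState) => Promise<TState>;
  compensate?: (state: TState) => Promise<void>;
  timeout?: number; // milliseconds
}

interface SagaRepository {
  save(record: {
    sagaId: string;
    type: string;
    state: unknown;
    currentStep: number;
    status: 'running' | 'completed' | 'failed';
    updatedAt: Date;
  }): Promise<void>;
}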
// Order processing saga
class OrderProcessingSaga extends Saga<OrderSagaState> {
  constructor(sagaId: string, repository: SagaRepository, order: Order) {
    super(sagaId, repository);

    this.state = {
      orderId: order.id,
      customerId: order.customerId,
      items: order.items,
      // Derived here because the payment step charges state.totalAmount
      totalAmount: order.items.reduce(
        (sum, item) => sum + item.price * item.quantity,
        0
      ),
      paymentId: null,
      inventoryReservations: [],
      shipmentId: null
    };

    // Define saga steps
    this.addStep({
      name: 'ReserveInventory',
      execute: async (state) => {
        const reservations = await this.inventoryService
          .reserveItems(state.items);
        return { ...state, inventoryReservations: reservations };
      },
      compensate: async (state) => {
        await this.inventoryService
          .releaseReservations(state.inventoryReservations);
      },
      timeout: 10000
    });

    this.addStep({
      name: 'ProcessPayment',
      execute: async (state) => {
        const payment = await this.paymentService
          .processPayment(state.orderId, state.totalAmount);
        return { ...state, paymentId: payment.id };
      },
      compensate: async (state) => {
        if (state.paymentId) {
          await this.paymentService.refund(state.paymentId);
        }
      },
      timeout: 30000
    });

    this.addStep({
      name: 'CreateShipment',
      execute: async (state) => {
        const shipment = await this.shippingService
          .createShipment(state.orderId, state.items);
        return { ...state, shipmentId: shipment.id };
      },
      compensate: async (state) => {
        if (state.shipmentId) {
          await this.shippingService.cancelShipment(state.shipmentId);
        }
      },
      timeout: 15000
    });
  }
}
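Wiring it together might look like this hypothetical driver; the `sagaRepository` and `order` values are assumed to exist in scope:

// Hypothetical usage sketch (inside an async function).
const saga = new OrderProcessingSaga(generateId(), sagaRepository, order);

try {
  await saga.execute();   // runs each step, persisting state after it succeeds
  console.log('Order processed successfully');
} catch (error) {
  // By the time execute() rejects, compensation has already run in reverse order
  console.error('Order saga failed and was compensated', error);
}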
Load Balancing
Distribute traffic across instances effectively
Session Management
Handle sessions in distributed environment
Data Partitioning
Shard data for horizontal scaling
Cache Strategy
Implement distributed caching
// AI designs scalable architecture
"Design horizontally scalable architecture for social media platform:
- Handle millions of concurrent users
- Real-time messaging and notifications
- Media storage and delivery
- Global distribution
- Cost optimization"
// AI generates scalability design
interface ScalableArchitecture {
  // Load balancing tier
  loadBalancing: {
    global: {
      type: 'GeoDNS';
      regions: string[];
      failoverStrategy: 'active-passive';
    };
    regional: {
      type: 'ALB' | 'NLB';
      algorithm: 'least-connections';
      healthChecks: HealthCheckConfig;
    };
  };

  // Application tier
  application: {
    autoScaling: {
      minInstances: number;
      maxInstances: number;
      targetCPU: number;
      scaleUpThreshold: number;
      scaleDownThreshold: number;
    };
    deployment: {
      strategy: 'blue-green' | 'rolling' | 'canary';
      rollbackTriggers: string[];
    };
  };

  // Data tier
  data: {
    primary: {
      type: 'Aurora PostgreSQL';
      replication: 'multi-master';
      readReplicas: number;
      sharding: ShardingStrategy;
    };
    cache: {
      type: 'Redis Cluster';
      evictionPolicy: 'LRU';
      replication: boolean;
    };
    search: {
      type: 'Elasticsearch';
      shards: number;
      replicas: number;
    };
  };

  // Message queue tier
  messaging: {
    type: 'Kafka';
    partitions: number;
    replicationFactor: number;
    retentionHours: number;
  };
}
// Sharding implementation
class ShardManager {
  private shards: Map<string, DatabaseShard> = new Map();

  constructor(private config: ShardingConfig) {
    this.initializeShards();
  }

  private initializeShards(): void {
    for (let i = 0; i < this.config.shardCount; i++) {
      const shard = new DatabaseShard({
        id: String(i),
        connectionString: this.config.getConnectionString(i),
        keyRange: this.calculateKeyRange(i)
      });

      this.shards.set(shard.id, shard);
    }
  }

  getShardForKey(key: string): DatabaseShard {
    const hash = this.hashKey(key);
    const shardId = hash % this.config.shardCount;
    return this.shards.get(String(shardId))!;
  }

  async reshardData(newShardCount: number): Promise<void> {
    // AI implements resharding logic
    const migrationPlan = this.createMigrationPlan(newShardCount);

    for (const migration of migrationPlan) {
      await this.migrateShard(migration);
    }
  }

  private hashKey(key: string): number {
    // Stable hash of the key; note that modulo placement moves most keys
    // when shardCount changes (see the consistent-hashing sketch below)
    return createHash('sha256')
      .update(key)
      .digest()
      .readUInt32BE(0);
  }
}
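Because hash-modulo placement reshuffles almost every key whenever `shardCount` changes, a real resharding story usually relies on a consistent-hash ring, which moves only about 1/n of keys. A minimal illustrative ring, not part of the generated code:

import { createHash } from 'crypto';

// Minimal consistent-hash ring with virtual nodes; the shard names and
// replica count are illustrative.
class ConsistentHashRing {
  private ring: { point: number; shard: string }[] = [];

  constructor(shards: string[], virtualNodes = 100) {
    for (const shard of shards) {
      for (let v = 0; v < virtualNodes; v++) {
        this.ring.push({ point: this.hash(`${shard}#${v}`), shard });
      }
    }
    this.ring.sort((a, b) => a.point - b.point);
  }

  getShard(key: string): string {
    const h = this.hash(key);
    // First ring point clockwise from the key's hash (binary search
    // would be faster for large rings)
    const entry = this.ring.find(e => e.point >= h) ?? this.ring[0];
    return entry.shard;
  }

  private hash(value: string): number {
    return createHash('sha256').update(value).digest().readUInt32BE(0);
  }
}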
// AI implements multi-level caching
"Design caching strategy for e-commerce platform:
- Browser cache for static assets
- CDN for global distribution
- Application-level cache
- Database query cache
- Cache invalidation strategy"
class CacheArchitecture {
  // Level 1: Browser Cache
  configureBrowserCache() {
    return {
      static: {
        maxAge: 31536000, // 1 year
        immutable: true
      },
      dynamic: {
        maxAge: 0,
        mustRevalidate: true,
        etag: true
      }
    };
  }

  // Level 2: CDN Configuration
  configureCDN() {
    return {
      provider: 'CloudFront',
      origins: [
        {
          domain: 'api.example.com',
          cacheByHeaders: ['Accept', 'Authorization'],
          cacheByQueryString: ['version', 'lang']
        }
      ],
      behaviors: [
        { path: '/api/*', ttl: 300, compress: true },
        { path: '/static/*', ttl: 86400, compress: true }
      ]
    };
  }
}

// Level 3: Application Cache (a standalone class; TypeScript does not
// allow class declarations nested inside another class body)
class ApplicationCache {
  private localCache = new LRUCache<string, any>({
    max: 1000,
    ttl: 1000 * 60 * 5 // 5 minutes
  });

  private redisCache: RedisClient;

  async get<T>(key: string): Promise<T | null> {
    // Check local cache first
    const local = this.localCache.get(key);
    if (local) return local;

    // Check Redis
    const cached = await this.redisCache.get(key);
    if (cached) {
      const parsed = JSON.parse(cached);
      this.localCache.set(key, parsed);
      return parsed;
    }

    return null;
  }

  async set<T>(
    key: string,
    value: T,
    ttl: number = 3600
  ): Promise<void> {
    // Set in both caches
    this.localCache.set(key, value);
    await this.redisCache.setex(
      key,
      ttl,
      JSON.stringify(value)
    );

    // Publish invalidation event for other instances
    await this.publishInvalidation(key);
  }

  async invalidate(pattern: string): Promise<void> {
    // Clear local cache
    for (const key of this.localCache.keys()) {
      if (key.match(pattern)) {
        this.localCache.delete(key);
      }
    }

    // Clear Redis keys
    const keys = await this.redisCache.keys(pattern);
    if (keys.length > 0) {
      await this.redisCache.del(...keys);
    }

    // Notify other instances
    await this.publishInvalidation(pattern);
  }
}
// AI implements zero trust architecture
"Design zero trust security architecture:
- Service-to-service authentication
- End-to-end encryption
- Principle of least privilege
- Continuous verification
- Security monitoring"
class ZeroTrustArchitecture {
  // Service mesh configuration
  configureServiceMesh() {
    return {
      type: 'Istio',
      mtls: {
        mode: 'STRICT',
        certRotation: '24h'
      },
      authorization: {
        defaultPolicy: 'DENY',
        rules: this.generateAuthorizationRules()
      },
      observability: {
        tracing: true,
        metrics: true,
        accessLogs: true
      }
    };
  }
}

// API Gateway security (standalone class; nested class declarations are
// not valid TypeScript)
class SecureAPIGateway {
  async authenticateRequest(request: Request): Promise<AuthContext> {
    // Extract token
    const token = this.extractToken(request);
    if (!token) {
      throw new UnauthorizedError('Missing authentication token');
    }

    // Verify JWT
    const claims = await this.verifyJWT(token);

    // Check token binding
    if (claims.cnf) {
      await this.verifyTokenBinding(request, claims.cnf);
    }

    // Verify device trust
    const deviceTrust = await this.verifyDeviceTrust(request);
    if (!deviceTrust.trusted) {
      throw new UnauthorizedError('Untrusted device');
    }

    // Check user risk score
    const riskScore = await this.calculateRiskScore(claims.sub, request);
    if (riskScore > 0.7) {
      await this.triggerMFA(claims.sub);
    }

    return {
      userId: claims.sub,
      permissions: await this.getPermissions(claims.sub),
      sessionId: claims.sid,
      riskScore
    };
  }

  private async verifyJWT(token: string): Promise<JWTClaims> {
    // Verify signature
    const decoded = jwt.verify(token, this.publicKey, {
      algorithms: ['RS256'],
      issuer: this.config.issuer,
      audience: this.config.audience
    });

    // Check revocation
    const revoked = await this.checkRevocation(decoded.jti);
    if (revoked) {
      throw new UnauthorizedError('Token revoked');
    }

    return decoded;
  }
}

// Data encryption (standalone class)
class EncryptionService {
  async encryptSensitiveData(
    data: any,
    classification: DataClassification
  ): Promise<EncryptedData> {
    // Choose encryption based on classification
    const strategy = this.getEncryptionStrategy(classification);

    // Generate DEK (Data Encryption Key)
    const dek = await this.generateDEK();

    // Encrypt data
    const encrypted = await strategy.encrypt(data, dek);

    // Encrypt DEK with KEK (Key Encryption Key)
    const encryptedDEK = await this.kms.encrypt(dek, {
      keyId: strategy.kekId,
      context: {
        classification: classification,
        timestamp: Date.now()
      }
    });

    return {
      data: encrypted,
      dek: encryptedDEK,
      algorithm: strategy.algorithm,
      classification
    };
  }
}
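The envelope-encryption write path above has a symmetric read path the snippet omits: unwrap the DEK via the KMS, then decrypt the payload. A sketch under the same assumed strategy and KMS interfaces:

// Hypothetical read path; assumes the same interfaces encryptSensitiveData uses.
class EncryptionServiceWithDecrypt extends EncryptionService {
  async decryptSensitiveData(payload: EncryptedData): Promise<any> {
    const strategy = this.getEncryptionStrategy(payload.classification);

    // Unwrap the DEK with the KEK held in the KMS
    const dek = await this.kms.decrypt(payload.dek, { keyId: strategy.kekId });

    // Decrypt the payload locally using the recovered DEK
    return strategy.decrypt(payload.data, dek);
  }
}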
// AI designs performance architecture
"Create high-performance architecture for real-time analytics:
- Sub-second query response
- Handle 1M events/second
- Real-time aggregations
- Historical data analysis
- Cost-effective storage"
// AI generates performance components (standalone classes; nested class
// declarations are not valid TypeScript)

// Event ingestion pipeline
class EventIngestionPipeline {
  private buffer: RingBuffer<Event>;
  private batchProcessor: BatchProcessor;
  private batchSize = 1000; // flush threshold (illustrative value)

  async ingest(event: Event): Promise<void> {
    // Add to ring buffer for batching
    this.buffer.add(event);

    // Process if batch is full
    if (this.buffer.size >= this.batchSize) {
      await this.processBatch();
    }
  }

  private async processBatch(): Promise<void> {
    const batch = this.buffer.drain();

    // Parallel processing
    await Promise.all([
      this.writeToHotStorage(batch),
      this.updateRealTimeAggregates(batch),
      this.publishToStreamProcessors(batch)
    ]);
  }

  private async writeToHotStorage(events: Event[]): Promise<void> {
    // Write to time-series database for recent data
    const timeSeries = events.map(e => ({
      metric: e.type,
      timestamp: e.timestamp,
      value: e.value,
      tags: e.tags
    }));

    await this.timeSeriesDB.writeBatch(timeSeries);
  }
}

// Query optimization
class QueryOptimizer {
  async optimizeQuery(query: AnalyticsQuery): Promise<OptimizedQuery> {
    // Analyze query pattern
    const pattern = this.analyzePattern(query);

    // Choose execution strategy
    if (pattern.isRealTime && pattern.timeRange < 3600) {
      return this.optimizeForHotPath(query);
    } else if (pattern.isAggregation) {
      return this.optimizeForPreAggregates(query);
    } else {
      return this.optimizeForColdStorage(query);
    }
  }

  private optimizeForHotPath(query: AnalyticsQuery): OptimizedQuery {
    return {
      executor: 'TimeSeriesDB',
      indexes: ['timestamp', 'metric_type'],
      cacheKey: this.generateCacheKey(query),
      ttl: 60,
      parallel: true
    };
  }
}

// Storage tiering
class StorageTiering {
  async tierData(): Promise<void> {
    // Hot tier: last 24 hours in memory
    // Warm tier: last 30 days on SSD
    // Cold tier: older than 30 days in object storage
    const cutoffs = {
      hot: Date.now() - 24 * 60 * 60 * 1000,
      warm: Date.now() - 30 * 24 * 60 * 60 * 1000
    };

    // Move data between tiers
    await Promise.all([
      this.moveToWarmTier(cutoffs.hot),
      this.moveToColdTier(cutoffs.warm),
      this.compactColdTier()
    ]);
  }
}
# AI creates cloud-native architecture
"Design Kubernetes-native application architecture:
- Stateless services
- ConfigMaps and Secrets
- Health checks and probes
- Resource limits
- Horizontal pod autoscaling"
# AI generates architecture
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database.host: "postgres.database.svc.cluster.local"
  cache.nodes: "redis-0.redis:6379,redis-1.redis:6379"
  features.flags: |
    {
      "new-ui": true,
      "beta-features": false
    }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-stateful
spec:
  serviceName: app-stateful
  replicas: 3
  selector:
    matchLabels:
      app: app-stateful
  template:
    metadata:
      labels:
        app: app-stateful
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      initContainers:
        - name: init-schema
          image: migrate/migrate
          command: ['migrate', '-path', '/migrations', '-database', '$(DATABASE_URL)', 'up']
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: database-secret
                  key: url
      containers:
        - name: app
          image: myapp:latest
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 9090
              name: metrics
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          envFrom:
            - configMapRef:
                name: app-config
            - secretRef:
                name: app-secrets
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          startupProbe:
            httpGet:
              path: /health/startup
              port: 8080
            failureThreshold: 30
            periodSeconds: 10
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet   # must reference the workload defined above
    name: app-stateful
  minReplicas: 3
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "1000"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max
Modern architecture design is rarely a solo endeavor. Cursor IDE’s collaboration features enable teams to design, review, and implement architectures together, ensuring alignment and knowledge sharing across the organization.
// Team discusses architecture in Slack
"@Cursor analyze the proposed microservices split in this thread"

// Cursor reads the entire Slack thread and creates:
// - Architecture diagram
// - Service boundaries analysis
// - Team concerns addressed
// - Alternative approaches
---
description: Team Architecture Standards
alwaysApply: true
---
- All services must follow hexagonal architecture
- Use event sourcing for audit-critical services
- API versioning required from v1.0
- Each service owns its data (no shared DBs)
- Document ADRs for major decisions
// BugBot configuration for architecture
# .cursor/BUGBOT.md

## Architecture Review Points
- Check for service boundary violations
- Verify no synchronous circular dependencies
- Ensure proper error handling patterns
- Validate scaling assumptions
- Review security boundaries
When implementing complex architectures across teams:
Divide by Domain
# Team A: User Domain
cursor --user-data-dir ~/.cursor-user-service services/user
# "Implement user service following architecture doc"

# Team B: Order Domain
cursor --user-data-dir ~/.cursor-order-service services/order
# "Implement order service with event sourcing"

# Team C: API Gateway
cursor --user-data-dir ~/.cursor-gateway gateway/
# "Implement GraphQL gateway aggregating services"
Share Context via MCP
// Shared .cursor/mcp.json
{
  "mcpServers": {
    "confluence": {
      "command": "confluence-mcp",
      "args": ["--space", "ARCH"],
      "env": {
        "CONFLUENCE_TOKEN": "$CONFLUENCE_TOKEN"
      }
    },
    "figma": {
      "command": "figma-mcp",
      "args": ["--file", "architecture-diagrams"]
    }
  }
}
Synchronize via Background Agents
// Each team runs a background agent
"Monitor other services for interface changes, update our service contracts accordingly"

// Agent watches git, updates interfaces
// Posts to Slack when conflicts detected
Collaborative ADR Process
1. **Propose** (any team member)
   "Draft ADR for switching to event-driven architecture"

2. **Discuss** (team reviews in PR)
   - AI summarizes implications
   - BugBot checks for conflicts
   - Team comments/votes

3. **Implement** (multiple developers)
   - Each takes specific services
   - AI ensures ADR compliance
   - Automated verification

4. **Document** (AI-assisted)
   "Update architecture docs based on ADR-007"
Start Simple
Begin with simple architecture and evolve based on needs
Design for Failure
Assume components will fail and design accordingly
Monitor Everything
Comprehensive monitoring from day one
Document Decisions
Record architectural decisions and rationale
DevOps Integration
Implement CI/CD for your architecture
Performance Testing
Validate architecture under load
Security Review
Conduct security architecture review