Development Patterns
This guide covers proven patterns for effective development with Claude Flow, including best practices, common anti-patterns to avoid, and real-world examples.
Always deploy multiple agents simultaneously for maximum efficiency.
# Deploy complete development team concurrently
npx claude-flow task orchestrate \
--task "Implement user dashboard feature" \
--agents "planner,architect,coder,tester,reviewer" \
--strategy parallel
# Sequential deployment wastes time
npx claude-flow agent spawn planner --task "Plan feature"
# Wait for completion...
npx claude-flow agent spawn coder --task "Implement feature"
# Wait for completion...
npx claude-flow agent spawn tester --task "Test feature"
Use persistent memory to maintain context across sessions.
// Store architectural decisions
await claudeFlow.memory.store({
namespace: 'architecture/decisions',
key: 'database-choice',
value: {
decision: 'PostgreSQL',
rationale: 'ACID compliance required',
alternatives: ['MongoDB', 'DynamoDB'],
date: new Date()
},
ttl: null // Permanent storage
});
// Reference in future sessions
const dbDecision = await claudeFlow.memory.retrieve({
namespace: 'architecture/decisions',
key: 'database-choice'
});
// Losing context between sessions
// No memory storage, decisions lost after session ends
const dbChoice = 'PostgreSQL'; // Lost when session ends
Deploy testing agents before implementation agents.
# TDD swarm deployment
npx claude-flow swarm init tdd-swarm \
--topology mesh \
--agents "tester:3,tdd-london-swarm:1,coder:2,reviewer:1"
# Tests created first, then implementation
npx claude-flow task orchestrate \
--task "Build shopping cart with TDD" \
--sequence "tests-first"
# Implementation without tests
npx claude-flow agent spawn coder \
--task "Build shopping cart"
# Tests added as afterthought
Use hierarchical topology for complex projects with clear leadership needs.
const swarmConfig = {
topology: 'hierarchical',
queen: {
type: 'hierarchical-coordinator',
responsibilities: ['task-distribution', 'conflict-resolution']
},
workers: [
{ type: 'coder', count: 5 },
{ type: 'tester', count: 3 },
{ type: 'reviewer', count: 2 }
],
communication: 'queen-mediated'
};
Implement checkpoints for long-running operations.
const workflow = {
stages: [
{
name: 'data-migration',
checkpoint: true,
rollback: 'automatic',
validation: {
type: 'row-count',
tolerance: 0.001
}
},
{
name: 'schema-update',
checkpoint: true,
preCheck: 'backup-exists',
postCheck: 'integrity-verified'
}
]
};
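A runner for such a checkpointed workflow might look like the sketch below; `runWorkflow` and its per-stage executor map are hypothetical, shown only to make the checkpoint-and-rollback flow concrete (a real runner would restore state, not just report the last checkpoint).

```javascript
// Hypothetical checkpoint runner: executes stages in order, records a
// checkpoint after each success, and stops at the last good checkpoint
// when a stage fails.
function runWorkflow(workflow, executors) {
  const checkpoints = [];
  for (const stage of workflow.stages) {
    try {
      executors[stage.name](); // run this stage's work
      if (stage.checkpoint) checkpoints.push(stage.name);
    } catch (err) {
      // A real runner would now restore the last checkpoint's state.
      return {
        status: 'rolled-back',
        failedStage: stage.name,
        lastCheckpoint: checkpoints.at(-1) ?? null
      };
    }
  }
  return { status: 'completed', checkpoints };
}
```

With the two stages above, a failure in schema-update would leave data-migration as the checkpoint to restore.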
Structure microservices with proper boundaries and communication.
const microservicePattern = {
name: 'user-service',
structure: {
api: {
routes: 'src/api/routes',
middleware: 'src/api/middleware',
validation: 'src/api/validation'
},
business: {
services: 'src/services',
models: 'src/models',
rules: 'src/business-rules'
},
data: {
repositories: 'src/repositories',
entities: 'src/entities',
migrations: 'src/migrations'
}
},
communication: {
sync: 'REST',
async: 'RabbitMQ',
events: 'EventBus'
}
};
Implement loosely coupled, event-driven systems.
// Event sourcing with CQRS
const eventDrivenPattern = {
commands: {
CreateOrder: {
handler: 'OrderCommandHandler',
validation: 'CreateOrderValidator',
events: ['OrderCreated']
}
},
events: {
OrderCreated: {
projections: ['OrderReadModel', 'InventoryProjection'],
subscribers: ['EmailService', 'AnalyticsService']
}
},
readModels: {
OrderReadModel: {
storage: 'PostgreSQL',
updates: 'EventProjector'
}
}
};
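The CreateOrder → OrderCreated flow in this config can be traced with a tiny in-memory sketch; the `EventBus` class and `handleCreateOrder` function below are illustrative stand-ins, not a Claude Flow or event-store API.

```javascript
// Minimal in-memory event bus: projections and subscribers register for
// events; the command handler validates, then emits events rather than
// writing to read models directly.
class EventBus {
  constructor() { this.subscribers = new Map(); }
  subscribe(event, fn) {
    const list = this.subscribers.get(event) ?? [];
    list.push(fn);
    this.subscribers.set(event, list);
  }
  publish(event, payload) {
    for (const fn of this.subscribers.get(event) ?? []) fn(payload);
  }
}

// Command side: validate the command, then emit the resulting event.
function handleCreateOrder(command, bus) {
  if (!command.items?.length) throw new Error('order must contain items');
  bus.publish('OrderCreated', { orderId: command.orderId, items: command.items });
}
```

A read model is then just another subscriber that updates its own storage when `OrderCreated` arrives.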
Manage data access with proper abstraction.
// Repository pattern implementation
interface IUserRepository {
findById(id: string): Promise<User>;
findByEmail(email: string): Promise<User>;
save(user: User): Promise<void>;
}
class UserRepository implements IUserRepository {
constructor(private uow: IUnitOfWork) {}
async findById(id: string): Promise<User> {
return this.uow.db.users.findById(id);
}
async findByEmail(email: string): Promise<User> {
return this.uow.db.users.findByEmail(email);
}
async save(user: User): Promise<void> {
this.uow.registerDirty(user); // queue the change
await this.uow.commit(); // flush inside a single transaction
}
}
// Unit of Work pattern
class UnitOfWork implements IUnitOfWork {
private _users?: UserRepository;
private dirty: User[] = [];
constructor(readonly db: Database) {}
get users(): IUserRepository {
return this._users ??= new UserRepository(this);
}
registerDirty(user: User): void {
this.dirty.push(user);
}
async commit(): Promise<void> {
await this.db.transaction(async (trx) => {
// Persist every queued change atomically
for (const user of this.dirty) await trx.users.save(user);
this.dirty = [];
});
}
}
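To see the queue-then-commit flow end to end, here is a toy in-memory version; `InMemoryUnitOfWork` and `InMemoryUserRepository` are illustrative stand-ins for real persistence code, with a `Map` playing the database.

```javascript
// Changes are queued in the unit of work and only become visible when
// commit() flushes them in one step (standing in for a DB transaction).
class InMemoryUnitOfWork {
  constructor(store) {
    this.store = store;  // Map standing in for the users table
    this.pending = [];   // changes queued until commit()
  }
  registerSave(user) { this.pending.push(user); }
  commit() {
    for (const user of this.pending) this.store.set(user.id, user);
    this.pending = [];
  }
}

class InMemoryUserRepository {
  constructor(uow) { this.uow = uow; }
  findById(id) { return this.uow.store.get(id) ?? null; }
  save(user) { this.uow.registerSave(user); } // queued, not yet visible
}
```

Note that a saved user is not readable until the unit of work commits, which is exactly the isolation a transactional commit provides.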
Layer tests appropriately for maximum coverage and speed.
const testStrategy = {
unit: {
coverage: 85,
tools: ['Jest', 'Vitest'],
pattern: 'src/**/*.test.ts',
parallel: true
},
integration: {
coverage: 70,
tools: ['Jest', 'Supertest'],
pattern: 'tests/integration/**/*.test.ts',
database: 'test-containers'
},
e2e: {
coverage: 'critical-paths',
tools: ['Playwright', 'Cypress'],
pattern: 'tests/e2e/**/*.spec.ts',
environment: 'staging'
}
};
Use appropriate mocking strategies based on testing goals.
// London School - Mock everything
describe('OrderService (London)', () => {
let orderService;
let mockRepo, mockPayment, mockInventory;
beforeEach(() => {
mockRepo = createMock<IOrderRepository>();
mockPayment = createMock<IPaymentService>();
mockInventory = createMock<IInventoryService>();
orderService = new OrderService(mockRepo, mockPayment, mockInventory);
});
test('should process order with all dependencies', async () => {
// Test interactions, not implementation
when(mockInventory.checkStock(any())).thenResolve(true);
when(mockPayment.charge(any())).thenResolve({ id: 'pay_123' });
await orderService.placeOrder(orderData);
verify(mockInventory.checkStock(orderData.items)).once();
verify(mockPayment.charge(orderData.total)).once();
verify(mockRepo.save(any())).once();
});
});
// Chicago School - Use real implementations
describe('OrderService (Chicago)', () => {
let orderService;
let database;
beforeEach(async () => {
database = await createTestDatabase();
const repo = new OrderRepository(database);
const inventory = new InventoryService(database);
const payment = new MockPaymentService(); // Only mock external
orderService = new OrderService(repo, payment, inventory);
});
test('should create order with real data', async () => {
const order = await orderService.placeOrder(orderData);
// Test actual state changes
expect(order.id).toBeDefined();
expect(order.status).toBe('completed');
const savedOrder = await database.orders.findById(order.id);
expect(savedOrder).toMatchObject(order);
});
});
Handle failures without complete system breakdown.
class ResilientService {
async fetchUserData(userId) {
try {
// Primary data source
return await this.primaryDB.getUser(userId);
} catch (primaryError) {
this.logger.warn('Primary DB failed', primaryError);
try {
// Fallback to cache
const cached = await this.cache.getUser(userId);
if (cached && this.isFreshEnough(cached)) {
return cached;
}
} catch (cacheError) {
this.logger.warn('Cache failed', cacheError);
}
try {
// Last resort - read replica
return await this.replicaDB.getUser(userId);
} catch (replicaError) {
// Circuit breaker pattern
this.circuitBreaker.recordFailure();
throw new ServiceUnavailableError('All data sources failed');
}
}
}
}
Prevent cascading failures in distributed systems.
class CircuitBreaker {
constructor(options = {}) {
this.failureThreshold = options.failureThreshold || 5;
this.resetTimeout = options.resetTimeout || 60000;
this.state = 'CLOSED';
this.failures = 0;
this.nextAttempt = Date.now();
}
async execute(operation) {
if (this.state === 'OPEN') {
if (Date.now() < this.nextAttempt) {
throw new Error('Circuit breaker is OPEN');
}
this.state = 'HALF_OPEN';
}
try {
const result = await operation();
this.onSuccess();
return result;
} catch (error) {
this.onFailure();
throw error;
}
}
onSuccess() {
this.failures = 0;
this.state = 'CLOSED';
}
onFailure() {
this.failures++;
if (this.failures >= this.failureThreshold) {
this.state = 'OPEN';
this.nextAttempt = Date.now() + this.resetTimeout;
}
}
}
Implement multi-level caching for optimal performance.
class CacheStrategy {
constructor() {
this.l1Cache = new MemoryCache({ max: 1000, ttl: 60 });
this.l2Cache = new RedisCache({ ttl: 3600 });
this.l3Cache = new CDNCache({ ttl: 86400 });
}
async get(key, fetcher) {
// L1 - Memory cache (fastest)
let value = await this.l1Cache.get(key);
if (value) return value;
// L2 - Redis cache
value = await this.l2Cache.get(key);
if (value) {
await this.l1Cache.set(key, value);
return value;
}
// L3 - CDN cache
value = await this.l3Cache.get(key);
if (value) {
await this.promoteToFasterCaches(key, value);
return value;
}
// Fetch from source
value = await fetcher();
await this.setAllCaches(key, value);
return value;
}
async promoteToFasterCaches(key, value) {
// Copy an L3 hit into the faster L1/L2 tiers
await Promise.all([
this.l1Cache.set(key, value),
this.l2Cache.set(key, value)
]);
}
async setAllCaches(key, value) {
await Promise.all([
this.l1Cache.set(key, value),
this.l2Cache.set(key, value),
this.l3Cache.set(key, value)
]);
}
}
Optimize database access patterns.
class OptimizedRepository {
// N+1 query prevention
async getUsersWithPosts(userIds) {
// Bad: N+1 queries
// const users = await db.users.findMany({ id: { in: userIds } });
// for (const user of users) {
// user.posts = await db.posts.findMany({ userId: user.id });
// }
// Good: Single query with join
const users = await db.users.findMany({
where: { id: { in: userIds } },
include: {
posts: {
orderBy: { createdAt: 'desc' },
take: 10
}
}
});
return users;
}
// Batch loading pattern
async batchLoadUsers(userIds) {
const uniqueIds = [...new Set(userIds)];
const users = await db.users.findMany({
where: { id: { in: uniqueIds } }
});
const userMap = new Map(users.map(u => [u.id, u]));
return userIds.map(id => userMap.get(id));
}
}
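The batch-loading method above reduces to a standalone helper; `fetchMany` below is an illustrative stand-in for the single `findMany` query.

```javascript
// Deduplicate requested ids, fetch them in one query, then rebuild the
// results in the caller's order (duplicates preserved, misses -> null).
function batchLoad(ids, fetchMany) {
  const uniqueIds = [...new Set(ids)];
  const rows = fetchMany(uniqueIds);            // one query instead of N
  const byId = new Map(rows.map(r => [r.id, r]));
  return ids.map(id => byId.get(id) ?? null);   // caller's order preserved
}
```

Preserving the caller's order (including duplicates and misses) is what lets a DataLoader-style batcher sit transparently behind per-item lookups.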
Layer security measures for comprehensive protection.
class SecureService {
async processRequest(request) {
// Layer 1: Rate limiting
await this.rateLimiter.check(request.ip);
// Layer 2: Authentication
const user = await this.authenticator.verify(request.token);
// Layer 3: Authorization
await this.authorizer.check(user, request.resource, request.action);
// Layer 4: Input validation
const validated = await this.validator.validate(request.data);
// Layer 5: SQL injection prevention
const sanitized = this.sanitizer.clean(validated);
// Layer 6: Audit logging
await this.auditLogger.log(user, request.action, sanitized);
// Process request
return await this.processor.handle(sanitized);
}
}
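The same layering can be written as a pipeline of functions that each pass the request through (possibly enriched) or reject it by throwing; the three stub layers below are illustrative, not real rate-limiting or authentication code.

```javascript
// Run a request through each security layer in order; any layer can
// reject by throwing, or enrich the request for later layers.
function runLayers(request, layers) {
  return layers.reduce((req, layer) => layer(req), request);
}

const securityLayers = [
  (req) => { // rate limiting (stub threshold)
    if (req.requestsThisMinute > 100) throw new Error('rate limit exceeded');
    return req;
  },
  (req) => { // authentication (stub: any token accepted)
    if (!req.token) throw new Error('unauthenticated');
    return { ...req, user: { id: 'u1' } };
  },
  (req) => { // input validation (stub)
    if (typeof req.data !== 'object' || req.data === null) throw new Error('invalid payload');
    return req;
  }
];
```

Ordering matters: cheap rejections (rate limiting) run before expensive ones (authentication, validation), so attacks are shed as early as possible.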
Handle sensitive tokens and secrets safely.
class TokenManager {
constructor() {
this.encryption = new AES256Encryption(process.env.ENCRYPTION_KEY);
}
async storeToken(userId, token) {
// Never store plain text tokens
const encrypted = await this.encryption.encrypt(token);
const hash = await this.hash(token);
await db.tokens.create({
userId,
tokenHash: hash,
encryptedToken: encrypted,
expiresAt: new Date(Date.now() + 3600000)
});
}
async validateToken(token) {
const hash = await this.hash(token);
const stored = await db.tokens.findFirst({
where: {
tokenHash: hash,
expiresAt: { gt: new Date() }
}
});
if (!stored) return false;
// Constant-time comparison
return crypto.timingSafeEqual(
Buffer.from(hash),
Buffer.from(stored.tokenHash)
);
}
}
Zero-downtime deployments with instant rollback.
const blueGreenDeployment = {
stages: [
{
name: 'deploy-green',
actions: [
'Build new version',
'Deploy to green environment',
'Run smoke tests'
]
},
{
name: 'validate-green',
actions: [
'Run integration tests',
'Performance benchmarks',
'Security scan'
],
rollbackOn: 'any-failure'
},
{
name: 'switch-traffic',
actions: [
'Update load balancer',
'Monitor error rates',
'Check performance metrics'
],
rollbackDelay: 300 // 5 minutes
},
{
name: 'cleanup',
actions: [
'Remove blue environment',
'Update DNS records',
'Clear CDN cache'
]
}
]
};
Control feature rollout and enable A/B testing.
class FeatureFlags {
async isEnabled(feature, context = {}) {
const flag = await this.getFlag(feature);
if (!flag || !flag.enabled) return false;
// User targeting first, so targeted users are never
// excluded by the percentage bucket
if (flag.targetUsers?.includes(context.userId)) {
return true;
}
// Group targeting
if (flag.targetGroups?.some(g => context.groups?.includes(g))) {
return true;
}
// Percentage rollout for everyone else
if (flag.percentage < 100) {
const hash = this.hash(context.userId || context.sessionId);
return (hash % 100) < flag.percentage;
}
return flag.default;
}
}
// Usage in code
if (await featureFlags.isEnabled('new-checkout-flow', { userId })) {
return renderNewCheckout();
} else {
return renderLegacyCheckout();
}
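The percentage branch depends on a stable per-user hash, so each user lands in a fixed 0-99 bucket and stays consistently in or out of a rollout. A standalone sketch, using FNV-1a purely as an illustrative hash (not the one any particular flag service uses):

```javascript
// Map a user id to a stable bucket in [0, 100).
function bucket(userId) {
  let hash = 2166136261;                  // FNV-1a 32-bit offset basis
  for (const ch of userId) {
    hash ^= ch.charCodeAt(0);
    hash = Math.imul(hash, 16777619);     // FNV-1a 32-bit prime
  }
  return (hash >>> 0) % 100;
}

function inRollout(userId, percentage) {
  return bucket(userId) < percentage;     // raising percentage only adds users
}
```

Because the bucket is fixed per user, widening a rollout from 20% to 50% keeps every user who was already in it, which is what makes gradual rollouts safe to ramp.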
- ❌ Don't: Deploy agents one by one. ✅ Do: Deploy all related agents concurrently.
- ❌ Don't: Lose context between sessions. ✅ Do: Use the memory system for persistence.
- ❌ Don't: Refactor the entire system at once. ✅ Do: Refactor incrementally with tests.
- ❌ Don't: Create direct dependencies between services. ✅ Do: Use events and interfaces.
- ❌ Don't: Assume the happy path only. ✅ Do: Handle errors at every level.
- ❌ Don't: Optimize without metrics. ✅ Do: Measure, then optimize bottlenecks.
- ❌ Don't: Add security after development. ✅ Do: Build security in from the start.
# Complete e-commerce platform with Claude Flow
npx claude-flow swarm init ecommerce \
--topology hierarchical \
--max-agents 15
# Deploy specialized teams
npx claude-flow task orchestrate \
--task "Build e-commerce platform" \
--teams "frontend:3,backend:4,database:2,testing:3,devops:2,security:1"
# Implement with patterns
npx claude-flow sparc pipeline \
--task "Implement checkout flow" \
--patterns "repository,unit-of-work,event-sourcing,cqrs"
# Migrate monolith to microservices
npx claude-flow agent spawn migration-planner \
--task "Plan monolith decomposition" \
--strategy "strangler-fig"
# Execute migration with safety
npx claude-flow workflow execute \
--template "safe-migration" \
--checkpoints "after-each-service" \
--rollback "automatic"
# Build real-time analytics
npx claude-flow swarm init analytics \
--topology mesh \
--agents "data-engineer:3,ml-developer:2,backend-dev:3,performance-benchmarker:1"
# Implement with performance patterns
npx claude-flow task orchestrate \
--task "Build real-time dashboard" \
--patterns "caching,streaming,event-driven"
- Always Think Concurrent: Deploy agents and operations in parallel
- Memory-First: Store important decisions and context
- Test-Driven: Write tests before implementation
- Pattern-Based: Use established patterns, avoid anti-patterns
- Security-Aware: Build security in from the start
- Performance-Conscious: Measure and optimize based on data
- Error-Resilient: Plan for failures at every level
- Documentation-Rich: Document decisions and patterns
- Review SPARC Methodology for structured development
- Explore API Reference for detailed commands
- Check Troubleshooting for common issues