Inter-Service Communication
1. Communication Architecture Overview
Algesta's microservices communicate using an event-driven architecture with message-based communication patterns. All external client requests are routed through the API Gateway, which serves as the single entry point to the system.
Communication Strategy:
- Pattern: Event-driven architecture with MessagePattern decorators
- Transport Layer: Redis (development) and Apache Kafka (production)
- Protocol: NestJS microservices with @nestjs/microservices
- Message Format: JSON payloads with metadata and correlation IDs
- Communication Style: Asynchronous (event-driven) and synchronous (request-response)
Design Principles:
- Loose Coupling: Services communicate through messages, not direct calls
- Asynchronous Processing: Non-blocking message handling
- Reliability: Message persistence and retry mechanisms
- Scalability: Horizontal scaling of message consumers
- Observability: Correlation IDs for distributed tracing
1.1. API Gateway as Communication Hub
The API Gateway serves as the primary communication hub between external clients and backend microservices. For detailed information on the gateway architecture, see API Gateway Architecture.
Gateway-to-Microservice Communication
- Transport Layer: Configurable via the `MESSAGING_TYPE` environment variable
  - Redis: Request-response pattern using Redis pub/sub
  - Kafka: Event-driven communication with consumer groups
- Client Proxy: NestJS ClientProxy for sending messages to microservices
- Message Patterns: String-based patterns for routing (e.g., `auth.login`, `orders.create`)
- Timeout Handling: Configurable timeouts with fallback mechanisms
- Circuit Breaker: Prevents cascading failures when microservices are unavailable
Configuration
The gateway registers client connections for each microservice in app.module.ts:
- MS_AUTH: Authentication and user management
- MS_ORDERS: Order lifecycle and management
- MS_MESSAGES: Messaging and notifications
- MS_PROVIDER: Provider management and auctions
- MS_NOTIFICATIONS: Notification delivery
Each microservice connection uses the same transport configuration (Redis or Kafka) defined in config.transport.ts of the API Gateway.
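As an illustration, the registration might look roughly like the sketch below, assuming the shared REDIS and KAFKA options exported from config.transport.ts and a `MESSAGING_TYPE` switch; module layout and import paths are illustrative, not copied from the codebase:

```typescript
import { Module } from '@nestjs/common';
import { ClientsModule } from '@nestjs/microservices';
import { KAFKA, REDIS } from './shared/config/config.transport';

// Select the transport once; every registered client reuses the same options.
const transport = process.env.MESSAGING_TYPE === 'kafka' ? KAFKA : REDIS;

@Module({
  imports: [
    ClientsModule.register([
      { name: 'MS_AUTH', ...transport },          // Authentication and user management
      { name: 'MS_ORDERS', ...transport },        // Order lifecycle and management
      { name: 'MS_MESSAGES', ...transport },      // Messaging and notifications
      { name: 'MS_PROVIDER', ...transport },      // Provider management and auctions
      { name: 'MS_NOTIFICATIONS', ...transport }, // Notification delivery
    ]),
  ],
})
export class AppModule {}
```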
Request Flow
sequenceDiagram
participant Client
participant Gateway as API Gateway
participant Guard as JwtAuthGuard
participant Controller
participant Handler as Command/Query Handler
participant MS as Microservice
Client->>Gateway: HTTP Request + JWT Token
Gateway->>Guard: Validate Token
Guard->>MS: Validate with MS_AUTH
MS-->>Guard: Token Valid + User Data
Guard-->>Gateway: Attach User to Request
Gateway->>Controller: Route to Controller
Controller->>Handler: Execute Command/Query
Handler->>MS: Send Message (Redis/Kafka)
MS-->>Handler: Response
Handler-->>Controller: Return Result
Controller-->>Gateway: Format Response
Gateway-->>Client: HTTP Response + TraceId
Gateway Message Patterns Used:
- `auth.login` → MS_AUTH: User authentication
- `auth.validateToken` → MS_AUTH: Token validation
- `orders.create-order` → MS_ORDERS: Create new order
- `orders.getAll` → MS_ORDERS: List all orders
- `notification.send-email` → MS_NOTIFICATIONS: Send notification email
- `auction.publish` → MS_PROVIDER: Publish order to auction
- `provider.get-by-id` → MS_PROVIDER: Get provider details
For detailed information on gateway routing, authentication, and resilience patterns, see:
- API Gateway Architecture
- API Gateway Authentication
- API Gateway Resilience Patterns
- API Gateway API Reference
2. Message Pattern Convention
Algesta uses a consistent naming convention for message patterns to ensure clarity and discoverability.
Pattern Naming Format
Format: `service.action` or `service.entity.action`
Components:
- service: Target microservice name (orders, notification, provider, auction, users, documents, services)
- entity: Optional entity name for specific resource operations
- action: Operation to perform (create, get, update, delete, list, etc.)
Pattern Examples
Orders Microservice:
- `orders.create-order`: Create a new order
- `orders.get-by-id`: Get order by ID
- `orders.service.getAll`: List all orders
- `orders.update-info`: Update order information
- `orders.publish-order`: Publish order to auction
Notifications Microservice:
- `notification.create-notification`: Create notification
- `notification.send-order-reminder`: Send order reminder email
- `quotation.approved`: Send quotation approval notification
Provider Microservice:
- `provider.create-provider`: Create provider account
- `provider.get-by-identification`: Get provider by ID
- `provider.initial-inspection`: Submit initial inspection
- `auction.publish`: Publish auction
- `auction.create-offer`: Create auction bid
- `auction.assign-provider`: Assign provider to order
- `users.list-providers`: List all providers
- `documents.create`: Create document
Reference: algesta-ms-orders-nestjs/src/shared/config, algesta-ms-notifications-nestjs/src/shared/config, algesta-ms-provider-nestjs/src/shared/config
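On the receiving side, each microservice binds these patterns to handlers with the @MessagePattern decorator. A minimal sketch, assuming a CQRS-style command bus; the controller, command, and payload types are illustrative, not taken from the codebase:

```typescript
import { Controller } from '@nestjs/common';
import { MessagePattern, Payload } from '@nestjs/microservices';
import { CommandBus } from '@nestjs/cqrs';

// Illustrative command; the real commands live in each service's application layer.
class CreateOrderCommand {
  constructor(public readonly data: Record<string, unknown>) {}
}

@Controller()
export class OrderMessageController {
  constructor(private readonly commandBus: CommandBus) {}

  // Handles the `orders.create-order` pattern sent by the API Gateway.
  // The payload follows the MessageRequest envelope described in Section 3.
  @MessagePattern('orders.create-order')
  async createOrder(@Payload() message: { data: Record<string, unknown> }) {
    const order = await this.commandBus.execute(new CreateOrderCommand(message.data));
    return { success: true, message: 'Order created successfully', data: order };
  }
}
```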
3. Message Structure
All messages follow a standard structure for consistency and traceability.
Request Payload
```typescript
interface MessageRequest<T = any> {
  messageId: string;        // Unique message ID for idempotency (UUID v4)
  timestamp: string;        // ISO 8601 timestamp of message creation
  data: T;                  // Actual payload (DTO)
  metadata?: {
    ipAddress?: string;     // Client IP address
    userAgent?: string;     // Client user agent
    requestedBy?: string;   // User who initiated the request
    correlationId?: string; // Correlation ID for distributed tracing
  };
  user?: {                  // Authenticated user information
    userId: string;
    email: string;
    role: string;           // ADMIN, AGENT, PROVIDER, CLIENT, TECHNICIAN
    identification?: string;
    name?: string;
  };
}
```
Example Request:
```json
{
  "messageId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "timestamp": "2025-11-19T10:30:00.000Z",
  "data": {
    "orderId": "ORDER-12345",
    "service": "Air Conditioning Repair",
    "address": "Calle 123 #45-67, Bogotá"
  },
  "metadata": {
    "ipAddress": "192.168.1.100",
    "userAgent": "Mozilla/5.0...",
    "requestedBy": "admin@algesta.com",
    "correlationId": "trace-abc123"
  },
  "user": {
    "userId": "user-001",
    "email": "admin@algesta.com",
    "role": "ADMIN",
    "name": "Admin User"
  }
}
```
Response Payload
```typescript
interface MessageResponse<T = any> {
  success: boolean;          // Operation success status
  message: string;           // Human-readable message
  data?: T;                  // Response data (optional)
  errors?: string[];         // Error messages if any (optional)
  metadata?: {
    processedAt?: string;    // ISO 8601 timestamp of processing
    processingTime?: number; // Processing time in milliseconds
  };
}
```
Example Success Response:
```json
{
  "success": true,
  "message": "Order created successfully",
  "data": {
    "orderId": "ORDER-12345",
    "status": "NEW",
    "createdAt": "2025-11-19T10:30:05.123Z"
  },
  "metadata": {
    "processedAt": "2025-11-19T10:30:05.456Z",
    "processingTime": 333
  }
}
```
Example Error Response:
```json
{
  "success": false,
  "message": "Failed to create order",
  "errors": ["Invalid email address", "Phone number is required"],
  "metadata": {
    "processedAt": "2025-11-19T10:30:01.123Z",
    "processingTime": 10
  }
}
```
4. Transport Configuration
Algesta uses different message transport layers for development and production.
Redis Configuration (Development)
Purpose: Lightweight message broker for local development
Configuration (as implemented in config.transport.ts):
```typescript
export const REDIS: RedisOptions = {
  transport: Transport.REDIS,
  options: {
    host: process.env.REDIS_HOST || "localhost",
    port: process.env.REDIS_PORT ? parseInt(process.env.REDIS_PORT, 10) : 6379,
    password: process.env.REDIS_PASSWORD || undefined,
  },
};
```
Features:
- Pub/Sub: Publish-subscribe messaging pattern
- Channels: Topic-based message routing
- Connection Pooling: Reuse connections for performance
- Optional Authentication: Password-based auth via the `REDIS_PASSWORD` environment variable
Redis Channels:
- One channel per message pattern (e.g., `orders.create-order`, `notification.send-order-reminder`)
Reference: algesta-ms-orders-nestjs/src/shared/config/config.transport.ts (similar in other microservices)
Kafka Configuration (Production)
Purpose: Distributed message broker for production scalability
Configuration (as implemented in config.transport.ts):
```typescript
export const KAFKA: KafkaOptions = {
  transport: Transport.KAFKA,
  options: {
    client: {
      clientId: process.env.KAFKA_CLIENT_ID || "ms-documents",
      brokers: (process.env.KAFKA_BROKERS || "localhost:9092").split(","),
    },
    consumer: {
      groupId: process.env.KAFKA_GROUP_ID || "ms-documents-consumer",
    },
    run: {
      autoCommit: true,
    },
  },
};
```
Configuration Notes:
- Client ID: Configurable via the `KAFKA_CLIENT_ID` environment variable (defaults to a service-specific value)
- Brokers: Comma-separated list via the `KAFKA_BROKERS` environment variable (defaults to `localhost:9092`)
- Consumer Group: Configurable via the `KAFKA_GROUP_ID` environment variable (defaults to a service-specific consumer group)
- Auto Commit: Enabled (`autoCommit: true`) for automatic offset management
Environment Variables:
- `KAFKA_CLIENT_ID`: Unique client identifier for each microservice
- `KAFKA_BROKERS`: Comma-separated Kafka broker addresses (e.g., `broker1:9092,broker2:9092`)
- `KAFKA_GROUP_ID`: Consumer group ID for load balancing
Features:
- Topics: Configured per service (not explicitly defined in transport config; determined by MessagePattern routing)
- Consumer Groups: Load balancing across service instances via consumer group ID
- Auto Commit: Automatic offset management for message acknowledgment
Reference:
- algesta-ms-orders-nestjs/src/shared/config/config.transport.ts
- algesta-ms-notifications-nestjs/src/shared/config/config.transport.ts
- algesta-ms-provider-nestjs/src/shared/config/config.transport.ts
5. Service Communication Matrix
Detailed mapping of all inter-service communications in the Algesta platform.
Orders ↔ Notifications
| Source | Target | Pattern | Purpose | Trigger | Frequency |
|---|---|---|---|---|---|
| Orders MS | Notifications MS | notification.send-order-reminder | Send order reminder to client | Order created or updated | On-demand |
| Orders MS | Notifications MS | quotation.approved | Notify quote approval | Quote approved by client | Event-driven |
| Orders MS | Notifications MS | notification.create-notification | Create notification record | Various order events | Event-driven |
Orders ↔ Provider
| Source | Target | Pattern | Purpose | Trigger | Frequency |
|---|---|---|---|---|---|
| Orders MS | Provider MS | auction.publish | Publish order to auction | Order ready for auction | On-demand |
| Provider MS | Orders MS | orders.update-info | Update order status | Provider assigned | Event-driven |
| Provider MS | Orders MS | orders.get-by-id | Get order details | Provider needs order info | On-demand |
Provider ↔ Notifications
| Source | Target | Pattern | Purpose | Trigger | Frequency |
|---|---|---|---|---|---|
| Provider MS | Notifications MS | notification.create-notification | Create notification | Auction events | Event-driven |
| Provider MS | Notifications MS | notification.send-order-reminder | Send reminder to provider | Order assignment | Event-driven |
API Gateway ↔ All Services
| Source | Target | Pattern | Purpose | Trigger | Frequency |
|---|---|---|---|---|---|
| API Gateway | Orders MS | orders.create-order | Create new order | Client request | On-demand |
| API Gateway | Orders MS | orders.service.getAll | List orders | Client/admin request | On-demand |
| API Gateway | Provider MS | provider.create-provider | Register provider | Provider signup | On-demand |
| API Gateway | Provider MS | auction.list | List auctions | Admin request | On-demand |
| API Gateway | Notifications MS | notification.find-all-notifications | List notifications | User request | On-demand |
6. Communication Patterns
Algesta implements multiple communication patterns for different use cases.
Request-Response Pattern (Synchronous)
Use Case: Operations requiring immediate response
Characteristics:
- Client waits for response
- Timeout handling (2-10 seconds)
- Used for CRUD operations and queries
Example:
```typescript
// API Gateway sends request and waits for response
@Get('/orders/:id')
async getOrder(@Param('id') orderId: string) {
  const response = await this.ordersClient
    .send('orders.get-by-id', { orderId })
    .toPromise();
  return response;
}
```
Flow Diagram:
sequenceDiagram
participant Client
participant Gateway as API Gateway
participant Orders as Orders MS
Client->>Gateway: GET /orders/123
Gateway->>Orders: orders.get-by-id (MessagePattern)
Orders->>Orders: Query database
Orders-->>Gateway: Order data
Gateway-->>Client: HTTP 200 + Order JSON
Event-Driven Pattern (Asynchronous)
Use Case: Fire-and-forget operations, notifications, audit logs
Characteristics:
- No waiting for response
- Used for notifications, logging, background tasks
- Eventual consistency
Example:
```typescript
// Orders MS publishes event
this.kafkaClient.emit('order.created', {
  orderId: order.orderId,
  service: order.service,
  client: order.emailContact,
});

// Notifications MS subscribes to event
@EventPattern('order.created')
async handleOrderCreated(@Payload() data: OrderCreatedEvent) {
  await this.sendWelcomeEmail(data);
}
```
Flow Diagram:
sequenceDiagram
participant Orders as Orders MS
participant Kafka
participant Notifications as Notifications MS
participant SendGrid
Orders->>Kafka: Publish order.created event
Note over Orders: Does not wait for response
Kafka->>Notifications: Deliver event
Notifications->>SendGrid: Send email
SendGrid-->>Notifications: Email sent
Note over Notifications: Event processed independently
Saga Pattern (Distributed Transactions)
Use Case: Coordinated multi-service operations with compensation
Characteristics:
- Orchestrates multiple services
- Compensating transactions on failure
- Used for complex workflows (order → auction → assignment)
Example: Order Publication to Auction
Flow Diagram:
sequenceDiagram
participant Admin
participant Orders as Orders MS
participant Provider as Provider MS
participant Notifications as Notifications MS
participant Kafka
Admin->>Orders: Publish order to auction
Orders->>Orders: Validate order
Orders->>Provider: auction.publish
Provider->>Provider: Create auction record
Provider-->>Orders: Auction created
Orders->>Orders: Update order status (AUCTION)
Orders->>Kafka: Publish order.auction.published
Kafka->>Notifications: order.auction.published
Notifications->>Notifications: Notify eligible providers
Note over Orders,Provider: If auction creation fails
alt Auction creation fails
Provider-->>Orders: Error response
Orders->>Orders: Rollback order status
Orders->>Notifications: Send failure notification
end
Saga Steps:
- Orders MS: Update order status to AUCTION
- Provider MS: Create auction record
- Orders MS: Confirm auction creation
- Notifications MS: Notify providers
- Compensation: If step fails, rollback previous steps
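A simplified orchestration sketch of these steps with the compensating rollback, assuming injected ClientProxy instances and a hypothetical updateOrderStatus helper; the real flow lives in the Orders MS command handlers:

```typescript
import { ClientProxy } from '@nestjs/microservices';
import { firstValueFrom } from 'rxjs';

async function publishOrderToAuctionSaga(
  providerClient: ClientProxy,
  notificationsClient: ClientProxy,
  orderId: string,
  updateOrderStatus: (id: string, status: string) => Promise<void>,
) {
  // Step 1: Orders MS moves the order into the AUCTION state
  await updateOrderStatus(orderId, 'AUCTION');
  try {
    // Steps 2-3: Provider MS creates the auction record and confirms
    const result = await firstValueFrom(providerClient.send('auction.publish', { orderId }));
    if (!result.success) throw new Error(result.message);

    // Step 4: notify eligible providers (fire-and-forget event)
    notificationsClient.emit('order.auction.published', { orderId });
    return result;
  } catch (error) {
    // Compensation: roll the order back and report the failure
    await updateOrderStatus(orderId, 'NEW'); // previous status; illustrative
    notificationsClient.emit('notification.create-notification', {
      orderId,
      message: 'Auction publication failed',
    });
    throw error;
  }
}
```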
7. Message Flow Diagrams
Complete Order Creation Flow
sequenceDiagram
participant Client
participant Gateway as API Gateway
participant Orders as Orders MS
participant MongoDB as Orders DB
participant Kafka
participant Notifications as Notifications MS
participant SendGrid
Client->>Gateway: POST /orders (REST)
Gateway->>Orders: orders.create-order (MessagePattern)
activate Orders
Orders->>Orders: Validate CreateOrderDto
Orders->>Orders: Execute CreateOrderCommand
Orders->>MongoDB: db.orders.insertOne()
MongoDB-->>Orders: Order document created
Orders->>MongoDB: db.order-history.insertOne()
Orders->>Kafka: Publish order.created event
Orders-->>Gateway: CreateOrderResponseDto
deactivate Orders
Gateway-->>Client: HTTP 201 + Order JSON
Kafka->>Notifications: order.created event
activate Notifications
Notifications->>Notifications: Load email template
Notifications->>Notifications: Render with order data
Notifications->>SendGrid: Send welcome email
SendGrid-->>Notifications: Email sent (202)
Notifications->>MongoDB: Save notification record
deactivate Notifications
Auction Publication and Bidding Flow
sequenceDiagram
participant Admin
participant Orders as Orders MS
participant Provider as Provider MS
participant ProviderDB as Provider DB
participant Kafka
participant Notifications as Notifications MS
participant ProviderUser as Provider
Admin->>Orders: Publish order to auction
Orders->>Provider: auction.publish (MessagePattern)
activate Provider
Provider->>Provider: Validate auction eligibility
Provider->>ProviderDB: db.auction.insertOne()
ProviderDB-->>Provider: Auction created
Provider->>Kafka: Publish auction.created event
Provider-->>Orders: {success: true, auctionId}
deactivate Provider
Orders->>Orders: Update order.auction = true
Kafka->>Notifications: auction.created event
Notifications->>Notifications: Get eligible providers
Notifications->>SendGrid: Send auction notifications
SendGrid->>ProviderUser: Email: New auction available
ProviderUser->>Provider: auction.create-offer (MessagePattern)
activate Provider
Provider->>Provider: Validate provider eligibility
Provider->>ProviderDB: db.auction.updateOne() - Add bid
ProviderDB-->>Provider: Bid added
Provider->>Kafka: Publish auction.bid.created event
Provider-->>ProviderUser: {success: true, bidId}
deactivate Provider
Note over Admin,Provider: Auction closes
Admin->>Provider: auction.assign-provider
activate Provider
Provider->>ProviderDB: Update bid (isAssigned=true)
Provider->>Orders: orders.update-info (ASSIGNED)
Provider->>Kafka: Publish provider.assigned event
Provider-->>Admin: {success: true}
deactivate Provider
Kafka->>Notifications: provider.assigned event
Notifications->>SendGrid: Notify client and provider
Provider Workflow Flow
sequenceDiagram
participant Provider
participant ProviderMS as Provider MS
participant OrdersMS as Orders MS
participant Notifications as Notifications MS
participant Client
Note over Provider,Client: Phase 1: Initial Inspection
Provider->>ProviderMS: provider.initial-inspection
ProviderMS->>ProviderMS: Save inspection data
ProviderMS->>OrdersMS: orders.update-info (INSPECTION_COMPLETED)
ProviderMS->>Notifications: Create notification
Notifications->>Client: Email: Inspection completed
Note over Provider,Client: Phase 2: Work Start
Provider->>ProviderMS: confirm-work-start
ProviderMS->>ProviderMS: Update work status
ProviderMS->>OrdersMS: orders.update-info (WORK_STARTED)
Note over Provider,Client: Phase 3: Work Completion
Provider->>ProviderMS: work-completion-report
ProviderMS->>ProviderMS: Save completion report
ProviderMS->>OrdersMS: orders.update-info (WORK_COMPLETED)
ProviderMS->>Notifications: Create notification
Notifications->>Client: Email: Work completed
Note over Provider,Client: Phase 4: Final Delivery
Provider->>ProviderMS: provider.final-delivery
ProviderMS->>ProviderMS: Save delivery data
ProviderMS->>OrdersMS: orders.update-info (PENDING_DELIVERY)
ProviderMS->>Notifications: Create notification
Notifications->>Client: Email: Approval required
Client->>OrdersMS: Approve final delivery
OrdersMS->>OrdersMS: Update order (COMPLETED)
OrdersMS->>Notifications: Create notification
Notifications->>Provider: Email: Order completed
8. Error Handling and Retry Logic
Comprehensive error handling ensures message delivery reliability.
Retry Strategy
Implementation Status: ⚠️ Partially Implemented - The current transport configuration files do not explicitly define retryAttempts or retryDelay parameters. Retry behavior depends on the default settings of the underlying NestJS microservices transport library (Redis/Kafka).
Current Configuration (as seen in actual transport config files):
- Redis: No explicit retry configuration; relies on NestJS Redis transport defaults
- Kafka: Uses `autoCommit: true` for automatic offset management; no explicit retry configuration
Retry Behavior:
- Retry logic depends on the NestJS microservices transport library defaults
- For Redis: NestJS may apply default retry behavior based on the `redis` or `ioredis` client
- For Kafka: Message acknowledgment is automatic (`autoCommit: true`); failed messages may not be retried unless application-level retry logic is implemented
Recommended Enhancement: Add explicit retry configuration to transport options:
```typescript
options: {
  // Redis
  retryAttempts: 5,
  retryDelay: 1000,

  // Or Kafka
  retry: {
    retries: 5,
    initialRetryTime: 100,
    multiplier: 2,
  },
}
```
Reference:
- Current config: algesta-ms-orders-nestjs/src/shared/config/config.transport.ts
- NestJS microservices docs: https://docs.nestjs.com/microservices/redis and https://docs.nestjs.com/microservices/kafka
Error Types
1. Transient Errors (Retry):
- Network timeout
- Service temporarily unavailable
- Database connection timeout
- Rate limit exceeded
Action: Retry with exponential backoff (see the sketch after this list)
2. Permanent Errors (No Retry):
- Validation errors (invalid DTO)
- Business logic errors (order not found)
- Authentication errors (invalid token)
- Authorization errors (insufficient permissions)
Action: Return error to client, log, no retry
3. Timeout Errors:
- Message processing timeout (default: 30s)
- Database query timeout
- External API timeout
Action: Retry with circuit breaker
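A minimal sketch of application-level retry with exponential backoff for the transient errors listed above; helper and predicate names are illustrative, and transport-level retry options are covered in the Retry Strategy above:

```typescript
// Retry an async operation with exponential backoff for transient errors.
// isTransient decides whether an error is worth retrying (network timeouts,
// temporary unavailability, etc.); permanent errors are rethrown immediately.
async function retryWithBackoff<T>(
  operation: () => Promise<T>,
  isTransient: (error: unknown) => boolean,
  maxAttempts = 5,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (!isTransient(error) || attempt >= maxAttempts) throw error;
      const delay = baseDelayMs * 2 ** (attempt - 1); // 1s, 2s, 4s, 8s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```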
Circuit Breaker Pattern
Implementation Status: ✅ Implemented - Circuit breaker functionality is available in all microservices via the HealthModule and can be monitored at GET /health/circuit-breaker.
States:
- Closed (Normal): All requests processed
- Open (Failing): Stop sending requests after threshold
- Half-Open (Testing): Send single test request
Configuration (as implemented in HealthService):
- Failure Threshold: 5 consecutive failures trigger open state
- Recovery Timeout: 60 seconds (60000ms) in open state before transitioning to half-open
- Monitoring Period: 5 minutes (300000ms) for failure tracking
Monitoring: Circuit breaker status can be checked via:
GET /health/circuit-breaker
Example Response:
```json
{
  "service": "ms-orders",
  "circuitBreaker": {
    "state": "CLOSED",
    "failureCount": 0,
    "lastFailureTime": null,
    "nextAttemptTime": null,
    "options": {
      "failureThreshold": 5,
      "recoveryTimeout": 60000,
      "monitoringPeriod": 300000
    }
  },
  "timestamp": "2025-11-19T10:30:00.000Z"
}
```
Reference: algesta-ms-orders-nestjs/src/shared/health/health.service.ts and similar files in other microservices.
Dead Letter Queue (DLQ)
Implementation Status: ⚠️ Recommended Pattern - DLQ topics are not currently configured in the codebase. This is a recommended future enhancement for production deployments.
Recommended DLQ Configuration (for future implementation):
DLQ Topics:
- `algesta-orders-dlq`: Failed Orders MS messages
- `algesta-notifications-dlq`: Failed Notifications MS messages
- `algesta-provider-dlq`: Failed Provider MS messages
Recommended DLQ Processing:
- Message fails after configured max retries
- Move to DLQ topic
- Log error details
- Alert operations team
- Manual review and reprocessing
Recommended Monitoring:
- Alert when DLQ size > 10 messages
- Daily DLQ review by operations team
- Weekly DLQ cleanup of resolved messages
Current Behavior: Failed messages are logged and retried according to the transport's default retry behavior (no explicit retryAttempts or retryDelay is configured; see the Retry Strategy above). After retries are exhausted, the error is logged but no DLQ is used.
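If a DLQ is adopted, the handoff after exhausted retries could look roughly like this sketch; the ClientProxy injection and topic name are assumptions based on the recommended topics above:

```typescript
import { ClientProxy } from '@nestjs/microservices';

// Hypothetical handoff of a permanently failed message to a dead letter topic.
async function sendToDeadLetterQueue(
  dlqClient: ClientProxy,
  originalPattern: string,
  message: unknown,
  error: Error,
) {
  // Emit is fire-and-forget: the DLQ entry carries enough context
  // for the operations team to review and reprocess manually.
  dlqClient.emit('algesta-orders-dlq', {
    originalPattern,
    message,
    error: error.message,
    failedAt: new Date().toISOString(),
  });
}
```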
9. Message Idempotency
Implementation Status: ⚠️ Recommended Pattern - Idempotency tracking via a dedicated database store is not currently implemented in the codebase. This is a recommended enhancement for production deployments to ensure exactly-once message processing.
Current Behavior: Messages include a messageId field in the request payload structure, but there is no centralized idempotency store to prevent duplicate processing. Application-level duplicate detection may be implemented in individual handlers.
Recommended Idempotency Implementation
Idempotency Key:
- Each message includes a unique `messageId` (UUID v4)
- The service should store the `messageId` in the database before processing
- Check the `messageId` before processing; if it exists, return the cached response
Recommended Idempotency Store:
```typescript
interface IdempotencyRecord {
  messageId: string;
  processedAt: Date;
  response: any;
  ttl: Date; // 24 hours from processedAt
}
```
Recommended Processing Flow:
```typescript
async processMessage(message: MessageRequest): Promise<MessageResponse> {
  // Check if already processed
  const existing = await this.idempotencyRepo.findByMessageId(message.messageId);
  if (existing) {
    this.logger.debug(`Message ${message.messageId} already processed`);
    return existing.response;
  }

  // Process message
  const response = await this.handleMessage(message);

  // Store idempotency record
  await this.idempotencyRepo.create({
    messageId: message.messageId,
    processedAt: new Date(),
    response,
    ttl: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
  });

  return response;
}
```
Recommended TTL (Time To Live):
- Idempotency records should expire after 24 hours
- MongoDB TTL index can automatically delete expired records
- Reduces storage requirements
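With Mongoose, the 24-hour expiry can be expressed as a TTL index on the idempotency collection. A sketch assuming the IdempotencyRecord fields above (not existing code):

```typescript
import { Schema } from 'mongoose';

const IdempotencySchema = new Schema({
  messageId: { type: String, required: true, unique: true },
  processedAt: { type: Date, required: true },
  response: { type: Schema.Types.Mixed },
});

// MongoDB removes documents automatically 24 hours after processedAt
IdempotencySchema.index({ processedAt: 1 }, { expireAfterSeconds: 24 * 60 * 60 });
```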
10. Monitoring and Observability
Comprehensive monitoring ensures system health and quick issue resolution.
Metrics to Track
Message Throughput:
- Messages published per second (by pattern)
- Messages consumed per second (by pattern)
- Peak throughput times
Message Latency:
- p50 (median): 50th percentile latency
- p95: 95th percentile latency
- p99: 99th percentile latency
- Max latency
Error Metrics:
- Error rate (errors / total messages)
- Error rate by type (transient, permanent, timeout)
- DLQ message count
- Retry count
Infrastructure Metrics:
- Kafka lag (messages waiting in queue)
- Redis memory usage
- Connection pool utilization
- Circuit breaker state changes
Logging
Structured Logging (Winston):
```typescript
logger.info("Message processed", {
  messageId: message.messageId,
  pattern: "orders.create-order",
  processingTime: 125,
  correlationId: message.metadata?.correlationId,
  userId: message.user?.userId,
});
```
Log Levels:
- error: Errors requiring immediate attention
- warn: Warnings (retry attempts, deprecation)
- info: Normal operations (message processed)
- debug: Detailed debugging information
Correlation IDs:
- Unique ID per request flow
- Trace requests across multiple services
- Include in all logs and messages
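A minimal sketch of carrying the correlation ID from message metadata into every log line via a Winston child logger; the logger configuration is illustrative:

```typescript
import { createLogger, format, transports } from 'winston';

const baseLogger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console()],
});

// Create a per-message logger so every log line carries the same correlationId
function loggerFor(message: { metadata?: { correlationId?: string } }) {
  return baseLogger.child({ correlationId: message.metadata?.correlationId });
}

// Usage inside a handler:
// const log = loggerFor(message);
// log.info('Message processed', { pattern: 'orders.create-order' });
```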
Tracing
Distributed Tracing:
- Trace message flow across services
- Identify bottlenecks and latency issues
- Visualize service dependencies
Trace Attributes:
- Service name
- Message pattern
- Processing time
- Error details (if any)
- User information
11. Security Considerations
Security measures protect inter-service communication.
Authentication
JWT Tokens:
- Tokens issued by API Gateway
- Token validation in each microservice
- User context passed in message metadata
Token Structure:
```typescript
{
  userId: string;
  email: string;
  role: string;
  iat: number; // Issued at
  exp: number; // Expiration
}
```
Authorization
Role-Based Access Control (RBAC):
- Roles: ADMIN, AGENT, PROVIDER, CLIENT, TECHNICIAN
- Guards for endpoint protection
- Role checks in command/query handlers
Authorization Check:
```typescript
@MessagePattern('orders.delete-order')
async deleteOrder(@Payload() data: any, @Ctx() context: any) {
  const user = context.user;
  if (user.role !== 'ADMIN') {
    throw new ForbiddenException('Only admins can delete orders');
  }
  // Process deletion
}
```
Encryption
Transport Encryption:
- TLS for Kafka connections in production
- Redis AUTH password authentication
- MongoDB connection string encryption
Message Encryption (Sensitive Data):
- Encrypt sensitive fields (passwords, payment info)
- Use AES-256 encryption
- Store encryption keys in secure vault
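A minimal sketch of field-level encryption with AES-256-GCM using Node's built-in crypto module; reading the key from an environment variable here is for illustration only, with real keys expected to come from a secure vault:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

const KEY = Buffer.from(process.env.FIELD_ENCRYPTION_KEY ?? '', 'hex'); // 32-byte key

export function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit nonce recommended for GCM
  const cipher = createCipheriv('aes-256-gcm', KEY, iv);
  const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Store iv, auth tag, and ciphertext together
  return [iv, cipher.getAuthTag(), encrypted].map((b) => b.toString('base64')).join('.');
}

export function decryptField(payload: string): string {
  const [iv, tag, data] = payload.split('.').map((part) => Buffer.from(part, 'base64'));
  const decipher = createDecipheriv('aes-256-gcm', KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8');
}
```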
Message Validation
DTO Validation:
- class-validator for input validation
- class-transformer for object transformation
- Validate all incoming messages
Validation Example:
```typescript
export class CreateOrderDto {
  @IsString()
  @IsNotEmpty()
  service: string;

  @IsEmail()
  @IsNotEmpty()
  emailContact: string;

  @IsPhoneNumber("CO")
  phoneNumber: string;
}
```
Rate Limiting
Message Rate Limits:
- Prevent message flooding
- Per-service limits (e.g., 1000 messages/minute)
- Per-user limits (e.g., 100 messages/minute)
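A simple in-memory sliding-window sketch of the per-user limit (illustrative only; a production setup would typically back this with Redis so limits hold across service instances):

```typescript
// Track message timestamps per user and reject when the per-minute limit is exceeded.
const WINDOW_MS = 60_000;
const PER_USER_LIMIT = 100;
const windows = new Map<string, number[]>();

export function allowMessage(userId: string): boolean {
  const now = Date.now();
  const recent = (windows.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= PER_USER_LIMIT) {
    windows.set(userId, recent);
    return false; // over the limit: reject or queue the message
  }
  recent.push(now);
  windows.set(userId, recent);
  return true;
}
```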
12. Performance Optimization
Strategies to optimize inter-service communication performance.
Batching
Message Batching:
- Group multiple messages for efficiency
- Reduce network overhead
- Example: Batch notification sends
Batch Configuration:
```typescript
{
  batchSize: 100,
  batchTimeout: 5000, // 5 seconds
}
```
Compression
Message Compression:
- Compress large payloads (> 1 KB)
- Use gzip or snappy compression
- Reduce network bandwidth
Compression Threshold:
- Small messages (< 1 KB): No compression
- Medium messages (1-10 KB): Optional compression
- Large messages (> 10 KB): Always compress
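A sketch of threshold-based compression with Node's zlib, mirroring the 10 KB guideline above; the envelope shape and field names are illustrative:

```typescript
import { gzipSync, gunzipSync } from 'zlib';

const COMPRESSION_THRESHOLD = 10 * 1024; // always compress payloads above 10 KB

export function encodePayload(payload: unknown): { compressed: boolean; body: string } {
  const json = JSON.stringify(payload);
  if (Buffer.byteLength(json, 'utf8') < COMPRESSION_THRESHOLD) {
    return { compressed: false, body: json };
  }
  return { compressed: true, body: gzipSync(json).toString('base64') };
}

export function decodePayload(message: { compressed: boolean; body: string }): unknown {
  const json = message.compressed
    ? gunzipSync(Buffer.from(message.body, 'base64')).toString('utf8')
    : message.body;
  return JSON.parse(json);
}
```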
Caching
Response Caching:
- Cache frequently accessed data
- Redis cache for hot data
- TTL-based expiration
Cache Strategy:
- Read-through cache for queries
- Write-through cache for updates
- Cache invalidation on changes
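A read-through cache sketch with Redis and TTL-based expiration; the ioredis client, key naming, and loader function are assumptions for illustration:

```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
const CACHE_TTL_SECONDS = 300; // expire hot data after 5 minutes

// Read-through: return the cached value if present, otherwise load and cache it.
export async function getOrderCached(
  orderId: string,
  loadOrder: (id: string) => Promise<unknown>,
) {
  const key = `orders:${orderId}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const order = await loadOrder(orderId);
  await redis.set(key, JSON.stringify(order), 'EX', CACHE_TTL_SECONDS);
  return order;
}

// Invalidate on writes so updates are not served from a stale cache entry.
export async function invalidateOrderCache(orderId: string) {
  await redis.del(`orders:${orderId}`);
}
```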
Connection Pooling
Connection Pool Configuration:
```typescript
{
  maxConnections: 50,
  minConnections: 10,
  idleTimeout: 30000,
  connectionTimeout: 2000,
}
```
Benefits:
- Reuse connections
- Reduce connection overhead
- Improve throughput
Async Processing
Non-Blocking Message Handling:
- Use async/await for all I/O operations
- Event loop optimization
- Prevent blocking operations
13. Testing Inter-Service Communication
Comprehensive testing ensures communication reliability.
Unit Testing
Mock Message Broker:
describe("OrderController", () => { let controller: OrderController; let mockClient: any;
beforeEach(() => { mockClient = { send: jest.fn().mockReturnValue(of({ success: true })), emit: jest.fn(), }; controller = new OrderController(mockClient); });
it("should send create order message", async () => { await controller.createOrder(createOrderDto); expect(mockClient.send).toHaveBeenCalledWith( "orders.create-order", expect.objectContaining({ data: createOrderDto }) ); });});Integration Pruebas
Test Containers (Redis/Kafka):
describe("Inter-Service Communication", () => { let redisContainer: StartedTestContainer; let ordersService: OrdersService;
beforeAll(async () => { redisContainer = await new GenericContainer("redis:7-alpine") .withExposedPorts(6379) .start();
ordersService = new OrdersService({ host: redisContainer.getHost(), port: redisContainer.getMappedPort(6379), }); });
it("should publish and consume order.created event", async () => { await ordersService.publishOrderCreated(order); const event = await waitForEvent("order.created"); expect(event.orderId).toBe(order.orderId); });});Contract Pruebas
Verify Message Schemas:
describe("Message Contracts", () => { it("should match CreateOrderDto schema", () => { const message = { messageId: uuid(), timestamp: new Date().toISOString(), data: createOrderDto, };
const validation = validateSync(plainToClass(MessageRequest, message)); expect(validation).toHaveLength(0); });});E2E Pruebas
Complete Flow Testing:
describe("Order Creation Flow", () => { it("should create order and send notification", async () => { // Create order via API Gateway const response = await request(app.getHttpServer()) .post("/orders") .send(createOrderDto) .expect(201);
// Verify order created expect(response.body.orderId).toBeDefined();
// Wait for notification event await waitForNotificationSent(response.body.orderId);
// Verify notification sent const notifications = await getNotifications(response.body.orderId); expect(notifications).toHaveLength(1); });});14. Related Documentoation
- API Gateway Architecture
- API Gateway Authentication
- API Gateway Resilience Patterns
- API Gateway API Reference
- Backend Microservices Overview
- Orders Microservice
- Notifications Microservice
- Provider Microservice
- [Database Schemas](/02-architecture/database-schemas/)
15. References
Configuration Files:
- Orders MS: algesta-ms-orders-nestjs/src/shared/config/config.transport.ts
- Notifications MS: algesta-ms-notifications-nestjs/src/shared/config/config.transport.ts
- Provider MS: algesta-ms-provider-nestjs/src/shared/config/config.transport.ts
Controllers (MessagePattern definitions):
- Orders MS: algesta-ms-orders-nestjs/src/infrastructure/controllers/
- Notifications MS: algesta-ms-notifications-nestjs/src/infrastructure/controllers/
- Provider MS: algesta-ms-provider-nestjs/src/infrastructure/controllers/