
Inter-Service Communication

1. Communication Architecture Overview

Algesta's microservices communicate through an event-driven architecture with message-based communication patterns. All external client requests are routed through the API Gateway, which serves as the single entry point to the system.

Communication Strategy:

  • Pattern: Event-driven architecture with MessagePattern decorators
  • Transport Layer: Redis (development) and Apache Kafka (production)
  • Protocol: NestJS microservices with @nestjs/microservices
  • Message Format: JSON payloads with metadata and correlation IDs
  • Communication Style: Asynchronous (event-driven) and synchronous (request-response)

Design Principles:

  • Loose Coupling: Services communicate through messages, not direct calls
  • Asynchronous Processing: Non-blocking message handling
  • Reliability: Message persistence and retry mechanisms
  • Scalability: Horizontal scaling of message consumers
  • Observability: Correlation IDs for distributed tracing

1.1. API Gateway as Communication Hub

The API Gateway serves as the primary communication hub between external clients and backend microservices. For detailed information on the gateway architecture, see API Gateway Architecture.

Gateway-to-Microservice Communication

  • Transport Layer: Configurable via MESSAGING_TYPE environment variable
    • Redis: Request-response pattern using Redis pub/sub
    • Kafka: Event-driven communication with consumer groups
  • Client Proxy: NestJS ClientProxy for sending messages to microservices
  • Message Patterns: String-based patterns for routing (e.g., 'auth.login', 'orders.create')
  • Timeout Handling: Configurable timeouts with fallback mechanisms
  • Circuit Breaker: Prevents cascading failures when microservices are unavailable

Configuration

The gateway registers client connections for each microservice in app.module.ts:

  • MS_AUTH: Authentication and user management
  • MS_ORDERS: Order lifecycle and management
  • MS_MESSAGES: Messaging and notifications
  • MS_PROVIDER: Provider management and auctions
  • MS_NOTIFICATIONS: Notification delivery

Each microservice connection uses the same transport configuration (Redis or Kafka) defined in the API Gateway's config.transport.ts.
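
A minimal registration sketch, assuming the standard ClientsModule API from @nestjs/microservices and the REDIS option exported by config.transport.ts (the actual gateway module may register additional providers or use the Kafka option instead):

// app.module.ts (illustrative sketch, not the actual gateway module)
import { Module } from '@nestjs/common';
import { ClientsModule } from '@nestjs/microservices';
import { REDIS } from './shared/config/config.transport';

@Module({
  imports: [
    ClientsModule.register([
      // One named client per backend microservice; all share the same transport options
      { name: 'MS_AUTH', ...REDIS },
      { name: 'MS_ORDERS', ...REDIS },
      { name: 'MS_MESSAGES', ...REDIS },
      { name: 'MS_PROVIDER', ...REDIS },
      { name: 'MS_NOTIFICATIONS', ...REDIS },
    ]),
  ],
})
export class AppModule {}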

Request Flow

sequenceDiagram
    participant Client
    participant Gateway as API Gateway
    participant Guard as JwtAuthGuard
    participant Controller
    participant Handler as Command/Query Handler
    participant MS as Microservice

    Client->>Gateway: HTTP Request + JWT Token
    Gateway->>Guard: Validate Token
    Guard->>MS: Validate with MS_AUTH
    MS-->>Guard: Token Valid + User Data
    Guard-->>Gateway: Attach User to Request
    Gateway->>Controller: Route to Controller
    Controller->>Handler: Execute Command/Query
    Handler->>MS: Send Message (Redis/Kafka)
    MS-->>Handler: Response
    Handler-->>Controller: Return Result
    Controller-->>Gateway: Format Response
    Gateway-->>Client: HTTP Response + TraceId

Gateway Message Patterns Used:

  • auth.login → MS_AUTH: User authentication
  • auth.validateToken → MS_AUTH: Token validation
  • orders.create-order → MS_ORDERS: Create new order
  • orders.getAll → MS_ORDERS: List all orders
  • notification.send-email → MS_NOTIFICATIONS: Send notification email
  • auction.publish → MS_PROVIDER: Publish order to auction
  • provider.get-by-id → MS_PROVIDER: Get provider details

For detailed information on gateway routing, authentication, and resilience patterns, see the API Gateway Architecture documentation.

2. Message Pattern Convention

Algesta uses a consistent naming convention for message patterns to ensure clarity and discoverability.

Pattern Naming Format

Format: service.action or service.entity.action

Components:

  • service: Target microservice name (orders, notification, provider, auction, users, documents, services)
  • entity: Optional entity name for specific resource operations
  • action: Operation to perform (create, get, update, delete, list, etc.)

Pattern Examples

Orders Microservice:

  • orders.create-order: Create a new order
  • orders.get-by-id: Get order by ID
  • orders.service.getAll: List all orders
  • orders.update-info: Update order information
  • orders.publish-order: Publish order to auction

Notifications Microservice:

  • notification.create-notification: Create notification
  • notification.send-order-reminder: Send order reminder email
  • quotation.approved: Send quotation approval notification

Provider Microservice:

  • provider.create-provider: Create provider account
  • provider.get-by-identification: Get provider by ID
  • provider.initial-inspection: Submit initial inspection
  • auction.publish: Publish auction
  • auction.create-offer: Create auction bid
  • auction.assign-provider: Assign provider to order
  • users.list-providers: List all providers
  • documents.create: Create document

Reference: algesta-ms-orders-nestjs/src/shared/config, algesta-ms-notifications-nestjs/src/shared/config, algesta-ms-provider-nestjs/src/shared/config
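
For illustration, a controller on the Orders microservice could register one of these patterns with the @MessagePattern decorator. This is a sketch only; the controller, handler, and DTO names are hypothetical:

import { Controller } from '@nestjs/common';
import { MessagePattern, Payload } from '@nestjs/microservices';

// Hypothetical handler names; the pattern string follows the service.action convention
@Controller()
export class OrderMessageController {
  constructor(private readonly createOrderHandler: CreateOrderHandler) {}

  @MessagePattern('orders.create-order')
  async createOrder(@Payload() message: { data: CreateOrderDto }) {
    // The returned value becomes the reply payload for request-response callers
    return this.createOrderHandler.execute(message.data);
  }
}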

3. Message Structure

All messages follow a standard structure for consistency and traceability.

Request Payload

interface MessageRequest<T = any> {
  messageId: string;          // Unique message ID for idempotency (UUID v4)
  timestamp: string;          // ISO 8601 timestamp of message creation
  data: T;                    // Actual payload (DTO)
  metadata?: {
    ipAddress?: string;       // Client IP address
    userAgent?: string;       // Client user agent
    requestedBy?: string;     // User who initiated the request
    correlationId?: string;   // Correlation ID for distributed tracing
  };
  user?: {                    // Authenticated user information
    userId: string;
    email: string;
    role: string;             // ADMIN, AGENT, PROVIDER, CLIENT, TECHNICIAN
    identification?: string;
    name?: string;
  };
}

Example Request:

{
  "messageId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "timestamp": "2025-11-19T10:30:00.000Z",
  "data": {
    "orderId": "ORDER-12345",
    "service": "Air Conditioning Repair",
    "address": "Calle 123 #45-67, Bogotá"
  },
  "metadata": {
    "ipAddress": "192.168.1.100",
    "userAgent": "Mozilla/5.0...",
    "requestedBy": "admin@algesta.com",
    "correlationId": "trace-abc123"
  },
  "user": {
    "userId": "user-001",
    "email": "admin@algesta.com",
    "role": "ADMIN",
    "name": "Admin User"
  }
}

Response Payload

interface MessageResponse<T = any> {
  success: boolean;           // Operation success status
  message: string;            // Human-readable message
  data?: T;                   // Response data (optional)
  errors?: string[];          // Error messages if any (optional)
  metadata?: {
    processedAt?: string;     // ISO 8601 timestamp of processing
    processingTime?: number;  // Processing time in milliseconds
  };
}

Example Success Response:

{
  "success": true,
  "message": "Order created successfully",
  "data": {
    "orderId": "ORDER-12345",
    "status": "NEW",
    "createdAt": "2025-11-19T10:30:05.123Z"
  },
  "metadata": {
    "processedAt": "2025-11-19T10:30:05.456Z",
    "processingTime": 333
  }
}

Example Error Response:

{
  "success": false,
  "message": "Failed to create order",
  "errors": ["Invalid email address", "Phone number is required"],
  "metadata": {
    "processedAt": "2025-11-19T10:30:01.123Z",
    "processingTime": 10
  }
}
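
A sketch of how a gateway-side handler might assemble and send this envelope. The function and type names (CreateOrderDto, AuthenticatedUser) are hypothetical; ClientProxy.send and firstValueFrom are standard NestJS/RxJS APIs:

import { v4 as uuidv4 } from 'uuid';
import { firstValueFrom } from 'rxjs';
import { ClientProxy } from '@nestjs/microservices';

async function sendCreateOrder(
  ordersClient: ClientProxy,
  dto: CreateOrderDto,         // hypothetical DTO type
  user: AuthenticatedUser,     // hypothetical user type attached by JwtAuthGuard
  correlationId: string,
): Promise<MessageResponse> {
  const request: MessageRequest<CreateOrderDto> = {
    messageId: uuidv4(),                  // idempotency key
    timestamp: new Date().toISOString(),  // ISO 8601
    data: dto,
    metadata: { correlationId, requestedBy: user.email },
    user: { userId: user.userId, email: user.email, role: user.role },
  };
  // Request-response: wait for the microservice's MessageResponse
  return firstValueFrom(ordersClient.send('orders.create-order', request));
}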

4. Transport Configuration

Algesta uses different message transport layers for development and production.

Redis Configuration (Development)

Purpose: Lightweight message broker for local development

Configuration (as implemented in config.transport.ts):

export const REDIS: RedisOptions = {
  transport: Transport.REDIS,
  options: {
    host: process.env.REDIS_HOST || "localhost",
    port: process.env.REDIS_PORT ? parseInt(process.env.REDIS_PORT, 10) : 6379,
    password: process.env.REDIS_PASSWORD || undefined,
  },
};

Features:

  • Pub/Sub: Publish-subscribe messaging pattern
  • Channels: Topic-based message routing
  • Connection Pooling: Reuse connections for performance
  • Optional Authentication: Password-based auth via REDIS_PASSWORD environment variable

Redis Channels:

  • One channel per message pattern (e.g., orders.create-order, notification.send-order-reminder)

Reference: algesta-ms-orders-nestjs/src/shared/config/config.transport.ts (similar in other microservices)

Kafka Configuration (Production)

Purpose: Distributed message broker for production scalability

Configuration (as implemented in config.transport.ts):

export const KAFKA: KafkaOptions = {
  transport: Transport.KAFKA,
  options: {
    client: {
      clientId: process.env.KAFKA_CLIENT_ID || "ms-documents",
      brokers: (process.env.KAFKA_BROKERS || "localhost:9092").split(","),
    },
    consumer: {
      groupId: process.env.KAFKA_GROUP_ID || "ms-documents-consumer",
    },
    run: {
      autoCommit: true,
    },
  },
};

Configuration Notes:

  • Client ID: Configurable via KAFKA_CLIENT_ID environment variable (defaults to a service-specific value)
  • Brokers: Comma-separated list via KAFKA_BROKERS environment variable (defaults to localhost:9092)
  • Consumer Group: Configurable via KAFKA_GROUP_ID environment variable (defaults to service-specific consumer group)
  • Auto Commit: Enabled (autoCommit: true) for automatic offset management

Environment Variables:

  • KAFKA_CLIENT_ID: Unique client identifier for each microservice
  • KAFKA_BROKERS: Comma-separated Kafka broker addresses (e.g., broker1:9092,broker2:9092)
  • KAFKA_GROUP_ID: Consumer group ID for load balancing

Features:

  • Topics: Configured per service (not explicitly defined in transport config; determined by MessagePattern routing)
  • Consumer Groups: Load balancing across service instances via consumer group ID
  • Auto Commit: Automatic offset management for message acknowledgment

Reference:

  • algesta-ms-orders-nestjs/src/shared/config/config.transport.ts
  • algesta-ms-notifications-nestjs/src/shared/config/config.transport.ts
  • algesta-ms-provider-nestjs/src/shared/config/config.transport.ts
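
A minimal bootstrap sketch showing how a microservice could pick between the two options above based on the MESSAGING_TYPE environment variable. This is an assumption about how the switch is wired; the actual main.ts files may differ:

// main.ts (sketch)
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions } from '@nestjs/microservices';
import { AppModule } from './app.module';
import { REDIS, KAFKA } from './shared/config/config.transport';

async function bootstrap() {
  // Choose the transport at startup; defaults to Redis for local development
  const transport = process.env.MESSAGING_TYPE === 'kafka' ? KAFKA : REDIS;

  const app = await NestFactory.createMicroservice<MicroserviceOptions>(
    AppModule,
    transport,
  );
  await app.listen();
}
bootstrap();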

5. Service Communication Matrix

Detailed mapping of all inter-service communications in the Algesta platform.

Orders ↔ Notifications

Source | Target | Pattern | Purpose | Trigger | Frequency
Orders MS | Notifications MS | notification.send-order-reminder | Send order reminder to client | Order created or updated | On-demand
Orders MS | Notifications MS | quotation.approved | Notify quote approval | Quote approved by client | Event-driven
Orders MS | Notifications MS | notification.create-notification | Create notification record | Various order events | Event-driven

Orders ↔ Provider

Source | Target | Pattern | Purpose | Trigger | Frequency
Orders MS | Provider MS | auction.publish | Publish order to auction | Order ready for auction | On-demand
Provider MS | Orders MS | orders.update-info | Update order status | Provider assigned | Event-driven
Provider MS | Orders MS | orders.get-by-id | Get order details | Provider needs order info | On-demand

Provider ↔ Notifications

Source | Target | Pattern | Purpose | Trigger | Frequency
Provider MS | Notifications MS | notification.create-notification | Create notification | Auction events | Event-driven
Provider MS | Notifications MS | notification.send-order-reminder | Send reminder to provider | Order assignment | Event-driven

API Gateway ↔ All Services

Source | Target | Pattern | Purpose | Trigger | Frequency
API Gateway | Orders MS | orders.create-order | Create new order | Client request | On-demand
API Gateway | Orders MS | orders.service.getAll | List orders | Client/admin request | On-demand
API Gateway | Provider MS | provider.create-provider | Register provider | Provider signup | On-demand
API Gateway | Provider MS | auction.list | List auctions | Admin request | On-demand
API Gateway | Notifications MS | notification.find-all-notifications | List notifications | User request | On-demand

6. Communication Patterns

Algesta implements multiple communication patterns for different use cases.

Request-Response Pattern (Synchronous)

Use Case: Operations requiring an immediate response

Characteristics:

  • Client waits for response
  • Timeout handling (2-10 seconds)
  • Used for CRUD operations and queries

Example:

// API Gateway sends request and waits for response
import { firstValueFrom } from 'rxjs';

@Get('/orders/:id')
async getOrder(@Param('id') orderId: string) {
  // send() returns an Observable; await the first (and only) reply
  const response = await firstValueFrom(
    this.ordersClient.send('orders.get-by-id', { orderId }),
  );
  return response;
}

Flow Diagram:

sequenceDiagram
    participant Client
    participant Gateway as API Gateway
    participant Orders as Orders MS

    Client->>Gateway: GET /orders/123
    Gateway->>Orders: orders.get-by-id (MessagePattern)
    Orders->>Orders: Query database
    Orders-->>Gateway: Order data
    Gateway-->>Client: HTTP 200 + Order JSON

Event-Driven Pattern (Asynchronous)

Use Case: Fire-and-forget operations, notifications, audit logs

Characteristics:

  • No waiting for response
  • Used for notifications, logging, background tasks
  • Eventual consistency

Example:

// Orders MS publishes event
this.kafkaClient.emit('order.created', {
  orderId: order.orderId,
  service: order.service,
  client: order.emailContact,
});

// Notifications MS subscribes to event
@EventPattern('order.created')
async handleOrderCreated(@Payload() data: OrderCreatedEvent) {
  await this.sendWelcomeEmail(data);
}

Flow Diagram:

sequenceDiagram
    participant Orders as Orders MS
    participant Kafka
    participant Notifications as Notifications MS
    participant SendGrid

    Orders->>Kafka: Publish order.created event
    Note over Orders: Does not wait for response
    Kafka->>Notifications: Deliver event
    Notifications->>SendGrid: Send email
    SendGrid-->>Notifications: Email sent
    Note over Notifications: Event processed independently

Saga Pattern (Distributed Transactions)

Use Case: Coordinated multi-service operations with compensation

Characteristics:

  • Orchestrates multiple services
  • Compensating transactions on failure
  • Used for complex workflows (order → auction → assignment)

Example: Order Publication to Auction

Flow Diagram:

sequenceDiagram
    participant Admin
    participant Orders as Orders MS
    participant Provider as Provider MS
    participant Notifications as Notifications MS
    participant Kafka

    Admin->>Orders: Publish order to auction
    Orders->>Orders: Validate order
    Orders->>Provider: auction.publish
    Provider->>Provider: Create auction record
    Provider-->>Orders: Auction created
    Orders->>Orders: Update order status (AUCTION)
    Orders->>Kafka: Publish order.auction.published
    Kafka->>Notifications: order.auction.published
    Notifications->>Notifications: Notify eligible providers

    Note over Orders,Provider: If auction creation fails
    alt Auction creation fails
        Provider-->>Orders: Error response
        Orders->>Orders: Rollback order status
        Orders->>Notifications: Send failure notification
    end

Saga Steps:

  1. Orders MS: Update order status to AUCTION
  2. Provider MS: Create auction record
  3. Orders MS: Confirm auction creation
  4. Notifications MS: Notify providers
  5. Compensation: If step fails, rollback previous steps
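
A simplified orchestration sketch of these steps with a compensating rollback. The class, repository, and client names are hypothetical and only illustrate the pattern:

import { ClientProxy } from '@nestjs/microservices';
import { firstValueFrom } from 'rxjs';

// Hypothetical saga orchestrator living in the Orders MS
class PublishOrderSaga {
  constructor(
    private readonly providerClient: ClientProxy,
    private readonly notificationsClient: ClientProxy,
    private readonly orders: { updateStatus(id: string, status: string): Promise<void> },
  ) {}

  async execute(orderId: string) {
    // Step 1: mark the order as being auctioned
    await this.orders.updateStatus(orderId, 'AUCTION');
    try {
      // Step 2: ask the Provider MS to create the auction (request-response)
      const auction = await firstValueFrom(
        this.providerClient.send('auction.publish', { data: { orderId } }),
      );
      if (!auction?.success) throw new Error(auction?.message ?? 'auction.publish failed');

      // Steps 3-4: notify eligible providers asynchronously (fire-and-forget)
      this.notificationsClient.emit('order.auction.published', { orderId });
      return { success: true, message: 'Order published to auction' };
    } catch (err) {
      // Compensation: roll back the order status and record the failure
      await this.orders.updateStatus(orderId, 'NEW');
      this.notificationsClient.emit('notification.create-notification', {
        data: { orderId, type: 'AUCTION_PUBLISH_FAILED' },
      });
      return { success: false, message: 'Auction publication failed', errors: [String(err)] };
    }
  }
}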

7. Message Flow Diagrams

Complete Order Creation Flow

sequenceDiagram
    participant Client
    participant Gateway as API Gateway
    participant Orders as Orders MS
    participant MongoDB as Orders DB
    participant Kafka
    participant Notifications as Notifications MS
    participant SendGrid

    Client->>Gateway: POST /orders (REST)
    Gateway->>Orders: orders.create-order (MessagePattern)
    activate Orders
    Orders->>Orders: Validate CreateOrderDto
    Orders->>Orders: Execute CreateOrderCommand
    Orders->>MongoDB: db.orders.insertOne()
    MongoDB-->>Orders: Order document created
    Orders->>MongoDB: db.order-history.insertOne()
    Orders->>Kafka: Publish order.created event
    Orders-->>Gateway: CreateOrderResponseDto
    deactivate Orders
    Gateway-->>Client: HTTP 201 + Order JSON

    Kafka->>Notifications: order.created event
    activate Notifications
    Notifications->>Notifications: Load email template
    Notifications->>Notifications: Render with order data
    Notifications->>SendGrid: Send welcome email
    SendGrid-->>Notifications: Email sent (202)
    Notifications->>MongoDB: Save notification record
    deactivate Notifications

Auction Publication and Bidding Flow

sequenceDiagram
    participant Admin
    participant Orders as Orders MS
    participant Provider as Provider MS
    participant ProviderDB as Provider DB
    participant Kafka
    participant Notifications as Notifications MS
    participant ProviderUser as Provider

    Admin->>Orders: Publish order to auction
    Orders->>Provider: auction.publish (MessagePattern)
    activate Provider
    Provider->>Provider: Validate auction eligibility
    Provider->>ProviderDB: db.auction.insertOne()
    ProviderDB-->>Provider: Auction created
    Provider->>Kafka: Publish auction.created event
    Provider-->>Orders: {success: true, auctionId}
    deactivate Provider
    Orders->>Orders: Update order.auction = true

    Kafka->>Notifications: auction.created event
    Notifications->>Notifications: Get eligible providers
    Notifications->>SendGrid: Send auction notifications
    SendGrid->>ProviderUser: Email: New auction available

    ProviderUser->>Provider: auction.create-offer (MessagePattern)
    activate Provider
    Provider->>Provider: Validate provider eligibility
    Provider->>ProviderDB: db.auction.updateOne() - Add bid
    ProviderDB-->>Provider: Bid added
    Provider->>Kafka: Publish auction.bid.created event
    Provider-->>ProviderUser: {success: true, bidId}
    deactivate Provider

    Note over Admin,Provider: Auction closes
    Admin->>Provider: auction.assign-provider
    activate Provider
    Provider->>ProviderDB: Update bid (isAssigned=true)
    Provider->>Orders: orders.update-info (ASSIGNED)
    Provider->>Kafka: Publish provider.assigned event
    Provider-->>Admin: {success: true}
    deactivate Provider

    Kafka->>Notifications: provider.assigned event
    Notifications->>SendGrid: Notify client and provider

Provider Workflow Flow

sequenceDiagram
    participant Provider
    participant ProviderMS as Provider MS
    participant OrdersMS as Orders MS
    participant Notifications as Notifications MS
    participant Client

    Note over Provider,Client: Phase 1: Initial Inspection
    Provider->>ProviderMS: provider.initial-inspection
    ProviderMS->>ProviderMS: Save inspection data
    ProviderMS->>OrdersMS: orders.update-info (INSPECTION_COMPLETED)
    ProviderMS->>Notifications: Create notification
    Notifications->>Client: Email: Inspection completed

    Note over Provider,Client: Phase 2: Work Start
    Provider->>ProviderMS: confirm-work-start
    ProviderMS->>ProviderMS: Update work status
    ProviderMS->>OrdersMS: orders.update-info (WORK_STARTED)

    Note over Provider,Client: Phase 3: Work Completion
    Provider->>ProviderMS: work-completion-report
    ProviderMS->>ProviderMS: Save completion report
    ProviderMS->>OrdersMS: orders.update-info (WORK_COMPLETED)
    ProviderMS->>Notifications: Create notification
    Notifications->>Client: Email: Work completed

    Note over Provider,Client: Phase 4: Final Delivery
    Provider->>ProviderMS: provider.final-delivery
    ProviderMS->>ProviderMS: Save delivery data
    ProviderMS->>OrdersMS: orders.update-info (PENDING_DELIVERY)
    ProviderMS->>Notifications: Create notification
    Notifications->>Client: Email: Approval required

    Client->>OrdersMS: Approve final delivery
    OrdersMS->>OrdersMS: Update order (COMPLETED)
    OrdersMS->>Notifications: Create notification
    Notifications->>Provider: Email: Order completed

8. Error Handling and Retry Logic

Comprehensive error handling ensures message delivery reliability.

Retry Strategy

Implementation Status: ⚠️ Partially Implemented - The current transport configuration files do not explicitly define retryAttempts or retryDelay parameters. Retry behavior depends on the default settings of the underlying NestJS microservices transport library (Redis/Kafka).

Current Configuration (as seen in actual transport config files):

  • Redis: No explicit retry configuration; relies on NestJS Redis transport defaults
  • Kafka: Uses autoCommit: true for automatic offset management; no explicit retry configuration

Retry Behavior:

  • Retry logic depends on the NestJS microservices transport library defaults
  • For Redis: NestJS may apply default retry behavior based on the redis or ioredis client
  • For Kafka: Message acknowledgment is automatic (autoCommit: true); failed messages may not be retried unless application-level retry logic is implemented

Recommended Enhancement: Add explicit retry configuration to transport options:

options: {
  // Redis
  retryAttempts: 5,
  retryDelay: 1000,

  // Or Kafka
  retry: {
    retries: 5,
    initialRetryTime: 100,
    multiplier: 2,
  },
}

Error Types

1. Transient Errors (Retry):

  • Network timeout
  • Service temporarily unavailable
  • Database connection timeout
  • Rate limit exceeded

Action: Retry with exponential backoff
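
If application-level retries are added for transient errors, a small helper like the following could implement the exponential backoff. This is a generic sketch, not code from the repository:

// Retry an async operation with exponential backoff (transient errors only)
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts) break;
      // Delay doubles on every attempt: 1s, 2s, 4s, 8s, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: retry a send that may fail transiently
// const result = await withRetry(() => firstValueFrom(client.send('orders.get-by-id', { orderId })));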

2. Permanent Errors (No Retry):

  • Validation errors (invalid DTO)
  • Business logic errors (order not found)
  • Authentication errors (invalid token)
  • Authorization errors (insufficient permissions)

Action: Return error to client, log, no retry

3. Timeout Errors:

  • Message processing timeout (default: 30s)
  • Database query timeout
  • External API timeout

Action: Retry with circuit breaker

Circuit Breaker Pattern

Implementation Status: ✅ Implemented - Circuit breaker functionality is available in all microservices via the HealthModule and can be monitored at GET /health/circuit-breaker.

States:

  • Closed (Normal): All requests processed
  • Open (Failing): Stop sending requests after threshold
  • Half-Open (Testing): Send a single test request

Configuration (as implemented in HealthService):

  • Failure Threshold: 5 consecutive failures trigger open state
  • Recovery Timeout: 60 seconds (60000ms) in open state before transitioning to half-open
  • Monitoring Period: 5 minutes (300000ms) for failure tracking
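
A minimal state-machine sketch that mirrors these thresholds (illustrative only; see HealthService in the repository for the actual implementation):

type CircuitState = 'CLOSED' | 'OPEN' | 'HALF_OPEN';

class SimpleCircuitBreaker {
  private state: CircuitState = 'CLOSED';
  private failureCount = 0;
  private nextAttemptTime = 0;

  constructor(
    private readonly failureThreshold = 5,        // failures before opening
    private readonly recoveryTimeoutMs = 60_000,  // time in OPEN before HALF_OPEN
  ) {}

  async call<T>(operation: () => Promise<T>): Promise<T> {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttemptTime) throw new Error('Circuit breaker is OPEN');
      this.state = 'HALF_OPEN'; // allow a single trial request
    }
    try {
      const result = await operation();
      this.state = 'CLOSED';
      this.failureCount = 0;
      return result;
    } catch (err) {
      this.failureCount++;
      if (this.state === 'HALF_OPEN' || this.failureCount >= this.failureThreshold) {
        this.state = 'OPEN';
        this.nextAttemptTime = Date.now() + this.recoveryTimeoutMs;
      }
      throw err;
    }
  }
}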

Monitoring: The circuit breaker state can be checked via:

GET /health/circuit-breaker

Example Response:

{
  "service": "ms-orders",
  "circuitBreaker": {
    "state": "CLOSED",
    "failureCount": 0,
    "lastFailureTime": null,
    "nextAttemptTime": null,
    "options": {
      "failureThreshold": 5,
      "recoveryTimeout": 60000,
      "monitoringPeriod": 300000
    }
  },
  "timestamp": "2025-11-19T10:30:00.000Z"
}

Reference: algesta-ms-orders-nestjs/src/shared/health/health.service.ts and similar files in other microservices.

Dead Letter Queue (DLQ)

Implementation Status: ⚠️ Recommended Pattern - DLQ topics are not currently configured in the codebase. This is a recommended future enhancement for production deployments.

Recommended DLQ Configuration (for future Implementación):

DLQ Topics:

  • algesta-orders-dlq: Failed Orders MS messages
  • algesta-notifications-dlq: Failed Notifications MS messages
  • algesta-provider-dlq: Failed Provider MS messages

Recommended DLQ Processing:

  1. Message fails after configured max retries
  2. Move to DLQ topic
  3. Log error details
  4. Alert the operations team
  5. Manual review and reprocessing
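
If DLQs are introduced, the consumer-side handoff could look like the following sketch. The topic name matches the recommended list above; the function and its parameters are hypothetical:

import { ClientProxy } from '@nestjs/microservices';

// Move a message that exhausted its retries to the service's DLQ topic
async function sendToDeadLetterQueue(
  dlqClient: ClientProxy,   // client bound to the Kafka transport
  originalPattern: string,
  message: unknown,
  error: Error,
  attempts: number,
) {
  dlqClient.emit('algesta-orders-dlq', {
    originalPattern,
    message,
    error: error.message,
    attempts,
    failedAt: new Date().toISOString(),
  });
  // Log and alert so the operations team can review and reprocess manually
  console.error(`Message for ${originalPattern} moved to DLQ after ${attempts} attempts`);
}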

Recommended Monitoring:

  • Alert when DLQ size > 10 messages
  • Daily DLQ review by the operations team
  • Weekly DLQ cleanup of resolved messages

Current Behavior: Failed messages are logged and retried according to the transport library's default retry behavior (no explicit retryAttempts or retryDelay values are configured, as noted in the Retry Strategy above). After retries are exhausted, the error is logged but no DLQ is used.

9. Message Idempotency

Implementation Status: ⚠️ Recommended Pattern - Idempotency tracking via a dedicated database store is not currently implemented in the codebase. This is a recommended enhancement for production deployments to ensure exactly-once message processing.

Current Behavior: Messages include a messageId field in the request payload structure, but there is no centralized idempotency store to prevent duplicate processing. Application-level duplicate detection may be implemented in individual handlers.

Idempotency Key:

  • Each message includes unique messageId (UUID v4)
  • The service should store the messageId in the database before processing
  • Check messageId before processing; if exists, return cached response

Recommended Idempotency Store:

interface IdempotencyRecord {
  messageId: string;
  processedAt: Date;
  response: any;
  ttl: Date; // 24 hours from processedAt
}

Recommended Processing Flow:

async processMessage(message: MessageRequest): Promise<MessageResponse> {
  // Check if already processed
  const existing = await this.idempotencyRepo.findByMessageId(message.messageId);
  if (existing) {
    this.logger.debug(`Message ${message.messageId} already processed`);
    return existing.response;
  }

  // Process message
  const response = await this.handleMessage(message);

  // Store idempotency record
  await this.idempotencyRepo.create({
    messageId: message.messageId,
    processedAt: new Date(),
    response,
    ttl: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
  });

  return response;
}

Recommended TTL (Time To Live):

  • Idempotency records should expire after 24 hours
  • MongoDB TTL index can automatically delete expired records
  • Reduces storage requirements
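
A sketch of such a TTL index using Mongoose; the schema and model names are hypothetical:

import { Schema, model } from 'mongoose';

const IdempotencyRecordSchema = new Schema({
  messageId: { type: String, required: true, unique: true },
  processedAt: { type: Date, required: true },
  response: { type: Schema.Types.Mixed },
  ttl: { type: Date, required: true },
});

// MongoDB deletes a document shortly after its `ttl` date passes
IdempotencyRecordSchema.index({ ttl: 1 }, { expireAfterSeconds: 0 });

export const IdempotencyRecord = model('IdempotencyRecord', IdempotencyRecordSchema);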

10. Monitoring and Observability

Comprehensive monitoring ensures system health and quick issue resolution.

Metrics to Track

Message Throughput:

  • Messages published per second (by pattern)
  • Messages consumed per second (by pattern)
  • Peak throughput times

Message Latency:

  • p50 (median): 50th percentile latency
  • p95: 95th percentile latency
  • p99: 99th percentile latency
  • Max latency

Error Metrics:

  • Error rate (errors / total messages)
  • Error rate by type (transient, permanent, timeout)
  • DLQ message count
  • Retry count

Infrastructure Metrics:

  • Kafka lag (messages waiting in queue)
  • Redis memory usage
  • Connection pool utilization
  • Circuit breaker state changes

Logging

Structured Logging (Winston):

logger.info("Message processed", {
  messageId: message.messageId,
  pattern: "orders.create-order",
  processingTime: 125,
  correlationId: message.metadata?.correlationId,
  userId: message.user?.userId,
});

Log Levels:

  • error: Errors requiring immediate attention
  • warn: Warnings (retry attempts, deprecation)
  • info: Normal operations (message processed)
  • debug: Detailed debugging information

Correlation IDs:

  • Unique ID per request flow
  • Trace requests across multiple services
  • Include in all logs and messages
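
A sketch of how a correlation ID could be generated at the gateway and propagated through message metadata and logs. Field names follow the MessageRequest structure in Section 3; the helper itself is hypothetical:

import { randomUUID } from 'crypto';

// Reuse the caller's correlation ID if present, otherwise start a new trace
function withCorrelation<T>(data: T, incomingCorrelationId?: string): MessageRequest<T> {
  const correlationId = incomingCorrelationId ?? randomUUID();
  return {
    messageId: randomUUID(),
    timestamp: new Date().toISOString(),
    data,
    metadata: { correlationId },
  };
}

// Downstream services include the same ID in every log entry, e.g.:
// logger.info('Message processed', { correlationId: message.metadata?.correlationId });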

Tracing

Distributed Tracing:

  • Trace message flow across services
  • Identify bottlenecks and latency issues
  • Visualize service dependencies

Trace Attributes:

  • Service name
  • Message pattern
  • Processing time
  • Error details (if any)
  • User information

11. Security Considerations

Security measures protect inter-service communication.

Authentication

JWT Tokens:

  • Tokens issued by API Gateway
  • Token validation in each microservice
  • User context passed in message metadata

Token Structure:

{
  userId: string;
  email: string;
  role: string;
  iat: number; // Issued at
  exp: number; // Expiration
}

Authorization

Role-Based Access Control (RBAC):

  • Roles: ADMIN, AGENT, PROVIDER, CLIENT, TECHNICIAN
  • Guards for endpoint protection
  • Role checks in command/query handlers

Authorization Check:

@MessagePattern('orders.delete-order')
async deleteOrder(@Payload() data: any, @Ctx() context: any) {
  const user = context.user;
  if (user.role !== 'ADMIN') {
    throw new ForbiddenException('Only admins can delete orders');
  }
  // Process deletion
}

Encryption

Transport Encryption:

  • TLS for Kafka connections in production
  • Redis AUTH password authentication
  • MongoDB connection string encryption

Message Encryption (Sensitive Data):

  • Encrypt sensitive fields (passwords, payment info)
  • Use AES-256 encryption
  • Store encryption keys in secure vault
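
A minimal AES-256-GCM sketch using Node's built-in crypto module (key management through a vault is out of scope; this is illustrative only, not the repository's implementation):

import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// key must be 32 bytes (256 bits), e.g. loaded from a secrets manager
function encryptField(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // GCM recommends a 12-byte IV
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store IV and auth tag alongside the ciphertext
  return [iv, tag, encrypted].map((b) => b.toString('base64')).join('.');
}

function decryptField(payload: string, key: Buffer): string {
  const [iv, tag, encrypted] = payload.split('.').map((p) => Buffer.from(p, 'base64'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');
}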

Message Validation

DTO Validation:

  • class-validator for input validation
  • class-transformer for object transformation
  • Validate all incoming messages

Validation Example:

export class CreateOrderDto {
  @IsString()
  @IsNotEmpty()
  service: string;

  @IsEmail()
  @IsNotEmpty()
  emailContact: string;

  @IsPhoneNumber("CO")
  phoneNumber: string;
}

Rate Limiting

Message Rate Limits:

  • Prevent message flooding
  • Per-service limits (e.g., 1000 messages/minute)
  • Per-user limits (e.g., 100 messages/minute)

12. Performance Optimization

Strategies to optimize inter-service communication performance.

Batching

Message Batching:

  • Group multiple messages for efficiency
  • Reduce network overhead
  • Example: Batch notification sends

Batch Configuration:

{
  batchSize: 100,
  batchTimeout: 5000, // 5 seconds
}

Compression

Message Compression:

  • Compress large payloads (> 1 KB)
  • Use gzip or snappy compression
  • Reduce network bandwidth

Compression Threshold:

  • Small messages (< 1 KB): No compression
  • Medium messages (1-10 KB): Optional compression
  • Large messages (> 10 KB): Always compress

Caching

Response Caching:

  • Cache frequently accessed data
  • Redis cache for hot data
  • TTL-based expiration

Cache Strategy:

  • Read-through cache for queries
  • Write-through cache for updates
  • Cache invalidation on changes
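
A read-through cache sketch using ioredis (assumed here as the Redis client; key naming and TTL values are illustrative):

import Redis from 'ioredis';

const redis = new Redis({ host: process.env.REDIS_HOST || 'localhost', port: 6379 });

// Read-through: return the cached value if present, otherwise load and cache it
async function getOrLoad<T>(key: string, ttlSeconds: number, load: () => Promise<T>): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;

  const value = await load();
  await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds); // TTL-based expiration
  return value;
}

// Usage: cache a provider lookup for 5 minutes
// const provider = await getOrLoad(`provider:${id}`, 300, () => providerService.findById(id));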

Connection Pooling

Connection Pool Configuration:

{
  maxConnections: 50,
  minConnections: 10,
  idleTimeout: 30000,
  connectionTimeout: 2000,
}

Benefits:

  • Reuse connections
  • Reduce connection overhead
  • Improve throughput

Async Processing

Non-Blocking Message Handling:

  • Use async/await for all I/O operations
  • Event loop optimization
  • Prevent blocking operations

13. Testing Inter-Service Communication

Comprehensive testing ensures communication reliability.

Unit Testing

Mock Message Broker:

describe("OrderController", () => {
let controller: OrderController;
let mockClient: any;
beforeEach(() => {
mockClient = {
send: jest.fn().mockReturnValue(of({ success: true })),
emit: jest.fn(),
};
controller = new OrderController(mockClient);
});
it("should send create order message", async () => {
await controller.createOrder(createOrderDto);
expect(mockClient.send).toHaveBeenCalledWith(
"orders.create-order",
expect.objectContaining({ data: createOrderDto })
);
});
});

Integration Testing

Test Containers (Redis/Kafka):

describe("Inter-Service Communication", () => {
let redisContainer: StartedTestContainer;
let ordersService: OrdersService;
beforeAll(async () => {
redisContainer = await new GenericContainer("redis:7-alpine")
.withExposedPorts(6379)
.start();
ordersService = new OrdersService({
host: redisContainer.getHost(),
port: redisContainer.getMappedPort(6379),
});
});
it("should publish and consume order.created event", async () => {
await ordersService.publishOrderCreated(order);
const event = await waitForEvent("order.created");
expect(event.orderId).toBe(order.orderId);
});
});

Contract Testing

Verify Message Schemas:

describe("Message Contracts", () => {
it("should match CreateOrderDto schema", () => {
const message = {
messageId: uuid(),
timestamp: new Date().toISOString(),
data: createOrderDto,
};
const validation = validateSync(plainToClass(MessageRequest, message));
expect(validation).toHaveLength(0);
});
});

E2E Testing

Complete Flow Test:

describe("Order Creation Flow", () => {
it("should create order and send notification", async () => {
// Create order via API Gateway
const response = await request(app.getHttpServer())
.post("/orders")
.send(createOrderDto)
.expect(201);
// Verify order created
expect(response.body.orderId).toBeDefined();
// Wait for notification event
await waitForNotificationSent(response.body.orderId);
// Verify notification sent
const notifications = await getNotifications(response.body.orderId);
expect(notifications).toHaveLength(1);
});
});

15. References

Configuration Files:

  • Orders MS: algesta-ms-orders-nestjs/src/shared/config/config.transport.ts
  • Notifications MS: algesta-ms-notifications-nestjs/src/shared/config/config.transport.ts
  • Provider MS: algesta-ms-provider-nestjs/src/shared/config/config.transport.ts

Controllers (MessagePattern definitions):

  • Orders MS: algesta-ms-orders-nestjs/src/infrastructure/controllers/
  • Notifications MS: algesta-ms-notifications-nestjs/src/infrastructure/controllers/
  • Provider MS: algesta-ms-provider-nestjs/src/infrastructure/controllers/