
Proxies for Microservices Architecture: API Protection, Load Balancing, and Security

A complete guide to integrating proxies into microservice architecture: API protection, load balancing, communication security between services, and configuration examples.

📅 February 18, 2026

Microservice architecture requires reliable communication between services, protection of external API requests, and load balancing. Proxy servers address these challenges by acting as intermediaries between services, external APIs, and clients. In this guide, we will explore how to properly integrate proxies into microservice infrastructure, what types of proxies to use for different scenarios, and how to set up secure communication.

The Role of Proxy in Microservice Architecture

In microservice architecture, proxy servers perform several critically important functions that differ from traditional uses of proxies for anonymization or bypassing restrictions. Here, proxies become an integral part of the infrastructure, ensuring reliable and secure communication between system components.

Main Roles of Proxies in Microservices:

  • API Gateway — a single entry point for all client requests that routes them to the appropriate microservices, hiding the internal architecture of the system
  • Sidecar Proxy — a proxy container that runs alongside each service (Service Mesh pattern), intercepting all incoming and outgoing traffic
  • Reverse Proxy — distributing load among multiple instances of a single service, ensuring fault tolerance
  • Forward Proxy — controlling and protecting outgoing requests to external APIs, hiding internal IP addresses of the infrastructure
  • Security Proxy — SSL/TLS termination, authentication, authorization, protection against DDoS and other attacks

Proxies enable the implementation of important architectural patterns: circuit breaker (automatic disconnection of non-working services), retry logic (retries on failures), rate limiting (request frequency limitation), request/response transformation (data format conversion). All of this makes the system more resilient to failures and simplifies the management of complex distributed infrastructure.
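As an illustration, the rate-limiting pattern mentioned above usually boils down to a token bucket. Here is a minimal sketch in Python; the class name and parameters are illustrative, not taken from any specific proxy:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the idea behind proxy-level
    rate limiting (illustrative sketch, not a production implementation)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bucket allowing bursts of 5, refilling 10 tokens per second
bucket = TokenBucket(rate=10, capacity=5)
```

NGINX's `limit_req` and Envoy's local rate limit filter implement variations of the same idea; the `burst` parameter in NGINX plays the role of `capacity` here.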

Important: In microservice architecture, proxies operate at two levels — as an external gateway for clients (API Gateway) and as internal proxies between services (Service Mesh). Both levels are critically important for the security and reliability of the system.

Types of Proxies for Different Use Cases

The choice of proxy type depends on the specific task in microservice architecture. Different scenarios require different characteristics: speed, reliability, anonymity, or geographical distribution.

Scenario — proxy type — why:

  • Internal communication between services — HTTP/HTTPS proxies (Envoy, NGINX): maximum speed, low latency, HTTP/2 support
  • Requests to external APIs with limits — residential proxies: real user IPs bypass rate limits with a low risk of blocking
  • Data parsing for analytics — datacenter proxies: high speed and low cost, suitable for bulk requests
  • Working with mobile APIs — mobile proxies: imitate real mobile users, access mobile-only APIs
  • Load balancing — reverse proxy (HAProxy, NGINX): traffic distribution, health checks, automatic failover
  • Geographically distributed systems — residential proxies with geo-targeting: access to regional APIs, compliance with data localization requirements

For internal communication between microservices, specialized proxy solutions like Envoy Proxy or NGINX are typically used, which are optimized for low latency and high throughput. They support modern protocols (HTTP/2, gRPC) and integrate with Service Mesh systems.

When working with external APIs, the choice depends on the specific service requirements. If an API has strict rate limits or blocks requests from datacenter IPs, residential proxies are necessary. For bulk data collection, where speed is more important than anonymity, datacenter proxies are suitable. Mobile proxies are required when working with APIs that check device type or require mobile IP addresses.

Proxy as API Gateway: Protection and Routing

An API Gateway is a specialized proxy server that serves as a single entry point for all client requests to the microservice system. Instead of clients directly accessing dozens of different services, they send all requests to one address, the API Gateway, which routes them to the necessary services.

Main Functions of API Gateway:

  • Request Routing — determining which microservice should handle the request based on URL, headers, or other parameters
  • Authentication and Authorization — verifying tokens (JWT, OAuth), managing access to different services
  • Rate Limiting — limiting the number of requests from a single client to protect against overload and DDoS
  • Response Aggregation — combining data from several services into a single response for the client
  • Protocol Transformation — converting REST to gRPC, HTTP/1.1 to HTTP/2
  • Caching — storing frequently requested data to reduce load on services
  • Logging and Monitoring — centralized collection of metrics and logs for all requests

Popular solutions for API Gateway include Kong, Tyk, AWS API Gateway, Azure API Management, NGINX Plus, and Traefik. The choice depends on the scale of the system, performance requirements, and the cloud platform used.

// Example NGINX configuration as API Gateway
upstream auth_service {
    server auth:8001;
}

upstream user_service {
    server user:8002;
}

upstream order_service {
    server order:8003;
}

# Rate limit zone must be defined in the http context, outside server blocks
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    server_name api.example.com;

    location /api/auth/ {
        limit_req zone=api_limit burst=20;
        proxy_pass http://auth_service/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/users/ {
        # Token verification before proxying
        auth_request /auth/verify;
        proxy_pass http://user_service/;
    }

    location /api/orders/ {
        auth_request /auth/verify;
        proxy_pass http://order_service/;
    }

    # Internal endpoint for token verification
    location = /auth/verify {
        internal;
        proxy_pass http://auth_service/verify;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}

The API Gateway hides the internal architecture of the system from external clients. Clients do not know how many microservices exist and how they interact — they see only a single API. This simplifies versioning, allows changes to the internal structure without affecting clients, and improves security, as internal services are not directly accessible from the internet.

Integration with Service Mesh (Istio, Linkerd)

A Service Mesh is an infrastructure layer that manages communication between microservices using proxy servers deployed alongside each service (Sidecar pattern). Unlike an API Gateway, which only handles external traffic, a Service Mesh controls all internal traffic between services.

The most popular Service Mesh solutions are Istio (which uses Envoy Proxy as a sidecar) and Linkerd (which uses its own lightweight proxy). They automatically inject a proxy container next to each pod in Kubernetes, intercepting all incoming and outgoing traffic.

Service Mesh Capabilities via Proxies:

  • Mutual TLS (mTLS) — automatic encryption of all traffic between services with mutual authentication
  • Traffic Management — controlling routing, canary deployments, A/B testing
  • Observability — automatic collection of metrics, traces, and logs without modifying service code
  • Resilience — circuit breaking, retry logic, timeout management, fault injection for testing
  • Service Discovery — automatic service discovery and load balancing
# Example Istio VirtualService configuration for routing
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: user-service
spec:
  hosts:
  - user-service
  http:
  - match:
    - headers:
        version:
          exact: "v2"
    route:
    - destination:
        host: user-service
        subset: v2
  - route:
    - destination:
        host: user-service
        subset: v1
      weight: 90
    - destination:
        host: user-service
        subset: v2
      weight: 10  # Canary deployment: 10% of traffic to v2

---
# Circuit Breaker to protect against cascading failures
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: user-service
spec:
  host: user-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50

The Service Mesh addresses the problem of "distributed monolith" — where the logic of interaction between services (retry, timeout, circuit breaking) is duplicated in the code of each service. Instead, all this logic is moved to the proxy layer, simplifying the code of services and ensuring consistent behavior across the entire system.

An important advantage is complete traffic transparency. Each request between services passes through a proxy that logs metrics: response time, error codes, payload size. This data is automatically sent to monitoring systems (Prometheus, Grafana) and tracing systems (Jaeger, Zipkin), creating a complete picture of the distributed system's operation without the need to add instrumentation to the code of each service.

Protecting Requests to External APIs via Proxy

Microservices often interact with external APIs: payment systems, geolocation services, social media APIs, data providers. Direct requests to external APIs create several problems: exposure of internal IP addresses of the infrastructure, risk of blocking when exceeding rate limits, lack of control over outgoing traffic.

Using proxies for outgoing requests solves these problems and adds additional capabilities:

  • Hiding Infrastructure — external APIs see the proxy IP addresses, not your servers
  • Bypassing Rate Limits — rotating IP addresses to distribute requests
  • Geographical Distribution — access to regional APIs through proxies in required countries
  • Centralized Management — a single point of control for all outgoing requests
  • Caching Responses — reducing the number of requests to expensive APIs
  • Monitoring and Logging — tracking all requests to external services
// Python: setting up a proxy for requests to external APIs
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

class ExternalAPIClient:
    def __init__(self, proxy_url):
        self.session = requests.Session()
        
        # Setting up the proxy
        self.proxies = {
            'http': proxy_url,
            'https': proxy_url
        }
        
        # Retry logic for resilience
        retry_strategy = Retry(
            total=3,
            backoff_factor=1,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=["GET", "POST"]
        )
        adapter = HTTPAdapter(max_retries=retry_strategy)
        self.session.mount("http://", adapter)
        self.session.mount("https://", adapter)
    
    def call_payment_api(self, data):
        """Request to payment API via proxy"""
        try:
            response = self.session.post(
                'https://api.payment-provider.com/charge',
                json=data,
                proxies=self.proxies,
                timeout=10,
                headers={'User-Agent': 'MyService/1.0'}
            )
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            # Logging the error
            print(f"Payment API error: {e}")
            raise

# Usage with a proxy pool for rotation
class ProxyPool:
    def __init__(self, proxy_list):
        self.proxies = proxy_list
        self.current = 0
    
    def get_next(self):
        proxy = self.proxies[self.current]
        self.current = (self.current + 1) % len(self.proxies)
        return proxy

# Initialization
proxy_pool = ProxyPool([
    'http://user:pass@proxy1.example.com:8080',
    'http://user:pass@proxy2.example.com:8080',
    'http://user:pass@proxy3.example.com:8080'
])

# Create a new client per request so each one takes the next proxy from the pool
client = ExternalAPIClient(proxy_pool.get_next())

For working with external APIs that have strict limits or block requests from datacenter IPs, residential proxies become a necessity. They provide real IP addresses of home users, which reduces the risk of blocking and allows bypassing geographical restrictions.

// Node.js: proxy for external APIs with automatic rotation
const axios = require('axios');
// https-proxy-agent v5+ uses a named export
const { HttpsProxyAgent } = require('https-proxy-agent');

class ExternalAPIService {
  constructor(proxyList) {
    this.proxyList = proxyList;
    this.currentProxyIndex = 0;
    this.requestCounts = new Map(); // Request counter for rate limiting
  }

  getNextProxy() {
    const proxy = this.proxyList[this.currentProxyIndex];
    this.currentProxyIndex = (this.currentProxyIndex + 1) % this.proxyList.length;
    return proxy;
  }

  async callAPI(endpoint, data, options = {}) {
    const proxyUrl = this.getNextProxy();
    const agent = new HttpsProxyAgent(proxyUrl);

    // Rate limiting: no more than 100 requests per minute per proxy
    const proxyKey = proxyUrl;
    const now = Date.now();
    let count = this.requestCounts.get(proxyKey);
    if (!count || now >= count.resetTime) {
      count = { count: 0, resetTime: now + 60000 };
      this.requestCounts.set(proxyKey, count); // store so increments persist
    }

    if (count.count >= 100) {
      // This proxy is exhausted - try the next one, but stop after a full cycle
      const attempts = (options._attempts || 0) + 1;
      if (attempts > this.proxyList.length) {
        throw new Error('All proxies rate limited');
      }
      return this.callAPI(endpoint, data, { ...options, _attempts: attempts });
    }
    count.count++;

    try {
      const response = await axios({
        method: options.method || 'POST',
        url: endpoint,
        data: data,
        httpsAgent: agent,
        timeout: options.timeout || 10000,
        headers: {
          'User-Agent': 'Mozilla/5.0 (compatible; MyService/1.0)',
          ...options.headers
        }
      });

      return response.data;
    } catch (error) {
      if (error.response?.status === 429) {
        // Rate limit exceeded - switch to another proxy, bounded to one full cycle
        console.log(`Rate limit on ${proxyUrl}, switching proxy`);
        const attempts = (options._attempts || 0) + 1;
        if (attempts > this.proxyList.length) throw error;
        return this.callAPI(endpoint, data, { ...options, _attempts: attempts });
      }
      throw error;
    }
  }
}

// Usage
const apiService = new ExternalAPIService([
  'http://user:pass@proxy1.example.com:8080',
  'http://user:pass@proxy2.example.com:8080'
]);

module.exports = apiService;

Load Balancing and Fault Tolerance

Proxy servers play a key role in ensuring high availability of the microservice system through load balancing and automatic failover. When you have multiple instances of a single service running (for horizontal scaling), the proxy distributes requests among them, ensuring an even load.

Main Load Balancing Algorithms:

  • Round Robin — sequentially sending requests to each server in the list, simple and effective for homogeneous servers
  • Least Connections — sending the request to the server with the fewest active connections, suitable for long requests
  • IP Hash — binding the client to a specific server based on their IP, ensuring sticky sessions
  • Weighted Round Robin — distribution considering server capacity (more powerful servers receive more requests)
  • Random — randomly selecting a server, suitable for stateless services
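For reference, weighted round robin, the algorithm behind HAProxy's `weight` option, can be sketched in a few lines of Python (server names here are placeholders):

```python
import itertools

def weighted_round_robin(servers):
    """Yield servers in proportion to their weights.

    `servers` is a list of (name, weight) pairs; a server with weight 2
    appears twice per cycle.
    """
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

# server3 has weight 2, so it receives twice as many requests per cycle
pool = weighted_round_robin([("server1", 1), ("server2", 1), ("server3", 2)])
first_cycle = [next(pool) for _ in range(4)]
# → ['server1', 'server2', 'server3', 'server3']
```

Production balancers such as NGINX use a "smooth" variant that interleaves heavy and light servers within a cycle instead of sending consecutive requests to the same server, but the proportional idea is the same.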
# HAProxy configuration for load balancing with health checks
global
    maxconn 4096
    log stdout format raw local0

defaults
    mode http
    timeout connect 5s
    timeout client 50s
    timeout server 50s
    option httplog

frontend api_frontend
    bind *:80
    default_backend api_servers

backend api_servers
    balance roundrobin
    
    # Health check: checking /health every 2 seconds
    option httpchk GET /health
    http-check expect status 200
    
    # Retry logic
    retries 3
    option redispatch
    
    # Servers with weights (server3 is twice as powerful)
    server server1 10.0.1.10:8080 check weight 1 maxconn 500
    server server2 10.0.1.11:8080 check weight 1 maxconn 500
    server server3 10.0.1.12:8080 check weight 2 maxconn 1000
    
    # Backup server (used only if the main ones are unavailable)
    server backup1 10.0.2.10:8080 check backup

Health checks are a critically important function for fault tolerance. Proxies regularly check the availability of each server (usually through the HTTP endpoint /health or /ready) and automatically exclude non-working servers from the load balancing pool. When a server recovers and starts responding to health checks, it automatically returns to the pool.
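The active health-check loop described above can be sketched with asyncio. The probe below is a stand-in for a real HTTP GET to /health, and the backend addresses and the "down" backend are simulated for illustration:

```python
import asyncio

# Backend addresses are illustrative; a real probe() would issue an HTTP GET
# to http://{backend}/health (e.g. with httpx or aiohttp)
BACKENDS = ["10.0.1.10:8080", "10.0.1.11:8080", "10.0.1.12:8080"]

async def probe(backend: str) -> bool:
    """Stand-in for a real health probe; here one backend is 'down'."""
    await asyncio.sleep(0)
    return backend != "10.0.1.12:8080"

async def health_check_loop(healthy: set, interval: float = 2.0, rounds: int = 3):
    """Poll all backends concurrently and update the shared healthy set."""
    for _ in range(rounds):  # a real checker would loop forever
        results = await asyncio.gather(*(probe(b) for b in BACKENDS))
        for backend, ok in zip(BACKENDS, results):
            # A recovered backend rejoins the pool on the next round
            (healthy.add if ok else healthy.discard)(backend)
        await asyncio.sleep(interval)

healthy_backends: set = set()
asyncio.run(health_check_loop(healthy_backends, interval=0.01))
print(sorted(healthy_backends))  # the simulated-down backend is excluded
```

The load balancer then selects only from `healthy_backends`, which is exactly what HAProxy's `check` option automates.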

Fault Tolerance Strategies via Proxy:

  • Active Health Checks — the proxy actively polls servers to check their availability
  • Passive Health Checks — the proxy monitors real requests and excludes servers when errors accumulate
  • Circuit Breaker — temporarily disabling a problematic service to prevent cascading failures
  • Graceful Degradation — switching to a simplified mode of operation or cached data during failures
  • Failover to Backup — automatic switching to backup servers or regions
// Python: implementing Circuit Breaker for proxy to external services
from datetime import datetime, timedelta
from enum import Enum

class CircuitState(Enum):
    CLOSED = "closed"      # Normal operation
    OPEN = "open"          # Service unavailable, requests are blocked
    HALF_OPEN = "half_open"  # Test mode after recovery

class CircuitBreaker:
    def __init__(self, failure_threshold=5, timeout=60, success_threshold=2):
        self.failure_threshold = failure_threshold  # Errors before opening
        self.timeout = timeout  # Seconds before recovery attempts
        self.success_threshold = success_threshold  # Successes to close
        
        self.state = CircuitState.CLOSED
        self.failures = 0
        self.successes = 0
        self.last_failure_time = None
    
    def call(self, func, *args, **kwargs):
        if self.state == CircuitState.OPEN:
            if datetime.now() - self.last_failure_time > timedelta(seconds=self.timeout):
                self.state = CircuitState.HALF_OPEN
                print("Circuit breaker: transitioning to HALF_OPEN")
            else:
                raise Exception("Circuit breaker OPEN: service unavailable")
        
        try:
            result = func(*args, **kwargs)
            self._on_success()
            return result
        except Exception as e:
            self._on_failure()
            raise e
    
    def _on_success(self):
        self.failures = 0
        if self.state == CircuitState.HALF_OPEN:
            self.successes += 1
            if self.successes >= self.success_threshold:
                self.state = CircuitState.CLOSED
                self.successes = 0
                print("Circuit breaker: recovery, transitioning to CLOSED")
    
    def _on_failure(self):
        self.failures += 1
        self.last_failure_time = datetime.now()
        # In HALF_OPEN, a single failure reopens the circuit immediately
        if self.state == CircuitState.HALF_OPEN or self.failures >= self.failure_threshold:
            self.state = CircuitState.OPEN
            self.successes = 0
            print(f"Circuit breaker OPEN: {self.failures} consecutive error(s)")

# Usage
breaker = CircuitBreaker(failure_threshold=3, timeout=30)

def call_external_service():
    # Your code for the request to the external API via proxy
    pass

try:
    result = breaker.call(call_external_service)
except Exception as e:
    # Fallback logic: cache, default values, etc.
    print(f"Service unavailable: {e}")

Security of Communication Between Services

In microservice architecture, proxy servers provide several levels of security: traffic encryption, service authentication, protection against attacks, and isolation of network segments. Without proper security configuration, internal traffic between services can be intercepted or spoofed.

Key Security Aspects via Proxy:

  • Mutual TLS (mTLS) — two-way authentication, where both client and server verify each other's certificates. Service Mesh automatically configures mTLS between all services
  • TLS Termination — the proxy decrypts HTTPS traffic at the boundary, verifies it, and forwards it to services over a secure channel
  • JWT Validation — verifying access tokens at the proxy level, before the request reaches the service
  • IP Whitelisting — restricting access to services only from allowed IP addresses
  • DDoS Protection — rate limiting, connection limits, protection against SYN flood at the proxy level
  • WAF (Web Application Firewall) — filtering malicious requests, protection against SQL injection, XSS
# NGINX configuration with SSL/TLS and security
# Limit zones must be declared in the http context, outside the server block
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    listen 443 ssl http2;
    server_name api.internal.example.com;

    # SSL certificates
    ssl_certificate /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;
    
    # Modern protocols and ciphers
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    # Client certificate for mTLS
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;
    
    # Secure headers
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    
    # Rate limiting (zone declared above)
    limit_req zone=api burst=200 nodelay;
    
    # Connection limiting (zone declared above)
    limit_conn addr 10;
    
    # IP whitelisting
    allow 10.0.0.0/8;      # Internal network
    allow 172.16.0.0/12;   # VPC
    deny all;
    
    location / {
        # JWT token verification (auth_jwt is available in NGINX Plus)
        auth_jwt "Restricted API";
        auth_jwt_key_file /etc/nginx/jwt_key.json;
        
        proxy_pass http://backend_service;
        
        # Passing client certificate information
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-Client-Verify $ssl_client_verify;
    }
}

Service Mesh significantly simplifies security configuration by automatically generating and rotating certificates for mTLS, applying access policies, and encrypting all traffic between services. For example, in Istio, you can specify a policy that the "payment" service can only accept requests from the "order" service, and this will be automatically enforced at the proxy level without changing the service code.
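For instance, the payment/order restriction mentioned above could be expressed as an Istio AuthorizationPolicy roughly like this; the namespace and service-account names are assumptions for illustration:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payment-allow-order-only
  namespace: prod            # illustrative namespace
spec:
  selector:
    matchLabels:
      app: payment
  action: ALLOW
  rules:
  - from:
    - source:
        # Only workloads under the "order" service account may call "payment"
        principals: ["cluster.local/ns/prod/sa/order"]
```

Because the sidecar proxies enforce this via mTLS identities, the restriction holds even if an attacker obtains network access to the payment pod's address.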

Important for production: Always use mTLS for internal communication between services, even if they are on the same private network. This protects against man-in-the-middle attacks and ensures authentication at the service level, not just at the network level.

Monitoring and Logging Proxy Traffic

Proxy servers provide a unique opportunity for centralized monitoring of all traffic in a microservice system. Since all traffic passes through the proxy (both external via API Gateway and internal via Service Mesh), you gain complete visibility into the system's operation without the need to instrument each service.

Key Metrics for Monitoring at the Proxy Level:

  • Latency — processing time of the request at each stage: proxy, service, external APIs
  • Throughput — number of requests per second, volume of data transmitted
  • Error Rate — percentage of errors (4xx, 5xx), types of errors, problematic endpoints
  • Connection Metrics — number of active connections, connection pool usage
  • Circuit Breaker State — state of circuit breakers for each service
  • SSL/TLS Metrics — status of certificates, protocol versions, handshake errors
# NGINX stub_status endpoint, scraped by nginx-prometheus-exporter for Prometheus
server {
    listen 9113;
    location /metrics {
        stub_status;
        access_log off;
        allow 10.0.0.0/8;  # Only for Prometheus server
        deny all;
    }
}

# Logging in JSON format for structured logging
log_format json_combined escape=json
  '{'
    '"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status": "$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_time":"$request_time",'
    '"upstream_response_time":"$upstream_response_time",'
    '"upstream_addr":"$upstream_addr",'
    '"http_referrer":"$http_referer",'
    '"http_user_agent":"$http_user_agent",'
    '"http_x_forwarded_for":"$http_x_forwarded_for"'
  '}';

access_log /var/log/nginx/access.log json_combined;

Distributed Tracing is one of the most powerful monitoring capabilities via proxies. Each request receives a unique trace ID, which the proxy adds to the headers and passes along the chain of services. Tracing systems (Jaeger, Zipkin) collect information from all proxies and build a complete path of the request through the system, showing how much time it spent in each service.

// Node.js: adding tracing to proxy middleware
const express = require('express');
const { v4: uuidv4 } = require('uuid');
const axios = require('axios');

const app = express();
app.use(express.json()); // Parse JSON bodies so req.body can be forwarded

// Middleware for adding trace ID
app.use((req, res, next) => {
  // Get trace ID from header or create a new one
  const traceId = req.headers['x-trace-id'] || uuidv4();
  const spanId = uuidv4();
  
  // Add to headers for passing further
  req.traceId = traceId;
  req.spanId = spanId;
  res.setHeader('x-trace-id', traceId);
  
  // Log the start of processing
  const startTime = Date.now();
  
  res.on('finish', () => {
    const duration = Date.now() - startTime;
    
    // Structured log for analysis
    console.log(JSON.stringify({
      timestamp: new Date().toISOString(),
      traceId: traceId,
      spanId: spanId,
      method: req.method,
      path: req.path,
      status: res.statusCode,
      duration: duration,
      userAgent: req.headers['user-agent'],
      ip: req.ip
    }));
  });
  
  next();
});

// Proxy endpoint with passing tracing headers
app.all('/api/*', async (req, res) => {
  const targetService = determineTargetService(req.path);
  
  try {
    const response = await axios({
      method: req.method,
      url: `http://${targetService}${req.path}`,
      data: req.body,
      headers: {
        ...req.headers,
        'x-trace-id': req.traceId,
        'x-parent-span-id': req.spanId,
        'x-span-id': uuidv4()  // New span for downstream request
      }
    });
    
    res.status(response.status).json(response.data);
  } catch (error) {
    console.error(JSON.stringify({
      traceId: req.traceId,
      error: error.message,
      service: targetService
    }));
    res.status(500).json({ error: 'Service unavailable' });
  }
});

function determineTargetService(path) {
  if (path.startsWith('/api/users')) return 'user-service:8080';
  if (path.startsWith('/api/orders')) return 'order-service:8080';
  return 'default-service:8080';
}

app.listen(3000);

Alerting based on proxy metrics allows for quick detection of issues. For example, alerts can be set up for: sudden increases in latency (possibly one of the services is degrading), an increase in error rate above a threshold (issues with code or dependencies), changes in traffic patterns (possible DDoS attack or viral load).
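The alerts described above map directly onto Prometheus alerting rules. The sketch below uses Istio's standard metric names (`istio_requests_total`, `istio_request_duration_milliseconds_bucket`); thresholds and labels are illustrative and will differ per setup:

```yaml
groups:
- name: proxy-alerts
  rules:
  - alert: HighErrorRate
    expr: |
      sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
        / sum(rate(istio_requests_total[5m])) > 0.05
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "More than 5% of requests are failing"
  - alert: HighLatencyP99
    expr: |
      histogram_quantile(0.99,
        sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le)) > 1000
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "p99 latency above 1 second"
```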

Implementation Examples in Python and Node.js

Let's consider practical examples of integrating proxies into microservices in Python and Node.js for different scenarios: internal communication, working with external APIs, load balancing.

Python: Service with Proxy for External APIs

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import httpx
import time
from typing import List
import logging

app = FastAPI()
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class ProxyConfig(BaseModel):
    url: str
    max_requests_per_minute: int = 60

class ProxyPool:
    def __init__(self, proxies: List[ProxyConfig]):
        self.proxies = proxies
        self.current_index = 0
        self.request_counts = {p.url: 0 for p in proxies}
        # time.monotonic() is safe at import time; asyncio.get_event_loop()
        # outside a running loop is deprecated and fails on Python 3.12+
        self.reset_time = time.monotonic() + 60
    
    async def get_next_proxy(self) -> str:
        # Reset counters every minute
        current_time = time.monotonic()
        if current_time >= self.reset_time:
            self.request_counts = {p.url: 0 for p in self.proxies}
            self.reset_time = current_time + 60
        
        # Find a proxy with available requests
        for _ in range(len(self.proxies)):
            proxy = self.proxies[self.current_index]
            self.current_index = (self.current_index + 1) % len(self.proxies)
            
            if self.request_counts[proxy.url] < proxy.max_requests_per_minute:
                self.request_counts[proxy.url] += 1
                return proxy.url
        
        # All proxies have exhausted their limit
        raise HTTPException(status_code=429, detail="All proxies rate limited")

# Initializing the proxy pool
proxy_pool = ProxyPool([
    ProxyConfig(url="http://user:pass@proxy1.example.com:8080", max_requests_per_minute=100),
    ProxyConfig(url="http://user:pass@proxy2.example.com:8080", max_requests_per_minute=100),
    ProxyConfig(url="http://user:pass@proxy3.example.com:8080", max_requests_per_minute=100)
])

class ExternalAPIClient:
    def __init__(self, proxy_pool: ProxyPool):
        self.proxy_pool = proxy_pool
    
    async def fetch_data(self, endpoint: str, params: dict = None) -> dict:
        proxy_url = await self.proxy_pool.get_next_proxy()
        
        # httpx >= 0.26 uses proxy=...; older versions use proxies={"http://": ..., "https://": ...}
        async with httpx.AsyncClient(proxy=proxy_url) as client:
            try:
                response = await client.get(
                    endpoint,
                    params=params,
                    timeout=10.0,
                    headers={"User-Agent": "MyMicroservice/1.0"}
                )
                response.raise_for_status()
                
                logger.info(f"Successfully fetched from {endpoint} via {proxy_url}")
                return response.json()
            
            except httpx.HTTPStatusError as e:
                logger.error(f"HTTP error {e.response.status_code} from {endpoint}")
                raise HTTPException(status_code=e.response.status_code, detail=str(e))
            
            except httpx.RequestError as e:
                logger.error(f"Request error to {endpoint}: {e}")
                raise HTTPException(status_code=503, detail="External API unavailable")

api_client = ExternalAPIClient(proxy_pool)

@app.get("/data/{resource_id}")
async def get_external_data(resource_id: str):
    """Endpoint that fetches data from external API via proxy"""
    external_endpoint = f"https://api.external-service.com/v1/resources/{resource_id}"
    
    try:
        data = await api_client.fetch_data(external_endpoint)
        return {"status": "success", "data": data}
    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Unexpected error: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@app.get("/health")
async def health_check():
    return {"status": "healthy", "service": "external-api-proxy"}

# Run: uvicorn main:app --host 0.0.0.0 --port 8000

Node.js: API Gateway with Load Balancing

const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

// Microservices configuration
const services = {
  users: [
    { url: 'http://user-service-1:8001', healthy: true, activeConnections: 0 },
    { url: 'http://user-service-2:8001', healthy: true, activeConnections: 0 },
    // Add more services as needed
  ],
  orders: [
    { url: 'http://order-service-1:8002', healthy: true, activeConnections: 0 },
    { url: 'http://order-service-2:8002', healthy: true, activeConnections: 0 },
    // Add more services as needed
  ],
};

// Load balancing logic: round robin over healthy instances
const rrIndex = {};
const getService = (serviceName) => {
  const healthyServices = services[serviceName].filter(s => s.healthy);
  if (healthyServices.length === 0) {
    throw new Error('No healthy services available');
  }
  rrIndex[serviceName] = ((rrIndex[serviceName] || 0) + 1) % healthyServices.length;
  const service = healthyServices[rrIndex[serviceName]];
  service.activeConnections++;
  return service;
};

app.all('/api/users/*', async (req, res) => {
  let service;
  try {
    service = getService('users');
  } catch (err) {
    return res.status(503).json({ error: 'No healthy services available' });
  }
  try {
    const response = await axios({
      method: req.method,
      url: `${service.url}${req.path}`,
      data: req.body,
      headers: req.headers,
    });
    res.status(response.status).json(response.data);
  } catch (error) {
    res.status(502).json({ error: 'Service unavailable' });
  } finally {
    service.activeConnections--;
  }
});

// Similar route for orders and other services...

app.listen(3000, () => {
  console.log('API Gateway running on port 3000');
});