Why Use Proxies for API Access When API Gateways Exist?


The proliferation of microservices and distributed systems has led to an increased focus on how to efficiently and securely route traffic within an organization’s IT infrastructure. Two common solutions have emerged to streamline API consumption: API proxies and API gateways.

  • API gateways typically handle concerns like user authentication, rate limiting, request routing among multiple services, and advanced orchestration.
  • API proxies often focus on simpler tasks such as request/response transformation, caching, and masking backend service URLs.

At first glance, one might assume that if you already have an API gateway, there is no need for a standalone API proxy. However, there are scenarios where having a separate proxy layer can yield significant benefits in terms of performance, security, and flexibility. This guide will explore those scenarios in detail, clarify the architecture of proxies and gateways, and provide practical examples of how to set them up.

What Is an API Proxy?

An API proxy is a lightweight intermediary that sits between your clients (e.g., web or mobile applications) and your backend services. Its main role is to forward incoming requests to the appropriate backend, optionally transforming or filtering these requests along the way. Because of its streamlined responsibilities, an API proxy often adds minimal overhead to the request flow.

Common Use Cases

  1. Request and Response Transformation
    Proxies can modify headers or payloads in-flight. For instance, if your internal service expects a different header name or a proprietary format, you can transform the request through the proxy without forcing your client to change.
  2. Security Enforcement
    Although not as feature-rich as a gateway’s security features, an API proxy can still perform basic rate limiting, block known malicious IPs, and forward requests only if they meet certain simple criteria.
  3. Logging and Monitoring
    Proxies commonly log requests and forward those logs to a central system for auditing and analytics.
  4. URL Masking
    By routing traffic through a single domain (e.g., api.company.com) you avoid exposing internal domain names (e.g., internal.services.company.com).

Example Scenario

Imagine a situation where you want to expose an internal API located at internal.example.com.

You do not wish to reveal this internal domain to public users or external partners, so you set up a proxy at proxy.example.com using a tool or service such as FineProxy.org. The proxy intercepts incoming requests and forwards them to internal.example.com, acting as a protective, masking layer.

What Is an API Gateway?

An API gateway is a more comprehensive solution that encompasses a variety of API management functions. Gateways handle:

  • Traffic Management: Intelligent routing to different microservices.
  • Authentication and Authorization: OAuth2, JWT, API keys, custom tokens, etc.
  • Rate Limiting and Quotas: Preventing service overload by controlling request frequency.
  • Caching and CDN Integration: Improving response times for popular endpoints.
  • Analytics and Monitoring: Offering dashboards to visualize API usage.
  • Sometimes Orchestration: Composing multiple microservice calls into a single response (though not all gateways handle orchestration).

Common Use Cases

  1. Centralized API Management
    Organizations with dozens or hundreds of microservices can centralize configuration and policies.
  2. Security and Access Control
    A gateway can enforce advanced security protocols, multi-factor authentication, or custom SSO solutions.
  3. Microservices Architecture Support
    Gateways thrive in microservices environments by routing each API call to the correct backend microservice based on paths or domains.

Example Scenario

Consider an e-commerce platform:

  • Orders Service at orders.example.com
  • Payments Service at payments.example.com
  • Inventory Service at inventory.example.com

A single gateway endpoint (e.g., api.example.com) can route /orders requests to the Orders Service, /payments requests to the Payments Service, and so forth, simplifying the client’s integration process.
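To make the routing concrete, the same path-based dispatch a gateway performs can be sketched with plain Nginx `location` blocks (a simplified illustration, reusing the hostnames from the example above; a real gateway would add auth, rate limiting, and analytics on top):

```nginx
# Path-based routing behind a single public endpoint.
server {
    listen 80;
    server_name api.example.com;

    location /orders/ {
        proxy_pass http://orders.example.com/;
    }
    location /payments/ {
        proxy_pass http://payments.example.com/;
    }
    location /inventory/ {
        proxy_pass http://inventory.example.com/;
    }
}
```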

Why Use an API Proxy When an API Gateway Exists?

If an API gateway is already providing traffic management and security features, one might ask: What added value does a proxy bring? There are key scenarios where a proxy can complement or even improve the existing gateway setup.

Performance Optimization

Proxies can reduce the load on gateways by handling certain tasks before requests reach the gateway:

  • Caching of frequently accessed data.
  • Compression of large payloads.
  • Minimizing overhead if only lightweight transformations are needed.

Example: A company dealing with large JSON payloads might compress responses at the proxy layer, significantly reducing bandwidth usage. The API gateway then deals with requests that have smaller footprints, lowering resource consumption.
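A minimal sketch of that compression setup, assuming Nginx as the proxy and a hypothetical internal gateway hostname:

```nginx
# Compress large JSON responses at the proxy before they reach clients.
server {
    listen 80;
    server_name api.example.com;

    gzip on;
    gzip_types application/json;   # compress JSON payloads
    gzip_min_length 1024;          # skip responses under 1 KB
    gzip_comp_level 5;             # balance CPU cost vs. compression ratio

    location / {
        proxy_pass http://gateway.internal.example.com;
    }
}
```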

Security and Anonymization

Even though gateways typically have robust security features, a separate proxy layer can serve as an extra shield:

  • Blocking known malicious IPs using shared blacklists.
  • Filtering out suspicious requests based on specific patterns.
  • Masking the gateway URL so that only the proxy endpoint is public.

Example: A fintech company handles sensitive financial data. They deploy a proxy in a DMZ (demilitarized zone) that performs IP filtering and basic traffic inspection. Only validated requests are forwarded to the internal API gateway, thereby reducing the attack surface.

Traffic Splitting and Load Balancing

Proxies can function as a flexible traffic manager:

  • A/B Testing: Sending a small percentage of traffic to a new backend version.
  • Blue-Green Deployments: Switching traffic between two mirrored environments with zero downtime.

Example: A SaaS company wants to beta-test a new API version without exposing it to all users. The proxy directs 20% of incoming requests to the new version, with the gateway still receiving requests for the stable version.
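With Nginx as the proxy, that 80/20 split can be sketched with the `split_clients` module, which hashes a per-client key into weighted buckets (the upstream hostnames here are illustrative):

```nginx
# Define the two backend pools (hypothetical internal hostnames).
upstream stable_api {
    server stable.internal.example.com;
}
upstream beta_api {
    server beta.internal.example.com;
}

# Hash client IP + User-Agent into buckets: ~20% beta, the rest stable.
split_clients "${remote_addr}${http_user_agent}" $api_backend {
    20%     beta_api;
    *       stable_api;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://$api_backend;
    }
}
```

Because the key is derived from the client, each user is consistently routed to the same version rather than flipping between them on every request.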

Geo-Distributed Access

When user bases are global, you can place proxies in multiple regions to reduce latency:

  • Regional Proxy Nodes: Users connect to their nearest proxy.
  • Edge Caching: Frequently accessed data is cached closer to end-users.

Example: A streaming platform might deploy proxies in North America, Europe, and Asia. Requests are forwarded to the nearest regional proxy, which then communicates with the central API gateway, improving response times.

Vendor Lock-In Mitigation

An additional proxy layer offers flexibility if you decide to switch or upgrade your API gateway technology:

  • Unified Entry Point: The proxy endpoint remains the same even if the backend gateway changes.
  • Minimal Disruption: Only the proxy configuration might need slight adjustments to route to a new gateway.

Example: A company might move from AWS API Gateway to Kong for cost or feature reasons. Rather than changing client integrations, they simply update the proxy’s backend target to point to the new gateway.
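In practice the swap can be as small as changing one upstream definition in the proxy configuration; a sketch with illustrative hostnames:

```nginx
# The public proxy endpoint stays fixed; only this upstream changes
# when migrating gateways.
upstream api_gateway {
    # Before migration (managed gateway endpoint):
    # server legacy-gateway.example.net:443;
    # After migration to a self-hosted Kong instance:
    server kong.internal.example.com:8000;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://api_gateway;
    }
}
```

Client integrations keep pointing at api.example.com throughout the migration.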

When to Choose an API Proxy vs. an API Gateway

Below is a simplified comparison of major features and use cases. Keep in mind that many proxies can be extended to emulate some gateway features, and vice versa.

| Feature | API Proxy | API Gateway |
| --- | --- | --- |
| Traffic Management | Limited | Advanced (routing, rate limiting) |
| Security | Basic (masking, filtering) | Strong (JWT/OAuth2, DDoS protection) |
| Performance Optimization | High (caching, compression) | Medium (built-in caching) |
| Microservices Support | Limited | Excellent |
| Use Case | Lightweight transformations, security masking | Centralized API management, orchestration, advanced security |

How API Proxies and API Gateways Work Internally

Internal Architecture of an API Proxy

  • Listener: Listens on a specific port (e.g., 80, 443) for incoming requests.
  • Routing/Forwarding: Directs the request to a configured backend.
  • Optional Transformations: Modifies headers or payload if configured.
  • Response Handling: Passes the response back to the client, sometimes altering headers or status codes.

Because the feature set is minimal, proxies like Nginx or HAProxy are often used in this role.

Internal Architecture of an API Gateway

  • Policy Engine: Applies rules on authentication, rate limiting, and route selection.
  • Plugin or Extension Layer: Offers transformations, logging, or custom scripts.
  • Service Discovery: Dynamically routes requests to available service instances (useful in microservices).
  • Analytics: Captures metrics on request volume, latency, and errors.

API gateways generally have a more robust and modular architecture, allowing you to plug in new policies or analytics tools.

Reverse Proxies vs. API Proxies

A reverse proxy is a server (e.g., Nginx, HAProxy) that typically sits in front of one or more web servers, caching or load-balancing requests. An API proxy, on the other hand, is specialized for API requests (often HTTP/HTTPS calls returning JSON or XML). While you can use a reverse proxy as an API proxy, the latter may have API-specific features like request method transformations or payload rewriting rules.

Example Request Flows

API Proxy

  1. User → proxy.example.com/api/v1/users
  2. Proxy rewrites/forwards → backend.example.com/users
  3. Backend Service responds → proxy.example.com transforms or strips headers
  4. Proxy returns final response to User

API Gateway

  1. User → api.example.com/orders
  2. Gateway checks authentication, rate limits, routes request → orders-service
  3. Orders Service returns data → Gateway
  4. Gateway (optionally aggregates or transforms) returns data to User

Performance Considerations

Latency Overhead

  • Proxy: Typically introduces minimal latency, often in the range of milliseconds.
  • Gateway: Additional processing layers (auth checks, rate limiting) can add more overhead.

Using both can amplify latency if not configured carefully. It’s crucial to measure the performance impact of adding another hop in the request lifecycle.

Caching Mechanisms

  • API Proxy: Commonly relies on single-node caching, such as Nginx’s built-in proxy cache or a colocated Varnish instance.
  • API Gateway: May employ distributed caching through Redis, or rely on CDNs.

Deciding where to cache depends on how often data changes, the number of user regions, and how advanced your caching requirements are.

Rate Limiting Approaches

  • API Gateways: Typically implement more sophisticated algorithms such as token bucket or leaky bucket with user-level granularity.
  • API Proxies: Might rely on simpler approaches (e.g., Nginx limit_req).

In a layered architecture, you can do a broad traffic filter at the proxy and more fine-grained control at the gateway.
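The broad proxy-layer filter mentioned above can be sketched with Nginx’s `limit_req`, which implements a leaky-bucket limit keyed on client IP (the gateway hostname is illustrative):

```nginx
# Broad, IP-based rate limit at the proxy layer:
# 10 requests/second per client IP, with short bursts absorbed.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    server_name api.example.com;

    location / {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://gateway.internal.example.com;
    }
}
```

User-level quotas (per API key or per token) remain the gateway’s job, since the proxy only sees the client IP.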

API Proxy and API Gateway Implementation Examples

API Proxy Setup with Nginx

Nginx is a popular choice for setting up a simple API proxy due to its performance and extensive configuration options.

Basic Nginx Configuration

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://backend.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Optional caching configuration
        # proxy_cache my_cache;
        # proxy_cache_valid 200 10m;
        # proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    }
}

Explanation:

  • listen 80; tells Nginx to listen on port 80.
  • server_name api.example.com; sets the domain to match requests for api.example.com.
  • location / captures all requests to api.example.com/.
  • proxy_pass http://backend.example.com; forwards requests to the backend service.
  • proxy_set_header directives ensure the correct Host header is passed and the client’s real IP is logged on the backend.

You can enable optional caching within the same block or within a higher-level HTTP block. This setup masks the backend’s URL and can also be enhanced to include security headers, rate limiting, or basic authentication.
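Enabling the commented caching directives requires declaring the cache zone at the `http` level first; a minimal declaration matching the `my_cache` name used above might look like:

```nginx
# Goes in the http block. Defines a 10 MB key zone named "my_cache",
# capping the on-disk cache at 1 GB and evicting entries idle for 60 minutes.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;
```

The cache path and size limits here are illustrative; tune them to your data volume and how quickly cached responses go stale.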

API Gateway with Kong

Kong is an open-source API gateway known for its plugin architecture, allowing you to add features such as authentication, rate limiting, and transformations without modifying your underlying services.

  1. Install Kong: You can run Kong in Docker or install it on a Linux-based OS.
  2. Configure a Service: This maps to your actual backend service.

curl -i -X POST http://localhost:8001/services/ \
  --data "name=users-api" \
  --data "url=http://backend.example.com/users"

  3. Configure a Route: This tells Kong how to route incoming requests on a specific path to the users-api service.

curl -i -X POST http://localhost:8001/routes/ \
  --data "paths[]=/users" \
  --data "service.name=users-api"

  4. Add Plugins (Optional): For instance, you can enable rate limiting or JWT authentication.

curl -i -X POST http://localhost:8001/services/users-api/plugins \
  --data "name=jwt"

Explanation:

  • A “service” in Kong parlance corresponds to an upstream API or microservice.
  • A “route” specifies how incoming requests (e.g., api.example.com/users) are matched and which “service” they direct to.
  • Plugins (e.g., JWT, rate limiting) can be applied either globally, per service, or per route.

Additional Notes on Cloud Provider Gateways

Many organizations use managed gateways like AWS API Gateway, Azure API Management, or Google Cloud Endpoints. These services offer serverless scaling, integrated security, and pay-per-usage models. When used in conjunction with a proxy layer (often a CDN or an edge proxy service like Cloudflare), you can achieve both edge-level performance optimizations and robust backend management.

API Security: Proxy vs. Gateway

Authentication and Authorization

  • API Gateway: A gateway often includes direct support for OAuth2, JWT, or integration with an Identity Provider (IdP).
  • API Proxy: Limited or basic auth mechanisms, relying more on IP whitelisting or pass-through authentication.

In a layered setup, you might do coarse-grained filtering at the proxy layer (e.g., block all traffic except from certain IP ranges) and more fine-grained policy enforcement at the gateway (e.g., validate JWT tokens).

DDoS Protection

  • Gateways: Many gateways (e.g., Kong, Tyk, Apigee) can integrate with a WAF (Web Application Firewall) or advanced DDoS protection services.
  • Proxies: Tools like Nginx or HAProxy can perform rate limiting, connection limiting, or integration with external threat intelligence.

TLS Termination

Terminating TLS (HTTPS) can happen at multiple layers:

  1. At the Proxy: Offloads decryption overhead from the gateway.
  2. At the Gateway: Centralizes certificate management if the gateway is the single entry point.
  3. Terminate and Re-Encrypt: The proxy terminates client TLS, then re-encrypts traffic toward the gateway so that every hop stays encrypted, adding a layer of defense in depth.
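In Nginx, the terminate-and-re-encrypt pattern might look like the following sketch (certificate paths and the internal hostname are illustrative):

```nginx
# Terminate client TLS at the proxy, then re-encrypt toward the gateway.
server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/certs/api.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/api.example.com.key;

    location / {
        proxy_pass https://gateway.internal.example.com;  # re-encrypted hop
        proxy_ssl_verify on;                              # validate the gateway's cert
        proxy_ssl_trusted_certificate /etc/nginx/certs/internal-ca.crt;
    }
}
```

Verifying the upstream certificate against an internal CA prevents the proxy from being tricked into forwarding traffic to an impostor gateway.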

Multi-Layer API Architecture Combining Proxy and Gateway

Often, the ideal solution isn’t an either/or scenario. Organizations may combine an API proxy with an API gateway to balance performance, security, and operational flexibility.

Example Hybrid Setup

Client → API Proxy (Cloudflare or Nginx) → API Gateway (Kong) → Backend Service

  • API Proxy (Cloudflare/Nginx):
    • Edge caching
    • Rate limiting (simplistic or IP-based)
    • Bot filtering or blacklisting
  • API Gateway (Kong):
    • Auth token verification
    • Advanced rate limiting & quota management
    • Traffic routing to multiple microservices

This layered approach is particularly common in enterprise environments. It allows you to keep the gateway’s configuration stable while dynamically changing proxy rules for performance or security patches.

Monitoring and Observability

Logging and Analytics

  • API Gateway: Typically integrates with solutions like Prometheus, Grafana, or Elastic Stack to provide real-time dashboards.
  • API Proxy: Logs at the network level (request/response times, HTTP status codes). Tools like Cloudflare or Nginx can forward logs to a centralized system.

Best Practice: Aggregate logs from both layers (proxy and gateway) so you can have a complete view of the request lifecycle. This helps in root cause analysis and performance tuning.

Tracing with OpenTelemetry

When debugging microservices, distributed tracing is invaluable:

  • API Gateway: Initiates or propagates a trace ID to downstream services.
  • API Proxy: Forwards or adds trace headers (e.g., X-Request-ID, X-B3-TraceId).

Tools like Jaeger or Zipkin can visualize how requests flow through proxies, gateways, and microservices, showing where latency accumulates.
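At the proxy layer, header forwarding can be sketched in Nginx using the built-in `$request_id` variable, which generates a unique ID per request (the gateway hostname is illustrative):

```nginx
# Attach a request ID at the proxy so both layers log the same trace.
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_set_header X-Request-ID $request_id;          # nginx-generated unique ID
        proxy_set_header X-B3-TraceId $http_x_b3_traceid;   # pass through client B3 trace header
        proxy_pass http://gateway.internal.example.com;
    }
}
```

Logging `$request_id` in the proxy’s access log and at the gateway lets you correlate a single request across both layers.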

Error Handling

  • API Proxy: Often returns generic HTTP error codes like 502 Bad Gateway if the backend is unreachable.
  • API Gateway: Can provide more detailed error responses, including custom error bodies or user-friendly messages.

Tip: Handle sensitive errors (like stack traces) carefully. In production, your proxy or gateway should sanitize or replace internal error messages with generic responses to avoid leaking implementation details.

Real-World Use Cases

Here are some concrete examples of how companies employ both an API proxy and an API gateway.

  1. High-Traffic E-Commerce Platform:
    • Proxy sits at the edge for caching and quick traffic filtering.
    • Gateway handles routing to different microservices (catalog, shopping cart, user profiles).
  2. Global SaaS Provider:
    • Proxy nodes deployed in multiple geographic regions to reduce latency for local users.
    • Gateway in a centralized data center to ensure consistent authentication policies and rate limiting.
  3. Fintech with Strict Compliance:
    • Proxy in a DMZ environment for IP blacklisting, logging, and compliance checks.
    • Gateway behind the firewall for advanced user-level authentication and internal service orchestration.

In each of these scenarios, layering a proxy in front of a gateway allows for performance boosts, improved security posture, and modular architectural design.
