
Envoy Gateway: How I Stopped Worrying About Annotations and Learned to Love Gateway API

Константин Потапов
25 min

The story of how 47 lines of Nginx Ingress YAML magic turned into 15 lines of clear configuration. Envoy Gateway isn't just a new tool—it's freedom from pain.

The Story: How I Wasted 6 Hours on a Single Annotation

It was summer 2024. Friday, 6:30 PM. My production canary deployment was broken.

The task seemed simple: route 10% of traffic to the new API version, the rest to the old one. Standard pre-rollout testing practice.

Reality was brutal.

I was using Nginx Ingress. Found the documentation section about canary deployments via annotations. Copy-pasted the example:

# ❌ What I thought would work
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
    - host: api.company.com
      http:
        paths:
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 8080

Applied it. Checked it. 100% of traffic going to the new version.

"Strange," I thought, "probably a typo in weight." Changed to 90. Restarted. Still 100% to v2.

The next 4 hours I:

  • Re-read the documentation 7 times
  • Googled "nginx ingress canary not working" (53 results, none helped)
  • Checked Nginx Ingress Controller version (turned out I needed 0.22+, had 0.21)
  • Updated the controller (broke 2 more Ingress resources in the process)
  • Discovered that canary needs a separate Ingress resource with the same host but different name
  • Realized I specified path matching incorrectly (needed regex, not prefix)

At 11 PM I finally made it work. 47 lines of YAML + 12 annotations. I didn't understand half of it.

Monday morning: "Hey, can we also add JWT authorization to that endpoint?"

I knew I was screwed.


The Triumph: 15 Lines Instead of 47, and Everything Makes Sense

A month later I learned about Envoy Gateway. Was skeptical: "Yet another tool to learn."

But decided to try it on a dev cluster.

Same canary deployment on Envoy Gateway:

# ✅ Works on first try
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - api.company.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v2
      backendRefs:
        - name: api-v1 # 90% traffic
          port: 8080
          weight: 90
        - name: api-v2 # 10% traffic (canary)
          port: 8080
          weight: 10

15 lines. Zero annotations. Worked on first try.

I stared at this YAML in disbelief. It was too simple.

But the real magic happened next.

A week later, Product Owner: "Add JWT authorization."

With Nginx Ingress, this would mean: 5 more annotations, external auth service, CORS setup, debugging why tokens don't pass.

With Envoy Gateway:

# Added 12 lines — JWT works
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-policy
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route
  jwt:
    providers:
      - name: auth0
        issuer: https://company.auth0.com/
        audiences:
          - api://myapp
        remoteJWKS:
          uri: https://company.auth0.com/.well-known/jwks.json

Applied. Worked. First try.

No magic strings in annotations. No googling "how to properly escape URLs in YAML strings." Everything typed, everything validated, everything clear.

At that moment I realized: I'm never going back to Nginx Ingress annotations.


What is Envoy Gateway (No Fluff)

Envoy Gateway is a Kubernetes-native API Gateway built on three pillars:

1. Envoy Proxy — The Engine That Doesn't Suck

This is a high-performance proxy from Lyft, used by Google, Uber, Netflix. The same Envoy that powers Istio.

What it does out of the box:

  • HTTP/1.1, HTTP/2, HTTP/3, gRPC, WebSocket
  • Load balancing with smart algorithms (consistent hashing, least request)
  • Circuit breaking, retry, timeout — no plugins
  • Rate limiting, JWT validation — native
  • Observability — metrics to Prometheus, traces to Jaeger, all automatic

Analogy: Nginx is like a Toyota Corolla. Reliable, proven, but basic. Envoy is like a Tesla Model S. Modern, packed with tech, but harder to master. Envoy Gateway is Tesla with autopilot. All the power of Envoy, but simple control.

2. Gateway API — Standard Instead of Chaos

Imagine: Kubernetes released a new official API for managing ingress traffic. Not Ingress (that API is over seven years old and feature-frozen). Gateway API is Ingress 2.0.

Key difference:

Ingress (old)                         Gateway API (new)
String annotations (no validation)    Typed fields (validation on the fly)
Each controller — own syntax          Single standard for all
Migration = rewriting annotations     Migration = changing gatewayClassName
Advanced features = hacks             Advanced features = native CRDs

In simple terms: Ingress is like sending letters with instructions to a courier. "If you see a house with a red roof, deliver the package to John." Gateway API is an API with a clear contract. { "recipient": "John", "address": { "color": "red" } }. Validated, won't break.

3. Control Plane — Orchestra Instead of Chaos

This is the brain of the system. You write declarative configuration (Gateway, HTTPRoute). Control Plane:

  • Creates and manages Envoy Proxy pods
  • Translates your manifests into Envoy configuration (xDS protocol)
  • Updates configuration without proxy restart (hot reload)
  • Integrates with cert-manager, external-dns, Prometheus

You describe "what you want." It does "how to achieve it."


Three Cases When Envoy Gateway Will Save Your Ass

Case 1: Canary Deployment (Told Above, But Worth Repeating)

Without Envoy Gateway:

  • 47 lines of YAML
  • 12 annotations
  • 6 hours of debugging
  • Documentation contradicts reality
  • One character in the wrong place and everything breaks

With Envoy Gateway:

  • 15 lines of YAML
  • 0 annotations
  • Works first try
  • If you make a mistake — Kubernetes API gives error before applying
                 Nginx Ingress       Envoy Gateway    Reduction
Lines of YAML    47 lines            15 lines         68%
Annotations      12 magic strings    0                100%
Setup time       6 hours + Google    10 minutes       ~97%

Case 2: JWT + Rate Limiting in 5 Minutes

Task: API must validate JWT tokens and limit 100 requests per minute per user.

Nginx Ingress: Wire JWT through an external auth service via auth-url annotations (there is no native JWT validation). For rate limiting — deploy Redis, configure a Lua script, pray it works.

Envoy Gateway: Two manifests. Everything out of the box.

# JWT — 12 lines
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-policy
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route
  jwt:
    providers:
      - name: auth0
        issuer: https://mycompany.auth0.com/
        audiences:
          - api://myapp
        remoteJWKS:
          uri: https://mycompany.auth0.com/.well-known/jwks.json
 
---
# Rate Limiting — 15 lines
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: rate-limit
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route
  rateLimit:
    type: Local
    local:
      rules:
        - clientSelectors:
            - headers:
                - name: x-user-id
                  type: Distinct
          limit:
            requests: 100
            unit: Minute

Applied. Worked.

Envoy Gateway:

  • Validates JWT on every request
  • Caches public keys (JWKS) automatically
  • Limits requests in memory (no Redis!)
  • Exports metrics (how many rejected, how many passed)

And all this without a single line of code on your side.

Case 3: gRPC with Metrics and Health Checks

Problem: You have a gRPC service. Nginx Ingress supports gRPC, but:

  • No gRPC health checks (have to write a hack)
  • Metrics — manual
  • Load balancing — basic round-robin

Envoy Gateway understands gRPC natively:

apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: user-service
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - grpc.example.com
  rules:
    - matches:
        - method:
            service: myapp.v1.UserService
            method: GetUser
      backendRefs:
        - name: user-service
          port: 9090

Bonuses out of the box:

  • Metrics per gRPC method (latency GetUser, error rate CreateUser)
  • gRPC health checks (configured automatically)
  • Retry for failed requests
  • Circuit breaking on overload

Envoy was originally created at Lyft for gRPC services; gRPC is its native element. Nginx added gRPC support later, as an add-on feature.
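The retries mentioned in the list above can be attached declaratively. A sketch, assuming the v1alpha1 BackendTrafficPolicy schema (the policy name and retry values are illustrative, not from the original article):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: user-service-resilience  # hypothetical name
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: GRPCRoute
    name: user-service
  retry:
    numRetries: 3                 # retry a failed call up to 3 times
    retryOn:
      triggers: ["connect-failure", "retriable-status-codes"]
      httpStatusCodes: [503]
    perRetry:
      backOff:
        baseInterval: 100ms       # exponential backoff between attempts
        maxInterval: 1s
```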


When You DON'T Need Envoy Gateway (Honestly)

I'm not an evangelist. Envoy Gateway is a powerful tool, but not for everyone.

DON'T use Envoy Gateway if:

You have a monolith with 1-2 services — Nginx Ingress is simpler and faster to set up

Kubernetes < 1.25 — Gateway API v1 appeared only in 1.25

Team isn't ready to learn new things — migration requires rewriting all Ingress to HTTPRoute

Need full Service Mesh — if you need mTLS between all services, use Istio

Maximum simplicity is critical — "one Ingress for entire cluster" is simpler than figuring out Gateway/HTTPRoute/Policy

Use Envoy Gateway if:

Need advanced features — canary, A/B testing, JWT, rate limiting, circuit breaking

Portability is important — today Envoy Gateway, tomorrow Istio Gateway, manifests are the same

Tired of vendor lock-in — each Ingress Controller's annotations are unique

Planning to grow — Envoy Gateway is a good step before full Service Mesh

Need observability — metrics, traces, logs out of the box


Architecture: How It Works Inside

Envoy Gateway is two layers: Control Plane (brain) and Data Plane (muscles).

┌─────────────────────────────────────────────────────┐
│  Kubernetes Cluster                                 │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌─────────────────────────────────┐                │
│  │ Control Plane (envoy-gateway)   │                │
│  │                                 │                │
│  │  ┌──────────────────────────┐   │                │
│  │  │ Gateway API Controller   │   │                │
│  │  │ (watches Gateway, Route) │   │                │
│  │  └────────┬─────────────────┘   │                │
│  │           │                     │                │
│  │           ▼                     │                │
│  │  ┌──────────────────────────┐   │                │
│  │  │ xDS Translator           │   │                │
│  │  │ (Gateway API → Envoy cfg)│   │                │
│  │  └────────┬─────────────────┘   │                │
│  │           │ xDS (gRPC)          │                │
│  └───────────┼─────────────────────┘                │
│              │                                      │
│              ▼                                      │
│  ┌─────────────────────────────────┐                │
│  │ Data Plane (envoy-proxy pods)   │                │
│  │                                 │                │
│  │  ┌────────┐  ┌────────┐         │                │
│  │  │ Envoy  │  │ Envoy  │  ...    │                │
│  │  │ Pod 1  │  │ Pod 2  │         │                │
│  │  └───┬────┘  └───┬────┘         │                │
│  │      │           │              │                │
│  └──────┼───────────┼──────────────┘                │
│         │           │                               │
│         │  Ingress Traffic (HTTP/gRPC)              │
│         ▼           ▼                               │
│  ┌─────────────────────────────────┐                │
│  │ Backend Services (Pods)         │                │
│  │  app-v1, app-v2, user-service   │                │
│  └─────────────────────────────────┘                │
│                                                     │
└─────────────────────────────────────────────────────┘

Control Plane (one pod in namespace envoy-gateway-system):

  • Watches your Gateway, HTTPRoute, GRPCRoute manifests
  • Translates them into Envoy configuration (xDS protocol)
  • Manages lifecycle of Envoy Proxy pods (creates, updates, deletes)

Data Plane (N Envoy Proxy pods):

  • Accept traffic from the internet
  • Apply routing rules, rate limiting, JWT validation
  • Proxy to backend services
  • Export metrics, traces, logs

Key difference from Nginx Ingress:

Nginx Ingress:

User → Nginx Pod (reads Ingress YAML) → Service → Pods

Envoy Gateway:

User → Envoy Proxy Pods (get config from Control Plane) → Service → Pods
          ↑
    xDS config (gRPC)
          ↑
    Control Plane (reads Gateway API YAML)

Advantages:

  • Data Plane scales independently (more Envoy pods = more throughput)
  • Hot reload of configuration without proxy restart
  • Standard interfaces on both sides (Gateway API above, xDS below) — the control plane can be swapped without rewriting your manifests
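Scaling the Data Plane is itself declarative: an EnvoyProxy resource, referenced from the GatewayClass via parametersRef, controls the proxy Deployment. A hedged sketch (field paths per the v1alpha1 API; the resource name is illustrative):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: proxy-config  # hypothetical name
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        replicas: 3  # three Envoy pods instead of the default one
```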

Practice: From Zero to Canary in 15 Minutes

Enough theory. Let's get hands-on.

Step 1: Installation (3 minutes)

Requirements:

  • Kubernetes 1.25+
  • kubectl configured

Install via kubectl:

# Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
 
# Envoy Gateway (for production, pin a concrete version instead of latest)
kubectl apply --server-side -f https://github.com/envoyproxy/gateway/releases/latest/download/install.yaml
 
# Check
kubectl get pods -n envoy-gateway-system
 
# Should see pod: envoy-gateway-xxxxx (STATUS: Running)

After installation, make sure a GatewayClass named envoy-gateway exists and references the Envoy Gateway controller (the quickstart manifest ships one; a bare install.yaml may not create it). This is the template from which your Gateways will be created.
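If your installation did not ship a GatewayClass, a minimal one looks like this (the name matches the examples below; the controllerName is Envoy Gateway's controller identifier):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  # tells Kubernetes which controller implements Gateways of this class
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
```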

Step 2: Deploy Test Application (2 minutes)

Let's create two versions of an echo service for canary demonstration.

# Create echo-app.yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: demo
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-v1
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
      version: v1
  template:
    metadata:
      labels:
        app: echo
        version: v1
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo:latest
          args: ["-text=Hello from v1"]
          ports:
            - containerPort: 5678
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-v2
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
      version: v2
  template:
    metadata:
      labels:
        app: echo
        version: v2
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo:latest
          args: ["-text=Hello from v2 (canary!)"]
          ports:
            - containerPort: 5678
 
---
apiVersion: v1
kind: Service
metadata:
  name: echo-v1
  namespace: demo
spec:
  selector:
    app: echo
    version: v1
  ports:
    - port: 80
      targetPort: 5678
 
---
apiVersion: v1
kind: Service
metadata:
  name: echo-v2
  namespace: demo
spec:
  selector:
    app: echo
    version: v2
  ports:
    - port: 80
      targetPort: 5678
EOF
 
# Check
kubectl get pods -n demo
# Should see 4 pods: echo-v1-xxx (2), echo-v2-xxx (2)

Step 3: Create Gateway (2 minutes)

A Gateway is the entry point for traffic, roughly the equivalent of a LoadBalancer Service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
  namespace: demo
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Same
EOF
 
# Wait for Gateway to be ready
kubectl wait --for=condition=Programmed gateway/demo-gateway -n demo --timeout=300s
 
# Get IP/Hostname
export GATEWAY_IP=$(kubectl get gateway demo-gateway -n demo -o jsonpath='{.status.addresses[0].value}')
echo "Gateway IP: $GATEWAY_IP"

In cloud K8s (GKE, EKS, AKS), Gateway will get a LoadBalancer with external IP. In local clusters (minikube, kind), use kubectl port-forward for access.

Step 4: Create HTTPRoute with Canary (3 minutes)

Configure traffic split 90% → v1, 10% → v2.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-route
  namespace: demo
spec:
  parentRefs:
    - name: demo-gateway
  hostnames:
    - echo.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: echo-v1
          port: 80
          weight: 90  # 90% traffic to v1
        - name: echo-v2
          port: 80
          weight: 10  # 10% traffic to v2 (canary)
EOF
 
# Check status
kubectl get httproute -n demo
# STATUS: Accepted

Step 5: Test (5 minutes)

Make 20 requests and see distribution:

# If you have external IP:
for i in {1..20}; do
  curl -H "Host: echo.example.com" http://$GATEWAY_IP/
done
 
# If using port-forward (local cluster): Envoy Gateway creates the proxy
# Service in envoy-gateway-system; look it up by its owning-gateway labels
ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system \
  --selector=gateway.envoyproxy.io/owning-gateway-namespace=demo,gateway.envoyproxy.io/owning-gateway-name=demo-gateway \
  -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n envoy-gateway-system svc/$ENVOY_SERVICE 8080:80 &
for i in {1..20}; do
  curl -H "Host: echo.example.com" http://localhost:8080/
done
 
# Result (approximately):
# Hello from v1 (18 out of 20 ≈ 90%)
# Hello from v2 (canary!) (2 out of 20 ≈ 10%)

🎉 Congratulations! You just configured canary deployment in 15 minutes.

No magic annotations. No googling. Everything works.


Bonus: Add Rate Limiting in 2 Minutes

Protect our API from DDoS. Limit: 5 requests per minute per user.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: rate-limit
  namespace: demo
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: echo-route
  rateLimit:
    type: Local
    local:
      rules:
        - clientSelectors:
            - headers:
                - name: x-user-id
                  type: Distinct
          limit:
            requests: 5
            unit: Minute
EOF
 
# Test (make 10 requests with same user_id)
for i in {1..10}; do
  curl -H "Host: echo.example.com" -H "x-user-id: user123" http://$GATEWAY_IP/
done
 
# First 5 requests: HTTP 200 OK
# Next 5: HTTP 429 Too Many Requests

Works. No Redis. No external dependencies.

Envoy stores the counters in memory on each proxy pod; in Local mode the limit applies per replica, not cluster-wide.

Rate limiting works out of the box. For production, you can switch to Global mode with Redis for syncing between multiple Gateway pods.
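Switching to Global mode means pointing the Envoy Gateway configuration at a Redis backend. A sketch of the relevant fragment of the EnvoyGateway config (loaded from the envoy-gateway ConfigMap; the Redis address is a placeholder, not a service from this article):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
  type: Kubernetes
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
rateLimit:
  backend:
    type: Redis
    redis:
      # placeholder address — point this at your own Redis service
      url: redis.redis-system.svc.cluster.local:6379
```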


Pitfalls (So You Don't Step on Them)

Pitfall 1: Gateway API Versions

Problem: Gateway API is actively evolving. Experimental features (SecurityPolicy, BackendTrafficPolicy) may change.

Solution:

  • Use stable v1 resources (Gateway, HTTPRoute, GRPCRoute)
  • For experimental features, check the compatibility matrix
  • Pin Envoy Gateway version in production

Pitfall 2: TLS Certificates

Problem: You configured HTTPS, but certificates need manual renewal every 90 days.

Solution: Use cert-manager for automatic renewal:

# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
 
# ClusterIssuer for Let's Encrypt
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
              - name: demo-gateway
                namespace: demo
EOF
 
# Gateway with TLS
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
  namespace: demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: example-com-tls
EOF

cert-manager will create and renew certificates automatically. Note: its Gateway API support must be enabled explicitly (the ExperimentalGatewayAPISupport feature gate).

Pitfall 3: Observability

Problem: In production, you need to see metrics (latency, error rate, throughput).

Solution: Envoy exports metrics to Prometheus out of the box.

Key metrics:

  • envoy_http_downstream_rq_total — total requests
  • envoy_http_downstream_rq_xx — requests by status codes (2xx, 4xx, 5xx)
  • envoy_http_downstream_rq_time — latency (p50, p95, p99)
  • envoy_cluster_upstream_rq_retry — number of retries

Ready Grafana dashboard: Envoy Gateway Overview
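Those metrics plug straight into alerting. A hedged example of a 5xx-rate alert, assuming the PrometheusRule CRD from the Prometheus Operator (the threshold and labels are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: envoy-gateway-alerts  # hypothetical name
spec:
  groups:
    - name: envoy-gateway
      rules:
        - alert: EnvoyHighErrorRate
          # fire when more than 5% of responses are 5xx over 5 minutes
          expr: |
            sum(rate(envoy_http_downstream_rq_xx{envoy_response_code_class="5"}[5m]))
              / sum(rate(envoy_http_downstream_rq_total[5m])) > 0.05
          for: 5m
          labels:
            severity: critical
```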


Production Checklist (So You Don't Wake Up to Alerts)

Before launching to production, go through the checklist:

  • High Availability: minimum 2 Envoy Proxy pod replicas
  • Resource Limits: CPU/memory limits on Envoy pods (so they don't eat the whole cluster)
  • TLS Termination: cert-manager configured for HTTPS
  • Rate Limiting: protection from DDoS and abuse
  • Circuit Breaking: protection of backend from cascading failures
  • Observability: metrics in Prometheus, alerts configured
  • Graceful Shutdown: preStop hooks for finishing active connections
  • Security Policies: JWT validation, CORS, request validation
  • Backup Configuration: all manifests in Git
  • Testing: integration tests for critical routes
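For the circuit-breaking item, a sketch of a BackendTrafficPolicy (assuming the v1alpha1 circuitBreaker fields; the route name and thresholds are illustrative):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: api-circuit-breaker  # hypothetical name
  namespace: production
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route  # hypothetical route
  circuitBreaker:
    maxConnections: 1024      # cap concurrent connections to the backend
    maxPendingRequests: 256   # queue limit before new requests are rejected
    maxParallelRequests: 512  # cap in-flight requests
```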

Production Gateway Example

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  gatewayClassName: envoy-gateway
  listeners:
    # HTTP → HTTPS redirect
    - name: http
      protocol: HTTP
      port: 80
      hostname: "*.example.com"
 
    # HTTPS with automatic certificates
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-tls
 
---
# Automatic redirect to HTTPS
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect
  namespace: production
spec:
  parentRefs:
    - name: production-gateway
      sectionName: http
  hostnames:
    - "*.example.com"
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301

Migration from Nginx Ingress (Without Pain)

Reader's question: "I have 30 Ingress resources in production. How do I migrate?"

Answer: Gradually. Not head-on.

"Parallel Run" Strategy

Phase 1: Proof of Concept (one week)

  1. Install Envoy Gateway in parallel with Nginx Ingress
  2. Create Gateway with separate LoadBalancer IP
  3. Migrate one non-critical service to HTTPRoute
  4. Test for a week — monitor metrics, error rate, latency

Phase 2: Traffic Split (2 weeks)

  1. Configure DNS weighted routing: 10% → Envoy Gateway, 90% → Nginx Ingress
  2. Monitor: error rate, latency, throughput
  3. Gradually increase Envoy Gateway weight (10% → 25% → 50% → 75% → 100%)
  4. Rollback to Nginx on issues (DNS switch in 5 minutes)

Phase 3: Full Migration (one month)

  1. Migrate all services one by one
  2. Delete Nginx Ingress after 2 weeks without incidents
  3. Document new patterns for the team

Translate Ingress → HTTPRoute

Before (Nginx Ingress):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/limit-rpm: "100"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-service
                port:
                  number: 8080

After (Envoy Gateway):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: production-gateway
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: api-service
          port: 8080
 
---
# Rate limiting via separate policy
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: api-rate-limit
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route
  rateLimit:
    type: Local
    local:
      rules:
        - limit:
            requests: 100
            unit: Minute

What changed:

  • ✅ Annotations → typed fields
  • ✅ Regex in path → explicit PathPrefix
  • ✅ Rate limiting → separate policy (reusable)
  • ✅ Validation at Kubernetes API level

This Isn't About Technology. This Is About Freedom.

When I first set up Envoy Gateway, I felt relief.

Not excitement. Not euphoria. Just relief.

Relief from no longer having to:

  • Google "how to properly write regex in nginx annotation"
  • Re-read documentation 7 times to understand one parameter
  • Pray that after controller update annotations don't break
  • Explain to a junior why "just adding canary" will take 4 hours

Envoy Gateway isn't about "new technology."

It's about freedom from pain.

Pain you didn't even notice because you got used to it. Like getting used to a squeaky door hinge — annoying, but tolerable.

Then you oil the hinge. And realize: God, how did I live with that squeak?

Here's the power of Envoy Gateway:

Not that it's "faster" (though it is). Not that it's "more powerful" (though it is).

But that it's understandable.

You read HTTPRoute — and see the logic. You write SecurityPolicy — and it works first try. You add RateLimit — and sleep soundly knowing DDoS won't kill your API.

This is freedom from magic.

Freedom from "why doesn't this annotation work?" Freedom from "where's the documentation for this controller version?" Freedom from "does anyone even know how to configure this?"

Gateway API is a standard.

And a standard means:

  • Documentation is the same for all controllers
  • Validation at Kubernetes API level
  • Migration between implementations without rewriting manifests
  • Knowledge transfers between projects and companies

You're not learning "Nginx Ingress annotations." You're learning Kubernetes Gateway API.

And this knowledge works with Envoy Gateway, Istio Gateway, Cilium Gateway, Kong Gateway.

This is an investment in the future, not vendor lock-in.


Start Today

I know what you're thinking right now.

"Sounds cool, but I don't have time to figure out a new tool."

"Nginx Ingress is in production. It works. Why change?"

"I'll try it someday. Adding to TODO."

I understand you perfectly. I thought the same.

But here's what I realized after migration:

Every day you postpone the transition, you:

  • Waste extra hours debugging annotations
  • Limit yourself to old API capabilities
  • Increase tech debt that will be harder to migrate later

You don't need to migrate all of production right now.

Start with one step:

Your Assignment for Today (30 minutes)

  1. Spin up a local cluster (minikube or kind)
  2. Install Envoy Gateway (3 kubectl commands)
  3. Repeat the example from the article (canary deployment)
  4. Feel the difference

Don't read further — open your terminal right now.

# 1. Local cluster (if you don't have one)
minikube start
# or
kind create cluster
 
# 2. Install Envoy Gateway
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
kubectl apply --server-side -f https://github.com/envoyproxy/gateway/releases/latest/download/install.yaml
 
# 3. Copy examples from "Practice" section above
# 4. Make 20 curl requests
# 5. See 90/10 traffic distribution
 
# Whole process: 15 minutes

Why it's important to do this today:

In 30 minutes you'll feel the difference. Not read about it. Not take my word for it. But experience firsthand what it's like when everything works on the first try.

And then you'll understand that Envoy Gateway isn't hype. It's evolution.


P.S. If you read to the end — you're already in the top 10% of engineers who don't ignore new technologies.

But knowledge without action is just entertainment.

Open the terminal. Spend 30 minutes. Feel the difference.

And tomorrow at standup say: "I tried Envoy Gateway yesterday. Guys, this is a game changer."

P.P.S. The most honest test of your mastery:

If you can't explain to your PM why Envoy Gateway is better than Nginx Ingress in terms of business value (not technical details) — you don't understand the tool deeply enough yet.

Go back to the "Three Cases" section. Retell them in your own words.

Because ultimately technologies don't matter.

What matters is what problems they solve and how much time they save your team.

Envoy Gateway saves me 4-6 hours per week on configuration and debugging.

Count your hours. And decide if 30 minutes for an experiment is worth it.

I believe it is.

