Solving Kong latency problems in Kubernetes

Kong is a popular API gateway that can be used as a reverse proxy for clients to access back-end services. It is distributed as a Docker image and can therefore be deployed to Kubernetes.

The following is an example manifest to do so:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
        - name: kong
          image: kong:3.4.0
          ports:
            - containerPort: 8000
          env:
            - name: KONG_DATABASE
              value: "off"
            - name: KONG_DECLARATIVE_CONFIG
              value: /kong/declarative/kong.yml
          volumeMounts:
            - mountPath: /kong/declarative/
              name: config
      volumes:
        - name: config
          configMap:
            name: kong
---
apiVersion: v1
kind: Service
metadata:
  name: kong
spec:
  ports:
    - port: 8000
      nodePort: 31800
  selector:
    app: kong
  type: NodePort
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong
data:
  kong.yml: |

    _format_version: "3.0"
    _transform: true
    services:
    - name: mqtt-logger
      url: http://mqtt-logger
      routes:
      - name: mqtt-logger
        paths:
        - /mqttlogger

In this example, Kong is configured to proxy requests on /mqttlogger to mqtt-logger, another Service deployed in the same Kubernetes cluster. From Kong's point of view, this Service is resolved by CoreDNS at http://mqtt-logger.
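As a quick sanity check, the name can be resolved from inside the cluster with a throwaway pod. This is only a sketch: it assumes mqtt-logger lives in the same namespace as Kong (as the short name implies), and the busybox image is just a convenient example.

# resolve the Service name through CoreDNS from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup mqtt-logger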

However, in this configuration, one might experience some unusually high latency when requesting one of the proxied back-end services.
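The trace below comes from a plain verbose curl against the NodePort; the exact invocation is a reconstruction from that trace (localhost and port 31800 are taken from the Service definition above):

curl -v http://localhost:31800/mqttlogger

This produces: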

*   Trying 127.0.0.1:31800...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 31800 (#0)
> GET /mqttlogger HTTP/1.1
> Host: localhost:31800
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Content-Length: 327
< Connection: keep-alive
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< ETag: W/"147-5qVhQtCrfyTSEc7nwcETprlwdvI"
< Date: Mon, 04 Dec 2023 10:56:05 GMT
< X-Kong-Upstream-Latency: 2
< X-Kong-Proxy-Latency: 8199
< Via: kong/3.4.0
< 
* Connection #0 to host localhost left intact

Here, the X-Kong-Upstream-Latency header shows that the upstream service answered in 2 ms, while the X-Kong-Proxy-Latency header shows that Kong itself spent more than 8 seconds (8199 ms) on the request before it was fulfilled.
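When troubleshooting, those two headers can be pulled out without the rest of the verbose output; the command below reuses the same hypothetical request against the NodePort:

# dump only the response headers and keep the Kong latency ones
curl -s -o /dev/null -D - http://localhost:31800/mqttlogger | grep -i '^x-kong'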

The problem lies in the way Kong handles DNS resolution in Kubernetes. As explained in the official documentation, Kong does not prioritise A or CNAME records by default when resolving upstream hosts, which can make name resolution of cluster Services noticeably slow. The resolution order can be changed through environment variables, by adding the following variable under the env section of the Deployment manifest:

- name: KONG_DNS_ORDER
  value: A,CNAME,LAST,SRV
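For clarity, the env section of the Deployment then reads as follows (values reproduced from the manifest above):

          env:
            - name: KONG_DATABASE
              value: "off"
            - name: KONG_DECLARATIVE_CONFIG
              value: /kong/declarative/kong.yml
            - name: KONG_DNS_ORDER
              value: A,CNAME,LAST,SRV

For a quick test without re-applying the whole manifest, the same variable can also be set with kubectl set env deployment/kong KONG_DNS_ORDER=A,CNAME,LAST,SRV, which rolls out the Deployment with the updated environment.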

With the DNS order tweaked this way, Kong should now proxy requests with little to no added latency.