Automating Let's Encrypt certificates for ingress-nginx with cert-manager

This post is a brief walk-through of the steps necessary to automate Let’s Encrypt certificates for nginx Ingress resources, using cert-manager and its fairly new ingress-shim controller.

cert-manager in a nutshell

cert-manager is “a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources” – still in development, and the designated successor to kube-lego.

It implements three Custom Resource Definitions: Certificate, Issuer, and ClusterIssuer.

One supported issuer backend is ACME, and with that Let’s Encrypt.
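
With ingress-shim, Certificate resources are created automatically from annotated Ingress resources (as shown later in this post). For illustration only, a handwritten Certificate tying an issuer to a target secret could look roughly like this – a sketch against the v1alpha1 schema used by this cert-manager version; names, domain, and the dns01 provider are placeholders:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com            # hypothetical name
spec:
  secretName: example-com-tls  # secret the signed certificate and key end up in
  issuerRef:
    name: letsencrypt-prod     # references an Issuer or ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
  - example.com
  acme:
    config:
    - dns01:
        provider: route53      # must match a provider defined on the issuer
      domains:
      - example.com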

Installing ingress-nginx

I have applied the following manifests from master f6b8506b1733f42779ca7917a901057c03476c43 / nginx-0.11.0-21-gf6b8506b to install version 0.11.0:

nginx-ingress-controller is deployed as a DaemonSet. (If you want to run a Deployment plus a Service instead, have a look at with-rbac.yaml and the provider-specific manifests.)


apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      hostNetwork: true
      nodeSelector:
        ingress.k8s.example.com: ""
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

Given the manifest above, all nodes with the label ingress.k8s.example.com run an nginx-ingress-controller pod in the host network namespace. At least one node needs to be labelled accordingly:

$ kubectl label node node-1 ingress.k8s.example.com=""

The controller and default backend should now be running and reachable:

$ kubectl -n ingress-nginx get pods
NAME                                    READY     STATUS    RESTARTS   AGE
default-http-backend-55c6c69b88-pf2d9   1/1       Running   0          1m
nginx-ingress-controller-ztvbl          0/1       Running   0          24s

$ curl --insecure https://node-1.k8s.example.com
default backend - 404

(Note: --insecure because nginx presents a self-signed certificate by default.)

Installing cert-manager

Since I don’t use helm (the primary install method for cert-manager), I have applied the following manifests from master 3a0d72c7a24e9f8560a4bca45c8c944bb1c63e8d / v0.3.0-alpha.0-17-g3a0d72c7 to install version v0.2.3 manually:

To configure ingress-shim defaults, I used a slightly modified version of deployment.yaml:


apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cert-manager
  namespace: "cert-manager"
  labels:
    app: cert-manager
    chart: cert-manager-0.2.3
    release: cert-manager
    heritage: Tiller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cert-manager
        release: cert-manager
    spec:
      serviceAccountName: cert-manager
      containers:
        - name: cert-manager
          image: "quay.io/jetstack/cert-manager-controller:v0.2.3"
          imagePullPolicy: IfNotPresent
          args:
          - --cluster-resource-namespace=$(POD_NAMESPACE)
          env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
        - name: ingress-shim
          image: "quay.io/jetstack/cert-manager-ingress-shim:v0.2.3"
          imagePullPolicy: IfNotPresent
          args:
          - --default-issuer-name=letsencrypt-prod
          - --default-issuer-kind=ClusterIssuer
          - --default-acme-issuer-challenge-type=dns01
          - --default-acme-issuer-dns01-provider-name=route53
          resources:
            requests:
              cpu: 10m
              memory: 32Mi

Check the logs to verify cert-manager and ingress-shim have started successfully:

$ kubectl -n cert-manager logs -l app=cert-manager -c cert-manager
$ kubectl -n cert-manager logs -l app=cert-manager -c ingress-shim

Now a Let’s Encrypt ClusterIssuer can be created, for example with a route53 provider for dns01 validation (matching the default settings from above):

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v01.api.letsencrypt.org/directory
    email: jane+letsencrypt-cert-manager@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    dns01:
      providers:
      - name: route53
        route53:
          accessKeyID: AXXX
          secretAccessKeySecretRef:
            name: route53-config
            key: secret-access-key
          region: eu-west-1
          hostedZoneID: ZXXX

In this setup, the issuer requires a route53-config secret with your AWS_SECRET_ACCESS_KEY:

apiVersion: v1
kind: Secret
metadata:
  name: route53-config
  namespace: cert-manager
type: Opaque
data:
  secret-access-key: aGVsbG8=
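
The data value must be base64-encoded (aGVsbG8= above is just a placeholder). Instead of writing the manifest by hand, the same secret can also be created directly with kubectl, for example (the key value is a placeholder):

$ kubectl -n cert-manager create secret generic route53-config \
    --from-literal=secret-access-key=YOUR_AWS_SECRET_ACCESS_KEY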

Ingress resources with automated TLS

By adding the annotations

certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
kubernetes.io/tls-acme: "true"

an Ingress resource now triggers the cluster issuer named letsencrypt-prod to obtain a certificate according to its tls spec. For the echoserver example below, the certificate and key will be put into a secret named echoserver.example.com-tls.


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
      - echoserver.example.com
      secretName: echoserver.example.com-tls
  rules:
  - host: echoserver.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoserver
          servicePort: 80
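
Behind the scenes, ingress-shim creates a Certificate resource for each tls entry, and cert-manager then performs the ACME dns01 challenge. Progress can be followed on that resource – assuming here that it is named after the secret; the actual name may differ:

$ kubectl get certificates
$ kubectl describe certificate echoserver.example.com-tls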

If successful, the certificate should be available soon after:

$ kubectl get secrets echoserver.example.com-tls \
    -o jsonpath="{.data['tls\.crt']}" | \
  base64 -d | openssl x509 -text -in - | less

...and be used by nginx:

$ openssl s_client \
    -connect echoserver.example.com:443 \
    -servername echoserver.example.com \
    </dev/null >/dev/null

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = echoserver.example.com
verify return:1
DONE

As the certificate is signed by a generally trusted CA, --insecure is no longer necessary:

$ curl -sS https://echoserver.example.com | grep X-Forwarded-For
X-Forwarded-For=33.44.55.66

Appendix: DNS configuration

While domains can be pointed directly at nodes running ingress-nginx, an ingress domain endpoint gives more flexibility. If ingress nodes are added or lost^Wremoved, only a single record has to be updated. This could be automated.

Example:

ingress.cluster.example.com points to all nodes running the ingress-nginx DaemonSet:

$ dig +short ingress.cluster.example.com
1.2.3.4

echoserver.example.com is a CNAME for ingress.cluster.example.com:

$ dig +short echoserver.example.com
ingress.cluster.example.com
1.2.3.4