Deploying Mosquitto MQTT to k3s with Traefik TLS ingress

[Photo: in memory of our beagle Raphael, who passed earlier this year]

This article describes how I deployed Mosquitto MQTT with TLS into my k3s cluster. I used an article by Maxime Moreillon as my starting point, but needed to adjust the ingress to work with the Traefik ingress controller installed by k3s.

Context and dependencies

My cluster is a default k3s install with cert-manager configured to obtain certificates from Let’s Encrypt for my personal domain. There are numerous articles on how to configure this already out there, for example https://k3s.rocks/https-cert-manager-letsencrypt/ or https://opensource.com/article/20/3/ssl-letsencrypt-k3s.

The instructions below assume you already have a k3s cluster with cert-manager installed, your own domain name and a local DNS server. It also assumes some knowledge of DNS and DNS entries.

Adding MQTT and ACME endpoints to Traefik

I had some additional configuration to make this work nicely on my home network.

Since MQTT is not an HTTP service, we need to define Traefik ingress endpoints for TCP connections. I also defined an explicit ACME endpoint for cert-manager to provide ACME http01 validations: this can alternatively be done through the default Traefik web endpoint, but I need to expose it externally and don’t want to expose my internal, unsecured HTTP endpoints. I use port forwarding on my home router to send external requests on port 80 to the ACME endpoint.

I used the HelmChartConfig resource to customize the Traefik endpoints using the following resource definition:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      acme:
        port: 8001
        expose: true
        exposedPort: 8001
        protocol: TCP
      mqtt:
        protocol: TCP
        port: 1883
        expose: true
      mqtts:
        protocol: TCP
        port: 8883
        expose: true
    providers:
      kubernetesCRD:
        allowEmptyServices: true
        allowExternalNameServices: true
      kubernetesIngress:
        allowEmptyServices: true
        allowExternalNameServices: true

You can save and apply this using kubectl apply -f <your filename>.
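To confirm the change took effect, you can inspect the Traefik service in kube-system (these are the k3s default service and namespace names; k3s's Helm controller should redeploy Traefik automatically after the HelmChartConfig is applied):

```shell
# List the named ports now exposed by the Traefik service.
# The output should include acme, mqtt and mqtts alongside the defaults.
kubectl -n kube-system get svc traefik -o jsonpath='{.spec.ports[*].name}'
```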

Cert-manager ACME validations

The corresponding cert-manager ClusterIssuer resource that defines the http01 solver is as follows (replace drpump@example.com with a working email address on your domain):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: drpump@example.com
    privateKeySecretRef:
      name: prod-issuer-account-key
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: traefik
            ingressTemplate:
              metadata:
                annotations:
                  traefik.ingress.kubernetes.io/router.entrypoints: acme
        selector: {}

I would highly recommend configuring a staging cluster issuer first and testing this (use https://acme-staging-v02.api.letsencrypt.org/directory as the server). Use a different name for the staging ClusterIssuer to minimise potential conflicts when you add a production issuer.

Again, you can save and apply this using kubectl apply -f <your filename>.
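A quick way to check that the issuer registered successfully with Let's Encrypt is to look at its status conditions (issuer name as defined above):

```shell
# The READY column should show True once ACME registration succeeds.
kubectl get clusterissuer letsencrypt-prod

# For more detail, inspect the status conditions directly.
kubectl describe clusterissuer letsencrypt-prod | grep -A 4 'Conditions:'
```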

DNS and Port forwarding

For ACME http01 validation to work, you need to ensure that external HTTP requests for *.example.com on port 80 reach the ACME endpoint in your k3s cluster. I actually created a subdomain specifically for my k3s cluster (e.g. k3s.example.com) so that other traffic can be routed differently. You’ll also need to ensure that your local DNS is configured to route <host>.k3s.example.com to your k3s cluster/server for internal MQTT clients. Configuration is as follows, but adapt for your DNS provider and router.

  1. Create a CNAME record with your DNS provider for *.k3s.example.com: the host is *.k3s and the destination is a hostname that resolves to your home IP address (e.g. myhome.ddns.net if you use noip.com with dynamic DNS).
  2. Test this by performing an external DNS lookup on a random host, e.g. nslookup whoami.k3s.example.com. It should resolve to your public IP address.
  3. Configure port forwarding in your home router to send external requests on port 80 to the ACME http01 solver on port 8001 of your k3s cluster server(s). This assumes that you’re not using port 80 for other public web pages. If you are, you might need to configure those workloads to route through the acme endpoint via Traefik or add host-based routing in front of your k3s server. Note that the http01 protocol only works on port 80.
  4. Add an A record for *.k3s.example.com to your local DNS and ensure it resolves to the private, internal address of your k3s server(s). Alternatively, add separate A records for each hostname you want to configure.
  5. Test this by performing an internal DNS lookup on a random host, e.g. nslookup whoami.k3s.example.com. It should resolve to the private, internal IP address of your k3s server(s).
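With DNS and port forwarding in place, you can sanity-check the external path before requesting any certificates. Run this from outside your home network (e.g. a phone hotspot); whoami.k3s.example.com is just an arbitrary hostname under the wildcard, and the challenge path is a made-up probe:

```shell
# A request on port 80 should reach Traefik's acme entrypoint.
# An HTTP response (e.g. 404 from Traefik) confirms the request arrived;
# a connection timeout suggests the DNS record or port forward is wrong.
curl -i http://whoami.k3s.example.com/.well-known/acme-challenge/probe
```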

Mosquitto deployment

The following subsections describe each of the resources required to deploy Mosquitto with TLS. As noted in the introduction, this is based on the article by Maxime Moreillon.

To apply the configuration below, I would suggest concatenating the resource definitions in a single file and applying with kubectl apply -f <your filename>. Remember to replace the k3s.example.com domain name with your own.

Configuration

I’ll use a ConfigMap for the mosquitto.conf file. This will be mounted into the Mosquitto pod. A few things to note:

  • For now I’ve set allow_anonymous true so that we can test easily without passwords. I will add users and passwords later.
  • The key files will be mounted on the pod via secrets created when the ingress is deployed.
  • A persistent volume claim is configured below to provide the data directory.
  • If you modify the configuration after deploying the mosquitto server, you will need to restart the server to reload the configuration. This can be done using kubectl scale deployment mosquitto --replicas=0 then kubectl scale deployment mosquitto --replicas=1. The mosquitto server will also reload configuration on a HUP signal but I have not yet worked out how to do this conveniently in k3s.
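As a convenience, the scale-down/scale-up pair above can also be expressed as a single rollout restart; recreating the pod has the same effect of re-reading the ConfigMap:

```shell
# Recreate the mosquitto pod so it picks up an edited ConfigMap.
kubectl rollout restart deployment mosquitto

# Wait for the replacement pod to become ready.
kubectl rollout status deployment mosquitto
```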

The resource definition:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config
data:
  mosquitto.conf: |

    persistence true
    persistence_location /mosquitto/data/

    log_dest stdout

    # MQTT with TLS (MQTTS)
    listener 8883
    protocol mqtt
    allow_anonymous true

    cafile /etc/ssl/certs/ca-certificates.crt
    keyfile /mosquitto/certs/tls.key
    certfile /mosquitto/certs/tls.crt

Storage

The following configures storage for mosquitto using a persistent volume claim and the default storage class. Note that I have Rancher longhorn as my default storage class, in part because I can automatically backup to my NAS drive. Perhaps not important for MQTT but good for other services.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mosquitto-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Deployment

To deploy mosquitto we use a Deployment resource mounting the storage volume, secrets and configuration appropriately, and an associated Service resource to expose the MQTT service. Note that only the MQTT TLS service is being exposed.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto
        ports:
        - containerPort: 8883
        - containerPort: 9001
        volumeMounts:
        - mountPath: /mosquitto/config/mosquitto.conf
          name: config
          subPath: mosquitto.conf
        - mountPath: /mosquitto/certs/
          name: certs
        - mountPath: /mosquitto/data/
          name: data
      volumes:
      - name: config
        configMap:
          name: mosquitto-config
      - name: certs
        secret:
          secretName: mosquitto-certs
      - name: data
        persistentVolumeClaim:
          claimName: mosquitto-data

---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto-mqtts
spec:
  type: ClusterIP
  selector:
    app: mosquitto
  ports:
  - port: 8883

Ingress

The ingress was the tricky part because the Traefik documentation doesn’t provide particularly good examples. I am thankful for community discussions that provided enough information and examples for me to piece together this resource definition for a Traefik IngressRouteTCP resource that receives connections on mqtt.k3s.example.com. After applying this resource, certificates will be created and stored automatically in the mosquitto-certs secret, making them available for use by your mosquitto deployment.

Note that the HostSNI matching below uses the TLS SNI (server name) extension to route based on the hostname. Replace example.com with your domain name here. If you need an ingress route for the non-TLS endpoint, you’ll need an additional route that matches all hostnames (i.e. HostSNI(`*`)) because there is no way to identify the target host in a plain TCP connection. I believe this forces Traefik to intercept and examine all traffic, so it is perhaps best avoided.

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressroutetcp.crd
  namespace: default

spec:
  entryPoints:
    - mqtts
  routes:
    - match: HostSNI(`mqtt.k3s.example.com`)
      services:
        - name: mosquitto-mqtts
          port: 8883
  tls:
    secretName: mosquitto-certs
    passthrough: true

Wrapping up

Use your favourite MQTT client to verify by connecting to your new server at mqtt.k3s.example.com with TLS enabled. It should connect successfully without a username or password.
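For example, with the Mosquitto command-line clients installed, a quick subscribe/publish round trip might look like this (topic name is arbitrary; passing --capath enables TLS using the system CA store, assumed here to live under /etc/ssl/certs):

```shell
# In one terminal: subscribe over TLS on the standard MQTTS port.
mosquitto_sub -h mqtt.k3s.example.com -p 8883 --capath /etc/ssl/certs -t test/hello

# In another terminal: publish a message to the same topic.
# The subscriber should print "hi" if everything is wired up correctly.
mosquitto_pub -h mqtt.k3s.example.com -p 8883 --capath /etc/ssl/certs -t test/hello -m "hi"
```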

Written on December 24, 2023