
Use Longhorn with Talos 1.10 and userVolumes

2025-06-17

When building a cluster, especially in a homelab, local storage is needed for application data. Databases in particular require fast reads and writes, and offloading that workload to a NAS is usually slower. The solution I use is to provision on-node storage with Longhorn, which acts as a CSI driver and offers on-node storage, replication, backups and more.

As I am currently building a Talos cluster, I need to integrate the Longhorn CSI into the setup. This is not as straightforward as with K3s or vanilla Kubernetes, because Talos has tighter security constraints and needs additional system extensions to handle iSCSI, the block storage protocol Longhorn uses to attach volumes. On top of that, I am using Talhelper to allow a GitOps-style usage of talosctl. Its main advantage is the encryption of secrets in the Talos config files with SOPS, something I already use for Tofu and Flux CD.

In this article I use Talos 1.10, Longhorn 1.9 and Proxmox 8.4 as the hypervisor to provision the VMs.

Setting up a VM with Proxmox

I’m running my Kubernetes cluster on a Proxmox host. To streamline provisioning, I use a Tofu script to create the virtual machines. There are several Proxmox providers, and the most complete one is bpg/terraform-provider-proxmox.

The Talos image was built with the Talos Image Factory and includes the following system extensions to satisfy the Longhorn prerequisites (see the schematic sketch after the list):

  • siderolabs/iscsi-tools
  • siderolabs/util-linux-tools
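
For reference, this maps to the following Image Factory schematic. Talhelper builds the factory image URL from the same schematic block shown further below, so this snippet is only needed if you request the image from factory.talos.dev directly (a minimal sketch listing only the two Longhorn-related extensions):

customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/iscsi-tools      # iSCSI initiator tools required by Longhorn
      - siderolabs/util-linux-tools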

The Tofu file creates a Talos worker VM with two disks: disk one is the install disk, disk two exclusively stores Longhorn data.

resource "proxmox_virtual_environment_vm" "talos_worker_01" {
  name        = "talos-worker-01"
  description = "Managed by Tofu"
  tags        = ["tofu"]
  node_name   = "proxmox1"
  on_boot     = true

  # additional config omitted for simplicity

  disk {
    datastore_id = "local-lvm"
    file_id      = proxmox_virtual_environment_download_file.talos_nocloud_image.id
    file_format  = "raw"
    interface    = "virtio0"
    size         = 20
  }

  disk {
    datastore_id = "local-lvm"
    interface    = "virtio1"
    size         = 1000
    iothread     = true
  }

}

Enabling iothread should give us an advantage when running database workloads in parallel on the cluster.

Configuring Talos with Talhelper

Talhelper is used to automate the encryption of secrets for the installation:

# yaml-language-server: $schema=https://raw.githubusercontent.com/budimanjojo/talhelper/master/pkg/config/schemas/talconfig.json

clusterName: talos
endpoint: https://192.168.100.101:6443
domain: cluster.local

 # additional config omitted for simplicity

nodes:
  - hostname: controlplane-01
    ipAddress: 192.168.100.101
    controlPlane: true
    installDisk: /dev/vda

  - hostname: worker-01
    ipAddress: 192.168.100.111
    controlPlane: false
    installDisk: /dev/vda

# Note: Each worker is configured the same in my case
worker:
  userVolumes:
    - name: longhorn
      provisioning:
        diskSelector: 
          match: disk.dev_path == '/dev/vdb' && !system_disk
        minSize: "900GiB" # Talos will autogrow the size if more is available
        grow: true
        
  patches:
    - "@./patch-longhorn-extramount.yaml"
  schematic:
    customization:
      systemExtensions:
        officialExtensions:
          - siderolabs/iscsi-tools # longhorn
          - siderolabs/util-linux-tools # longhorn
          # add any other extensions you need as well
  • Talos is installed on /dev/vda
  • The UserVolume ‘longhorn’ uses disk /dev/vdb
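
Under the hood, the userVolumes entry should end up as a Talos 1.10 UserVolumeConfig document in the generated machine config. A minimal sketch of what that document looks like (assuming Talhelper renders the plain Talos format):

---
# sketch: UserVolumeConfig document corresponding to the userVolumes entry above
apiVersion: v1alpha1
kind: UserVolumeConfig
name: longhorn
provisioning:
  diskSelector:
    match: disk.dev_path == '/dev/vdb' && !system_disk
  minSize: 900GiB
  grow: true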

UserVolumes are mounted by Talos during the setup stage of the node and are automatically placed under /var/mnt/{name}. Hence our Longhorn mountpoint will be /var/mnt/longhorn and will point to /dev/vdb. Now we have to allow the kubelet to access the mountpoint from inside containers:

machine:
  kubelet:
    extraMounts:
      - destination: /var/mnt/longhorn
        type: bind
        source: /var/mnt/longhorn
        options:
          - bind
          - rshared
          - rw

This patch bind-mounts the volume provisioned by Talos so that the kubelet can access it, which in turn lets Longhorn reach the configured mount. The only thing that’s left is configuring Longhorn.

Configuring Longhorn

In my cluster, Longhorn is installed automatically by Flux CD through its Helm controller, so the same installation can also be performed manually with the Helm CLI. These are the relevant chart values:

defaultSettings:
  defaultReplicaCount: 1
  storageReservedPercentageForDefaultDisk: 1 # longhorn has its own disk - it can use all available space
  defaultDataPath: "/var/mnt/longhorn" # the mount point previously defined via TalHelper

 # additional config omitted for simplicity
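
For completeness, here is a minimal sketch of a Flux HelmRelease carrying these values. The chart version and the HelmRepository named longhorn in flux-system are assumptions, adjust them to your setup:

---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: longhorn
  namespace: longhorn-system
spec:
  interval: 1h
  chart:
    spec:
      chart: longhorn
      version: "1.9.x"            # assumption: track the 1.9 series
      sourceRef:
        kind: HelmRepository
        name: longhorn            # assumption: HelmRepository pointing to charts.longhorn.io
        namespace: flux-system
  values:
    defaultSettings:
      defaultReplicaCount: 1
      storageReservedPercentageForDefaultDisk: 1
      defaultDataPath: "/var/mnt/longhorn"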

If you are using Longhorn on Talos, you also have to adjust the Pod Security policy of the longhorn-system namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: longhorn-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/audit-version: latest
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/warn-version: latest

This allows privileged execution of the Longhorn manager (to use host bind mounts, for example). Without these labels, the Longhorn installation will fail.

Bonus: Creating an encrypted StorageClass

Now that we have storage, I want to encrypt the data at rest to increase the security of the system. The required configuration is done with a StorageClass resource:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-encrypted
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880" # 48 hours in minutes
  fromBackup: ""
  encrypted: "true"
  # per volume secret which utilizes the `pvc.name` and `pvc.namespace` template parameters
  csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}-longhorn
  csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.name}-longhorn
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
  csi.storage.k8s.io/node-stage-secret-name: ${pvc.name}-longhorn
  csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace}

To use the StorageClass, reference it in your PVC and create a matching secret:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-encrypted
  resources:
    requests:
      storage: 1Gi

In the same namespace, we need a secret that matches the previously defined StorageClass. The secret name follows the scheme ‘{pvc.name}-longhorn’, so for our PVC named ‘config’ it looks like this:

---
apiVersion: v1
kind: Secret
metadata:
    name: config-longhorn
stringData:
    CRYPTO_KEY_VALUE: # some random key
    CRYPTO_KEY_PROVIDER: secret
    CRYPTO_KEY_CIPHER: aes-xts-plain64
    CRYPTO_KEY_HASH: sha256
    CRYPTO_KEY_SIZE: "256"
    CRYPTO_PBKDF: argon2i
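
To verify that everything works, a throwaway pod can mount the claim. The pod name and image below are just placeholders for illustration:

---
apiVersion: v1
kind: Pod
metadata:
  name: encrypted-volume-test    # placeholder name
spec:
  containers:
    - name: app
      image: busybox:1.36        # placeholder image
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: data
          mountPath: /data       # backed by the encrypted Longhorn volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: config        # the PVC defined above

Longhorn reads the key material from the config-longhorn secret and sets up LUKS encryption for the volume before handing it to the pod.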

stat /posts/longhorn_uservolumes_talos/

2025-06-17: Initial publication of the article
2025-07-06: Add remark on longhorn namespace requirements
