This is a Container Storage Interface driver for Hetzner Cloud enabling you to use ReadWriteOnce Volumes within Kubernetes. Please note that this driver requires Kubernetes 1.19 or newer.
- Create a read+write API token in the Hetzner Cloud Console.
- Create a secret containing the token:

```yaml
# secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: YOURTOKEN
```

and apply it:

```sh
kubectl apply -f <secret.yml>
```
- Deploy the CSI driver and wait until everything is up and running:

Have a look at our Version Matrix to pick the correct deployment file.

```sh
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.2.0/deploy/kubernetes/hcloud-csi.yml
```
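To check that the driver came up, you can list its pods; a minimal sketch, assuming the default workload names from the manifest (`hcloud-csi-controller` and `hcloud-csi-node`):

```sh
# List the CSI pods in kube-system and check that they are Running
kubectl get pods -n kube-system | grep hcloud-csi
```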
- To verify everything is working, create a persistent volume claim and a pod which uses that volume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes
---
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc
```
Once the pod is ready, exec a shell and check that your volume is mounted at `/data`:

```sh
kubectl exec -it my-csi-app -- /bin/sh
```
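Inside the pod shell, a quick way to confirm the volume works is to check the mount table and write a test file; a minimal sketch (the exact device name will differ):

```sh
# Inside the pod: the volume should show up as a mount on /data
df -h /data
# Write and read back a test file to confirm the volume is writable
echo "hello" > /data/test && cat /data/test
```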
- To add encryption with LUKS you have to create a dedicated secret containing an encryption passphrase and duplicate the default `hcloud-volumes` storage class with added parameters referencing this secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: encryption-secret
  namespace: kube-system
stringData:
  encryption-passphrase: foobar
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hcloud-volumes-encrypted
provisioner: csi.hetzner.cloud
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/node-publish-secret-name: encryption-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
```
Your nodes might need to have `cryptsetup` installed to mount the volumes with LUKS.
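Volumes are then encrypted by requesting them through the new storage class; a minimal sketch of such a claim (the claim name `csi-pvc-encrypted` is just an example):

```yaml
# Example claim using the encrypted storage class defined above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-encrypted
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes-encrypted
```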
To upgrade the csi-driver version, you just need to apply the new manifests to your cluster.
In case of a new major version, there might be manual steps that you need to follow to upgrade the csi-driver. See the following section for a list of major updates and their required steps.
There are three breaking changes between v1.6 and v2.0 that require user intervention. Please take care to follow these steps, as otherwise the update might fail.
Before the rollout:
- The secret containing the API token was renamed from `hcloud-csi` to `hcloud`. This change was made so both the cloud-controller-manager and the csi-driver can use the same secret. Check that you have a secret `hcloud` in the namespace `kube-system`, and that the secret contains the API token, as described in the section Getting Started:

```sh
$ kubectl get secret -n kube-system hcloud
```
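If only the old secret exists, you can copy the token over before the rollout; a minimal sketch, assuming the token is stored under the `token` key of the old `hcloud-csi` secret:

```sh
# Read the token from the old secret and create the new secret under the expected name
TOKEN=$(kubectl get secret -n kube-system hcloud-csi -o jsonpath='{.data.token}' | base64 -d)
kubectl create secret generic hcloud -n kube-system --from-literal="token=$TOKEN"
```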
- We added a new field to our `CSIDriver` resource to support CSI volume fsGroup policy management. This change requires a replacement of the `CSIDriver` object. You need to manually delete the old object:

```sh
$ kubectl delete csidriver csi.hetzner.cloud
```

The new `CSIDriver` will be installed when you apply the new manifests.
- Stop the old pods to make sure that everything is replaced in order and no incompatible pods are running side-by-side:

```sh
$ kubectl delete statefulset -n kube-system hcloud-csi-controller
$ kubectl delete daemonset -n kube-system hcloud-csi-node
```
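Before continuing, you can verify that the old pods have terminated; a minimal sketch, assuming the default `hcloud-csi-*` pod names:

```sh
# Should return nothing once all old CSI pods are gone
kubectl get pods -n kube-system | grep hcloud-csi
```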
- We changed the way the device path of mounted volumes is communicated to the node service. This requires changes to the `VolumeAttachment` objects, where we need to add information to the `status.attachmentMetadata` field. Execute the linked script to automatically add the required information. This requires `kubectl` version `v1.24+`, even if your cluster is running v1.23.

```sh
$ kubectl version
$ curl https://raw.githubusercontent.com/hetznercloud/csi-driver/main/docs/v2-fix-volumeattachments/fix-volumeattachments.sh -o ./fix-volumeattachments.sh
$ chmod +x ./fix-volumeattachments.sh
$ ./fix-volumeattachments.sh
```
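To spot-check the result, you can inspect the metadata the script added on one of the attachments; a minimal sketch (`<name>` is a placeholder for an attachment from the list):

```sh
# List the volume attachments in the cluster
kubectl get volumeattachments
# Inspect the attachment metadata of a single attachment (replace <name>)
kubectl get volumeattachment <name> -o jsonpath='{.status.attachmentMetadata}'
```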
Roll out the new manifest:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.2.0/deploy/kubernetes/hcloud-csi.yml
```
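You can then wait for the new workloads to become ready; a minimal sketch, assuming the v2 manifest ships the node plugin as a DaemonSet named `hcloud-csi-node` (the controller workload name and kind may differ between versions):

```sh
# Wait for the node plugin DaemonSet to finish rolling out
kubectl -n kube-system rollout status daemonset/hcloud-csi-node
# Check that all CSI pods are Running
kubectl get pods -n kube-system | grep hcloud-csi
```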
After the rollout:
- Delete the now unused secret `hcloud-csi` in the namespace `kube-system`:

```sh
$ kubectl delete secret -n kube-system hcloud-csi
```
- Remove old resources that have been replaced:

```sh
$ kubectl delete clusterrolebinding hcloud-csi
$ kubectl delete clusterrole hcloud-csi
$ kubectl delete serviceaccount -n kube-system hcloud-csi
```
Root servers can be part of the cluster, but the CSI plugin doesn't work there. Label the root servers as follows so the DaemonSet skips those nodes:

```sh
kubectl label nodes <node name> instance.hetzner.cloud/is-root-server=true
```
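To check which nodes carry the label, a quick sketch:

```sh
# List all nodes that are marked as root servers and will be skipped by the node DaemonSet
kubectl get nodes -l instance.hetzner.cloud/is-root-server=true
```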
We aim to support the latest three versions of Kubernetes. After a new Kubernetes version has been released, we stop supporting the oldest previously supported version. This does not necessarily mean that the CSI driver stops working with that version, but we no longer test against it and will not fix bugs that only affect unsupported versions.
Requirements: Docker

The core operations like publishing and resizing can be tested locally with Docker:

```sh
go test $(go list ./... | grep integrationtests) -v
```
The Hetzner Cloud CSI Driver was tested against the official k8s e2e tests for a specific version. You can run the tests with the following commands. Keep in mind that these tests run on real cloud servers and will create volumes that will be billed.
Test Server Setup:
1x CPX21 (Ubuntu 18.04)
Requirements: Docker and Go 1.17
- Configure your environment correctly:

```sh
export HCLOUD_TOKEN=<specify a project token>
export K8S_VERSION=1.21.0 # The specific (latest) version is needed here
export USE_SSH_KEYS=key1,key2 # Names or IDs of your SSH keys within the Hetzner Cloud; the servers will be accessible with those keys
```
- Run the tests:

```sh
go test $(go list ./... | grep e2etests) -v -timeout 60m
```

The tests will now run; this will take a while (~30 min).
If the tests fail, make sure to clean up the project with the Hetzner Cloud Console or the hcloud CLI.
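A rough cleanup sketch using the hcloud CLI (double-check the listed resources before deleting, and replace the names/IDs with your own):

```sh
# List leftover test servers and volumes in the project
hcloud server list
hcloud volume list
# Delete leftovers by name or ID (volumes must be detached before deletion)
hcloud server delete <server>
hcloud volume detach <volume>
hcloud volume delete <volume>
```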
This repository provides a skaffold configuration to easily deploy / debug this driver on demand.
- Install hcloud-cli
- Install k3sup
- Install cilium
- Install docker
You will also need to set a `HCLOUD_TOKEN` in your shell session.
- Create an SSH key

Assuming you have already created an SSH key via `ssh-keygen`:

```sh
hcloud ssh-key create --name ssh-key-csi-test --public-key-from-file ~/.ssh/id_rsa.pub
```
- Create a server

```sh
hcloud server create --name csi-test-server --image ubuntu-20.04 --ssh-key ssh-key-csi-test --type cx11
```
- Set up k3s on this server

```sh
k3sup install --ip $(hcloud server ip csi-test-server) --local-path=/tmp/kubeconfig --cluster --k3s-channel=v1.23 --k3s-extra-args='--no-flannel --no-deploy=servicelb --no-deploy=traefik --disable-cloud-controller --disable-network-policy --kubelet-arg=cloud-provider=external'
```

- The kubeconfig will be created under `/tmp/kubeconfig`
- The Kubernetes version can be configured via `--k3s-channel`
- Switch your kubeconfig to the test cluster

```sh
export KUBECONFIG=/tmp/kubeconfig
```
- Install cilium + test your cluster

```sh
cilium install
```
- Add your secret to the cluster

```sh
kubectl -n kube-system create secret generic hcloud --from-literal="token=$HCLOUD_TOKEN"
```
- Install hcloud-cloud-controller-manager + test your cluster

```sh
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
kubectl config set-context default
kubectl get node -o wide
```
- Deploy your CSI driver

```sh
SKAFFOLD_DEFAULT_REPO=naokiii skaffold dev
```

`docker login` is required, as skaffold uses your own Docker Hub repository (set via `SKAFFOLD_DEFAULT_REPO`) to push the CSI image.

On code change, skaffold will rebuild the image and deploy it to your test cluster again. It also prints all logs from the CSI components.
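If you want a one-off deployment instead of the watch loop, skaffold also supports that; a minimal sketch (same `SKAFFOLD_DEFAULT_REPO` assumption as above):

```sh
# Build, push and deploy once without watching for changes
SKAFFOLD_DEFAULT_REPO=<your-dockerhub-user> skaffold run
# Remove the deployed resources again
SKAFFOLD_DEFAULT_REPO=<your-dockerhub-user> skaffold delete
```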
MIT license