Daemonset crashloopback in openshift #404
Comments
Hey, this was changed in #269 so that we could remove access to the Hetzner Cloud API from the DaemonSet. We would prefer to keep the DaemonSet ("node" binary) as small as possible, so adding back access to the API is not what we want. @samcday Do you have an idea how we can solve this for OpenShift, where access to the metadata service is blocked?
Oh, I forgot to mention: the Server ID and Location, the two fields retrieved from the Metadata Service, are used in the response (see lines 194 to 205 in cbb7750).
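For context, those two values presumably end up in the CSI node-info response: the server ID becomes the node ID and the location becomes a topology segment. A minimal sketch, using simplified stand-in types and the `csi.hetzner.cloud/location` topology key (both are assumptions for illustration, not the driver's actual code):

```go
package main

import "fmt"

// nodeGetInfoResponse is a hypothetical, simplified mirror of the fields the
// node service fills from the metadata service.
type nodeGetInfoResponse struct {
	NodeID   string            // Hetzner Cloud server ID
	Segments map[string]string // topology segments, e.g. the server's location
}

// buildNodeInfo shows how the two metadata values map into the response.
func buildNodeInfo(serverID int64, location string) nodeGetInfoResponse {
	return nodeGetInfoResponse{
		NodeID:   fmt.Sprintf("%d", serverID),
		Segments: map[string]string{"csi.hetzner.cloud/location": location},
	}
}

func main() {
	info := buildNodeInfo(12345, "fsn1")
	fmt.Println(info.NodeID, info.Segments["csi.hetzner.cloud/location"]) // prints: 12345 fsn1
}
```

If either value is unavailable (as on OpenShift, where the metadata service is blocked), the node service cannot build this response, which is why the pods crash.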
Hm. Tricky one. My original hope was to use k8s Node metadata as the source of truth for this, thus tying csi-driver to hccm. But of course that violates the CSI abstraction and won't work for other container orchestrators. Ultimately, without assuming any access to a control plane / orchestrator API of any kind, the only way to determine this information from a particular node is to fetch it from the metadata service, or to fall back to statically provided information. ... Or we just add back the
One other somewhat hacky idea: we could do the metadata API lookup in a small initContainer that uses
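A rough sketch of what such an init container could look like, assuming it can reach the metadata service (for example via hostNetwork) and hands the values to the main container through a shared emptyDir. The image, file paths, and metadata endpoints shown here are assumptions, not a tested configuration:

```yaml
# Sketch only: image, volume layout, and metadata endpoints are assumptions.
initContainers:
  - name: fetch-metadata
    image: curlimages/curl
    command:
      - sh
      - -c
      - |
        curl -sf http://169.254.169.254/hetzner/v1/metadata/instance-id > /metadata/server-id
        curl -sf http://169.254.169.254/hetzner/v1/metadata/availability-zone > /metadata/location
    volumeMounts:
      - name: metadata
        mountPath: /metadata
volumes:
  - name: metadata
    emptyDir: {}
```

The node container would then read the two files at startup instead of calling the metadata service itself.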
Perhaps this is something that can be done only for OpenShift through the Helm chart?
Yes, that sounds good 👍 Or even more generally: just a thing that you can opt into through

That said, it might just be better to always do it that way and keep the number of different deployment modes to a minimum. With such an approach, the node binary could drop all notion of the HC API or metadata service and require that all necessary metadata/topology info is injected through env. Some of this env comes from the downward API; the rest comes from this proposed init container.
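The downward-API half of that env injection is standard Kubernetes; for example, passing the node name into the container looks like this (the variable name KUBE_NODE_NAME matches the one mentioned later in this thread):

```yaml
env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```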
I have the same issue in OpenShift.
I solved it in csi-driver/deploy/kubernetes/hcloud-csi.yml (line 225 in dfe6183)
and added hostNetwork: true to the DaemonSet on line 298 (csi-driver/deploy/kubernetes/hcloud-csi.yml, line 298 in dfe6183).
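For readers applying the same workaround: the change described above amounts to setting hostNetwork on the DaemonSet's pod template. The field placement is standard Kubernetes, though the surrounding manifest is abbreviated here:

```yaml
apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    spec:
      # Pods share the node's network namespace, so they can reach the
      # metadata service at 169.254.169.254 even when the pod network blocks it.
      hostNetwork: true
```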
This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.
Hello,
I have an OpenShift cluster and I am trying to use the hetznercloud csi-driver. However, all of the DaemonSet's pods are in CrashLoopBackOff state. Here are the logs:
I guess this is related to what is described in #143.
That issue was closed because version 1.6.0 attempts to use the environment variables HCLOUD_SERVER_ID or KUBE_NODE_NAME with a call to the HCloudClient before falling back to the MetadataClient.
However, v2.2.0 doesn't do that anymore, so I guess the issue is back.
Can you help me on this?
Regards,
Tarik