nfs-client
22:39:01 CST [AddonModule] Install addon nfs-client
Release "nfs-client" does not exist. Installing it now.
22:40:36 CST message: [LocalHost]
looks like "https://charts.kubesphere.io/main" is not a valid chart repository or cannot be reached: Get "https://charts.kubesphere.io/main/index.yaml": read tcp 10.10.10.201:47604->104.21.80.188:443: read: connection reset by peer
22:40:36 CST failed: [LocalHost]
22:40:36 CST message: [LocalHost]
Pipeline[InstallAddonsPipeline] execute failed: Module[AddonModule] exec failed:
failed: [LocalHost] [InstallAddon] exec failed after 1 retries: looks like "https://charts.kubesphere.io/main" is not a valid chart repository or cannot be reached: Get "https://charts.kubesphere.io/main/index.yaml": read tcp 10.10.10.201:47604->104.21.80.188:443: read: connection reset by peer
22:40:36 CST failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[AddonsModule] exec failed:
failed: [LocalHost] [InstallAddons] exec failed after 1 retries: Pipeline[InstallAddonsPipeline] execute failed: Module[AddonModule] exec failed:
failed: [LocalHost] [InstallAddon] exec failed after 1 retries: looks like "https://charts.kubesphere.io/main" is not a valid chart repository or cannot be reached: Get "https://charts.kubesphere.io/main/index.yaml": read tcp 10.10.10.201:47604->104.21.80.188:443: read: connection reset by peer
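The failure is a network error ("connection reset by peer") while fetching the chart index from https://charts.kubesphere.io/main. One possible workaround sketch, assuming KubeKey's addon `sources.chart` fields (`name`, `path`, `valuesFile`) accept a pre-downloaded local chart — the file paths below are hypothetical:

```yaml
# Hypothetical excerpt of the KubeKey cluster config.
# Assumes the chart was downloaded beforehand on a machine with access, e.g.:
#   helm pull nfs-client-provisioner --repo https://charts.kubesphere.io/main
addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        # Point at the local chart archive instead of the unreachable repo:
        path: /root/nfs-client-provisioner.tgz
        valuesFile: /root/nfs-client-values.yaml
```

With a local chart path, the addon install no longer depends on reaching charts.kubesphere.io from the cluster node.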
Which version of KubeKey has the issue?
kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.7", GitCommit:"da475c670813fc8a4dd3b1312aaa36e96ff01a1f", GitTreeState:"clean", BuildDate:"2024-10-30T09:41:20Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
What is your OS environment?
Anolis OS 8.9 (阿里龙蜥 8.9)
KubeKey config file
A clear and concise description of what happened.
Installing the nfs-client addon fails: the Helm chart repository https://charts.kubesphere.io/main cannot be reached ("connection reset by peer" while fetching index.yaml), so the CreateClusterPipeline aborts. See the full log output above.
Relevant log output
No response
Additional information
The NFS configuration files are as follows.
A StorageClass was created:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              ## address of your own NFS server
              value: 10.10.10.100
            - name: NFS_PATH
              ## directory exported by the NFS server
              value: /home/k8s-nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.100
            path: /home/k8s-nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
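Once the provisioner Pod is running, dynamic provisioning can be verified with a small test claim against the StorageClass above (a sketch; the name `test-claim` and the requested size are hypothetical):

```yaml
# Test PVC: should be dynamically bound by the nfs-client provisioner.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

After `kubectl apply -f test-claim.yaml`, `kubectl get pvc test-claim` should show the claim as Bound and a matching subdirectory should appear under the exported NFS path; if it stays Pending, check the provisioner Pod's logs.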