
Request for an example that uses one single Flux instance to deploy workloads on remote clusters with a kubeconfig #82

principekiss opened this issue Nov 14, 2022 · 8 comments

Comments

@principekiss

principekiss commented Nov 14, 2022

Hi,
I am trying to configure multiple K8s clusters via a single Flux instance and a single repo with the following process. Please note cluster provisioning is handled outside of Flux and without using CAPI.

I use Terraform to create a Gitlab repo, install flux on the management cluster, and sync with the repo.
Next, I clone the repo and add sync files for it to deploy workloads on the remote cluster using a KUBECONFIG secret of that cluster.

It is a bit hard to find examples and detailed documentation for this particular use case; could you create one?

Thank you very much!

@kingdonb
Member

Greetings,

Thanks for the feedback! We can add to this example repo, but FYI there is a page in the Flux docs that covers this already:

https://fluxcd.io/flux/components/kustomize/kustomization/#remote-clusters--cluster-api

It discusses Cluster-API, but there is a step which explains how to convert your kubeconfig on disk into a secret in the appropriate format. Does this doc help with solving your issue?
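For reference, that conversion step produces a secret in the shape below. This is only a sketch: the secret name is illustrative, but the data key must be "value" or "value.yaml" for kustomize-controller to pick it up.

```yaml
# Sketch of a kubeconfig secret in the format kustomize-controller expects.
# It must live in the same namespace as the Kustomization that references it.
apiVersion: v1
kind: Secret
metadata:
  name: kubeconfig-staging   # illustrative name
  namespace: flux-system
stringData:
  value.yaml: |
    # paste the remote cluster's kubeconfig file contents here
```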

@stefanprodan
Member

Does this doc help with solving your issue?

@kingdonb it doesn't. None of the example repos work as-is for remote clusters; a different structure is needed, which may not be obvious until you try this example repo and everything starts to fail, since you can't apply HelmReleases or any other Flux object on the remote cluster.

@principekiss
Author

principekiss commented Nov 17, 2022

I was able to use the example repo by doing the following:

.
├── apps
│   ├── base
│   │   └── podinfo
│   │       ├── kustomization.yaml
│   │       ├── namespace.yaml
│   │       └── release.yaml
│   └── staging
│       ├── kustomization.yaml
│       └── podinfo-values.yaml
├── clusters
│   ├── management
│   │   ├── flux-system
│   │   │   ├── gotk-components.yaml
│   │   │   ├── gotk-sync.yaml
│   │   │   └── kustomization.yaml
│   │   ├── production-sync.yaml          # points to ./clusters/remote/production and creates a source for production
│   │   └── staging-sync.yaml             # points to ./clusters/remote/staging and creates a source for staging
│   └── remote
│       ├── production
│       │   ├── flux-system
│       │   │   └── gotk-components.yaml   # only install Flux
│       │   └── infrastructure.yaml        # contains the KubeConfig and points to ./infrastructure
│       └── staging
│           ├── flux-system
│           │   └── gotk-components.yaml   # only install Flux
│           └── infrastructure.yaml        # contains the KubeConfig and points to ./infrastructure
├── infrastructure                         # unchanged, just removed Nginx
│   ├── kustomization.yaml
│   ├── redis
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   ├── namespace.yaml
│   │   ├── release.yaml
│   │   └── values.yaml
│   └── sources
│       ├── bitnami.yaml
│       ├── kustomization.yaml
│       └── podinfo.yaml
└── README.md

clusters/management/staging-sync.yaml

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: staging
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: master
  secretRef:
    name: flux-system
  url: <GIT-URL>
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: staging
  namespace: flux-system
spec:
  interval: 10m0s
  timeout: 2m10s
  path: ./clusters/remote/staging
  prune: true
  sourceRef:
    kind: GitRepository
    name: staging

clusters/remote/staging/infrastructure.yaml

---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 2m0s
  path: ./infrastructure
  prune: true
  validation: client
  sourceRef:
    kind: GitRepository
    name: staging
  kubeConfig:
    secretRef:
      name: kubeconfig-staging

clusters/remote/staging/apps.yaml

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 2m0s
  dependsOn:
    - name: infrastructure
  path: ./apps/staging
  prune: true
  validation: client
  sourceRef:
    kind: GitRepository
    name: staging
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: podinfo
      namespace: podinfo
  kubeConfig:
    secretRef:
      name: kubeconfig-staging

@principekiss
Author

principekiss commented Dec 13, 2022


Is there a way to install all sources inside the management cluster and refer to them when deploying the Helm charts on the remote clusters?

├── apps
│   ├── base
│   │   ├── infra-tools
│   │   │   ├── deployment.yaml
│   │   │   ├── kafka-user.yaml
│   │   │   ├── kustomization.yaml
│   │   │   ├── pvc.yaml
│   │   │   └── service.yaml
│   │   ├── kustomization.yaml
│   │   ├── podinfo
│   │   │   ├── kustomization.yaml
│   │   │   ├── namespace.yaml
│   │   │   ├── release.yaml
│   │   │   └── wildcard-cert.yaml
│   │   └── uptime-kuma
│   │       ├── kustomization.yaml
│   │       ├── monitoring.yaml
│   │       └── release.yaml
│   ├── production
│   │   ├── kustomization.yaml
│   │   └── patches
│   │       └── podinfo-values.yaml
│   └── staging
│       ├── kustomization.yaml
│       └── patches
│           └── podinfo-values.yaml
├── clusters
│   ├── management
│   │   ├── flux-system
│   │   │   ├── gotk-components.yaml
│   │   │   ├── gotk-sync.yaml
│   │   │   └── kustomization.yaml
│   │   ├── infrastructure.yaml
│   │   └── staging-sync.yaml
│   ├── production
│   │   ├── apps.yaml
│   │   ├── flux-system
│   │   │   └── gotk-components.yaml
│   │   └── infrastructure.yaml
│   └── staging
│       ├── apps.yaml
│       ├── flux-system
│       │   └── gotk-components.yaml
│       └── infrastructure.yaml
├── infrastructure
│   ├── base
│   │   ├── calico
│   │   │   ├── kustomization.yaml
│   │   │   ├── kustomizeconfig.yaml
│   │   │   ├── namespace.yaml
│   │   │   ├── release.yaml
│   │   │   └── values.yaml
│   │   ├── cert-manager
│   │   │   ├── clusterissuers.yaml
│   │   │   ├── crds.yaml
│   │   │   ├── kustomization.yaml
│   │   │   ├── namespace.yaml
│   │   │   ├── release.yaml
│   │   │   └── wildcard.yaml
│   │   ├── external-dns
│   │   │   ├── deployment.yaml
│   │   │   ├── kustomization.yaml
│   │   │   ├── namespace.yaml
│   │   │   └── security.yaml
│   │   ├── kustomization.yaml
│   │   ├── kyverno
│   │   │   ├── crds.yaml
│   │   │   ├── kustomization.yaml
│   │   │   ├── kustomizeconfig.yaml
│   │   │   ├── namespace.yaml
│   │   │   ├── release.yaml
│   │   │   └── values.yaml
│   │   ├── monitoring
│   │   │   ├── kustomization.yaml
│   │   │   ├── kustomizeconfig.yaml
│   │   │   ├── namespace.yaml
│   │   │   ├── release.yaml
│   │   │   ├── thanos
│   │   │   │   ├── kustomization.yaml
│   │   │   │   ├── release.yaml
│   │   │   │   └── values.yaml
│   │   │   ├── thanos-objstore-config.yaml
│   │   │   ├── values.yaml
│   │   │   └── wildcard-cert.yaml
│   │   ├── redis
│   │   │   ├── kustomization.yaml
│   │   │   ├── kustomizeconfig.yaml
│   │   │   ├── namespace.yaml
│   │   │   ├── release.yaml
│   │   │   └── values.yaml
│   │   ├── reflector
│   │   │   ├── kustomization.yaml
│   │   │   └── release.yaml
│   │   ├── sealed-secrets
│   │   │   ├── crds.yaml
│   │   │   ├── kustomization.yaml
│   │   │   ├── kustomizeconfig.yaml
│   │   │   ├── release.yaml
│   │   │   └── values.yaml
│   │   ├── sources
│   │   │   ├── bitnami-full-index.yaml
│   │   │   ├── bitnami.yaml
│   │   │   ├── calico.yaml
│   │   │   ├── emberstack.yaml
│   │   │   ├── jetstack.yaml
│   │   │   ├── kustomization.yaml
│   │   │   ├── kyverno.yaml
│   │   │   ├── podinfo.yaml
│   │   │   ├── prometheus.yaml
│   │   │   ├── sealedsecrets.yaml
│   │   │   └── uptime-kuma.yaml
│   │   ├── stackgres
│   │   │   ├── kustomization.yaml
│   │   │   ├── namespace.yaml
│   │   │   └── release.yaml
│   │   ├── storage
│   │   │   ├── file-nfs.yaml
│   │   │   └── kustomization.yaml
│   │   └── strimzi
│   │       ├── crds.yaml
│   │       ├── kustomization.yaml
│   │       ├── namespace.yaml
│   │       └── release.yaml
│   ├── management
│   │   ├── kustomization.yaml
│   │   └── patches
│   │       ├── certmanager-kustomization.yaml
│   │       ├── monitoring-wildcard-cert.yaml
│   │       ├── prometheus-values.yaml
│   │       └── thanos-objectstore-config.yaml
│   ├── production
│   │   ├── kustomization.yaml
│   │   └── patches
│   │       ├── monitoring-wildcard-cert.yaml
│   │       ├── prometheus-values.yaml
│   │       └── thanos-objectstore-config.yaml
│   └── staging
│       ├── kustomization.yaml
│       └── patches
│           ├── monitoring-wildcard-cert.yaml
│           ├── prometheus-values.yaml
│           └── thanos-objectstore-config.yaml
└── scripts
    └── validate.sh

At the moment, sources are installed on every cluster that workloads are deployed to; only the infrastructure.yaml and apps.yaml sync files use a kubeconfig secret to target the remote clusters.

@principekiss
Author

principekiss commented Dec 13, 2022

Also, is there a way to patch a kustomization?

From ./infrastructure/management/kustomization.yaml, can I patch ./infrastructure/base/cert-manager/kustomization.yaml so that its resource list only deploys 2 resources?

ATM I tried the following:

# ./infrastructure/staging/patches/cert-manager-kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: cert-manager
metadata:
  name: cert-manager
resources:
  - clusterissuers.yaml
  - wildcard.yaml
# ./infrastructure/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../base/sources
  - ../base/external-dns
  - ../base/reflector
  - ../base/cert-manager
  - ../base/monitoring
patchesStrategicMerge:
  - patches/certmanager-kustomization.yaml
  - patches/monitoring-wildcard-cert.yaml
  - patches/thanos-objectstore-config.yaml
  - patches/prometheus-values.yaml

But I always get errors from the kustomize-controller when I omit metadata.name in the kustomization for cert-manager:

kustomize build failed: trouble configuring builtin PatchStrategicMergeTransformer with config: ` paths: - patches/certmanager-kustomization.yaml - - patches/monitoring-wildcard-cert.yaml - patches/thanos-objectstore-config.yaml - patches/prometheus-values.yaml `: missing metadata.name in object {{kustomize.config.k8s.io/v1beta1 Kustomization} {{ } map[] map[]}}

I would also like to do the same for sources, patching the sources kustomization so that only a few of them are deployed on the management cluster.

@kingdonb
Member

kingdonb commented Dec 15, 2022

@tuxicorn I'm very confused by your examples. It might help to explain that SIG-CLI's Kustomization "resource" is not actually a resource: it does not get applied to Kubernetes and it does not live in etcd, so it is not eligible for patching (which is also why it is not addressable by name). You cannot add metadata.name to this type of Kustomization; it is not a named resource.

There are some other issues: the "bases" keyword is deprecated, or at least you don't need it anymore; you can refer to other bases under "resources".

When you build overlays on other overlays, you can "patch in" additional resources that weren't in the parent base simply by adding them to the resources: list. In infrastructure/staging/patches/cert-manager-kustomization.yaml, try including the resources directly (the contents of clusterissuers.yaml and wildcard.yaml), or name those files directly as resources in the kustomization.yaml where you had them listed as patches.
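A minimal sketch of that approach, assuming the directory layout shown earlier in this thread: the staging overlay lists the bases it wants under resources (not the deprecated bases field) and reserves patchesStrategicMerge for actual resources, not for kustomization.yaml files.

```yaml
# infrastructure/staging/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # bases are referenced as resources; include only the ones this cluster needs
  - ../base/sources
  - ../base/external-dns
  - ../base/cert-manager
patchesStrategicMerge:
  # patches target real resources that the bases above emit
  - patches/monitoring-wildcard-cert.yaml
```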

If this advice doesn't jibe with your repository structure, maybe there is something we can do, but it will be much easier for someone following this issue if you can show the problem in a public repo where we can all reproduce the error and see what goes wrong, rather than piecing your structure together from snippets. Does any of this information help?

@principekiss
Author

principekiss commented Dec 15, 2022

Thank you for your answer @kingdonb.
For the bases keyword, I changed it to resources.

I resolved the cert-manager part by simply not including it, since in the end I don't need it on the management cluster, where it is already installed.

But for ./infrastructure/base/sources, my staging and production clusters need all the charts while my management cluster needs only a few of them, and I would like to find a way not to include all the sources on the management cluster:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
metadata:
  name: sources
resources:
  - jetstack.yaml
  - prometheus.yaml
  - emberstack.yaml
  - sealedsecrets.yaml
  - bitnami-full-index.yaml

Here is a similar repo-structure where I had to omit most workloads: https://github.com/tuxicorn/flux-remote-cluster

The source is still there though.

I would really appreciate some feedback to improve it and spot issues, and, if possible, a way to install sources on the management cluster only and deploy the Helm charts on the remote clusters from it.

@suseendare

Hello Flux Community team,

An example here for a remote cluster deployment would be very helpful. Kindly help.
