
feat(argo-cd): Add Volume Persistence support for argocd-repo-server #1648

Draft
wants to merge 5 commits into base: main

Conversation

@arielly-parussulo commented Nov 16, 2022

  • allow argocd-repo-server to be deployed as a StatefulSet.
  • allow the creation of Persistent Volumes for argocd-repo-server (see the hedged values sketch below).
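For illustration, a minimal values excerpt that opts into the new behaviour could look like the sketch below. Only repoServer.enableStatefulSet is taken from this PR's diff; the persistence keys are placeholders, since the final value names are still being discussed.

# Hypothetical values.yaml excerpt -- enableStatefulSet comes from this PR's diff,
# the persistence keys are illustrative placeholders only.
repoServer:
  enableStatefulSet: true        # render a StatefulSet instead of a Deployment
  persistence:                   # assumed shape, not final
    enabled: true
    size: 10Gi
    storageClassName: gp3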

Checklist:

  • I have bumped the chart version according to versioning
  • I have updated the documentation according to documentation
  • I have updated the chart changelog with all the changes that come with this pull request according to changelog.
  • Any new values are backwards compatible and/or have sensible defaults.
  • I have signed off all my commits as required by DCO.
  • My build is green (troubleshooting builds).

Changes are automatically published when merged to main. They are not published on branches.

Signed-off-by: arielly-parussulo <[email protected]>
Signed-off-by: arielly-parussulo <[email protected]>
Signed-off-by: arielly-parussulo <[email protected]>
@arielly-parussulo changed the title from "Add Volume Persistence support for argocd-repo-server" to "feat(argo-cd): Add Volume Persistence support for argocd-repo-server" on Nov 16, 2022
@@ -1,10 +1,17 @@
apiVersion: apps/v1
{{- if .Values.repoServer.enableStatefulSet }}
kind: StatefulSet
Member

I'm a little bit worried about this implementation because the repo server is stateless, not stateful, and typically runs with an HPA (we run 30+ replicas). This feature will break the HPA that targets the Deployment and will create a large number of PVCs.
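To make the concern concrete: an HPA scales whatever object its scaleTargetRef points at, so an autoscaler that references the Deployment stops matching once the chart renders a StatefulSet instead. A minimal sketch (resource names are illustrative, not taken from the chart):

# Sketch of the mismatch -- the HPA below only scales a Deployment; it would
# have to reference kind: StatefulSet when repoServer.enableStatefulSet is true.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: argocd-repo-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment             # no longer exists once the StatefulSet is rendered
    name: argocd-repo-server
  minReplicas: 2
  maxReplicas: 30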

@pdrastil
Member

Hi @arielly-parussulo, thanks for the contribution.

I personally believe that persistence should be a more comprehensive feature. For the reason mentioned above, the repo server would ideally use 1 shared PVC in ReadWriteMany mode so all replicas can share already downloaded repositories, instead of creating lots of PVCs that will eventually converge to the same content. I believe 1 disk per replica would work OK with an NFS backend and only a few replicas.

I can imagine that this should have at least the following:

  1. global.persistence with defaults for new PVCs
  2. component level overrides (mapping to volumes)
  3. option to choose between PVC (created, existing), in-memory emptyDir and ephemeral emptyDir
  4. must be compatible with current HPAs

I think this might be a good base, but the feature should be extended; a rough values sketch follows below.
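A rough sketch of what such a values layout could look like; every key name here is an assumption written down for discussion, not something implemented in this PR:

# Hypothetical values layout for the proposal above -- key names are
# assumptions for discussion only, not part of this PR.
global:
  persistence:
    enabled: false
    storageClass: ""
    accessModes: [ReadWriteMany]    # one shared volume across replicas
    size: 20Gi

repoServer:
  persistence:
    # type: one of "pvc" (created), "existingClaim", "emptyDir", "memory"
    type: emptyDir
    existingClaim: ""
    sizeLimit: 5Gi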

@arielly-parussulo
Author

(Quoting @pdrastil's comment above.)

Cool! I think I can add some of these features in this PR. I added the StatefulSet option because I saw some people mentioning it in this issue, and we ended up using it at my company, as we have fewer replicas and a monorepo that causes DiskPressure issues in our pods. So I still think it could be an option in the Helm chart.
But I agree with you about the other features. I will try to improve this PR to add more persistence features for argocd-repo-server.

@pdrastil
Member

Please also go through this thread argoproj/argo-cd#7927, as this chart mirrors what's in the upstream.

@mkilchhofer marked this pull request as draft on December 1, 2022 07:15
@mkilchhofer added the awaiting-upstream label (Is waiting for a change upstream to be completed before it can be merged) on Dec 1, 2022
@Gianluca755

Sorry for asking @pdrastil, is it possible to have this feature even if it's still in beta? @arielly-parussulo hasn't pushed in 3 weeks. I can contribute if needed. Thanks!

@github-actions

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@pierluigilenoci

Any news about this?

@mkilchhofer linked an issue on Feb 14, 2023 that may be closed by this pull request
@argoproj deleted a comment from the github-actions bot on Apr 14, 2023
@mkilchhofer added the on-hold label (Issues or Pull Requests with this label will never be considered stale) and removed the no-pr-activity label on Apr 14, 2023
@zswanson
Contributor

For the reason mentioned above, the repo server would ideally use 1 shared PVC in ReadWriteMany mode

Just to note, ReadWriteMany isn't well supported by cloud vendors when pods are on different nodes. Neither GKE nor EKS allows ReadWriteMany when you're using PD or EBS as the volume type.
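For reference, a shared cache claim for the approach above would have to request the ReadWriteMany access mode, which block-storage classes backed by EBS or GCE PD cannot satisfy; it needs a file-backed provisioner such as EFS (EKS) or Filestore (GKE). A hedged sketch, with illustrative names:

# Sketch of a shared claim for the repo server cache (names are illustrative).
# ReadWriteMany requires a file-backed provisioner (e.g. EFS CSI, Filestore CSI);
# EBS and GCE PD volumes only offer ReadWriteOnce.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: argocd-repo-server-cache
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc        # assumed EFS-backed StorageClass
  resources:
    requests:
      storage: 20Gi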

@pierluigilenoci

@pdrastil using ReadWriteMany necessarily imposes the use of some form of NFS (for example, EFS on AWS) with a significant reduction in performance (on Azure, for example, SMB is used, which has embarrassing performance), as well as significant limitations for clusters that operate multi-AZ.
One day it will be a concrete option, but today it is only a good theoretical idea that collides with the limits of the various cloud providers.

@clement94310

Hello, we also need a PVC on our repo server. Does anyone know if this PR will be merged soon?

Labels
argo-cd, awaiting-upstream, on-hold, size/M
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Argo CD: PersistentVolumeClaims for volumes
7 participants