feat(argo-cd): Add Volume Persistence support for argocd-repo-server #1648
base: main
Conversation
Signed-off-by: arielly-parussulo <[email protected]>
@@ -1,10 +1,17 @@
 apiVersion: apps/v1
+{{- if .Values.repoServer.enableStatefulSet }}
+kind: StatefulSet
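For context, the diff gates the rendered workload kind on a single values flag. A minimal sketch of the full pattern, assuming the template falls back to Deployment when the flag is unset (only the `if` branch is visible in this hunk):

```yaml
# Sketch of the kind toggle; the else branch is assumed, not shown in the diff.
apiVersion: apps/v1
{{- if .Values.repoServer.enableStatefulSet }}
kind: StatefulSet
{{- else }}
kind: Deployment
{{- end }}
```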
I'm a little bit worried about this implementation because the repo server is stateless, not stateful, and it typically runs with an HPA (we run 30+ replicas). This feature will break an HPA that targets the Deployment and will create a large number of PVCs.
Hi @arielly-parussulo, thanks for the contribution. I personally believe persistence should be a more complex feature. For the reason mentioned above, the repo server would ideally use a single shared PVC in ReadWriteMany mode, so all replicas can share already-downloaded repositories instead of creating lots of PVCs that eventually converge to the same content. I believe one disk per replica would work OK with an NFS backend and only a few replicas. I can imagine this would need at least a few additional pieces to be complete. I think this might be a good base, but the feature should be extended.
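As a rough illustration of the shared-PVC idea, a manifest might look like the following. This is hypothetical and not part of the PR; the storage class name is a placeholder and must point at a backend that supports ReadWriteMany:

```yaml
# Hypothetical shared cache volume mounted by every repo-server replica.
# Requires an RWX-capable storage class (e.g. an NFS-backed provisioner).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: argocd-repo-server-cache
spec:
  accessModes:
    - ReadWriteMany              # one volume shared across all replicas
  storageClassName: nfs-client   # placeholder RWX-capable class
  resources:
    requests:
      storage: 50Gi
```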
Cool! I think I can add some of these features in this PR. I added the StatefulSet option because I saw some people asking for it in this issue, and we ended up using it at my company, since we have fewer replicas and a monorepo that was causing DiskPressure issues in our pods. So I still think it could be an option in the Helm chart.
Please also go through this thread argoproj/argo-cd#7927, as this chart mirrors what's in the upstream.
Sorry for asking @pdrastil, but is it possible to have this feature even if it's still in beta? @arielly-parussulo hasn't pushed in 3 weeks. I can contribute if needed. Thanks
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Any news about this?
Just to note, ReadWriteMany isn't well supported by cloud vendors when pods are on different nodes. Neither GKE nor EKS allows ReadWriteMany when you're using PD or EBS as the volume type.
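For reference, getting RWX on EKS typically means EFS rather than EBS. A sketch of an RWX-capable StorageClass using the AWS EFS CSI driver; the file system ID below is a placeholder:

```yaml
# Sketch: dynamic RWX provisioning on EKS via the EFS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-rwx
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # provision a per-PVC EFS access point
  fileSystemId: fs-0123456789abcdef0   # placeholder EFS file system ID
  directoryPerms: "700"
```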
@pdrastil using |
hello
Checklist:
Changes are automatically published when merged to `main`. They are not published on branches.