some adoptions to make newer kubestack work on AWS #1
base: master
Conversation
…hen it is needed, but a variable needs to be defined
I've just reviewed all the changes, awesome work! I would like to discuss a few things before merging and get some info about your use case(s):

- Multi-availability zone
- Version
- Discovery URL
- CoreOS version

What do you think?
Hi @gyulavoros! First of all, thanks for the review. Please find my comments below:
Yes, that's true. But in my use case high availability was the most important part, so I wanted to distribute the cluster across multiple AZs. Here we give the operators the option of deploying across multiple AZs or only one. In the multi-AZ case they can either switch to S3 or synchronize the volumes.
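A minimal sketch of what that option could look like in Terraform (variable and resource names are illustrative, not taken from this PR): a single entry in the list keeps everything in one zone, while several entries spread the subnets, and with them the instances, across zones.

```hcl
# Illustrative only: "kube_vpc" and "kube_subnet" are assumed names.
variable "availability_zones" {
  # e.g. ["eu-west-1a", "eu-west-1b", "eu-west-1c"] for a multi-AZ deployment
  default = ["eu-west-1a"]
}

resource "aws_vpc" "kube_vpc" {
  cidr_block = "10.0.0.0/16"
}

# One subnet per requested AZ; instances created with the same count pattern
# end up distributed round-robin across the zones.
resource "aws_subnet" "kube_subnet" {
  count             = "${length(var.availability_zones)}"
  vpc_id            = "${aws_vpc.kube_vpc.id}"
  availability_zone = "${element(var.availability_zones, count.index)}"
  cidr_block        = "${cidrsubnet(aws_vpc.kube_vpc.cidr_block, 8, count.index)}"
}
```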
Sure. I made the MR when version 1.2.2 was the most recent one.
Well, my problem was that I had to regenerate the discovery URL or set it manually, so I used a shell script as a helper. That way I can apply commands and reuse the previously set discovery URL without touching the variables script at all. This is a non-intrusive way of adding things; nobody is forced to use it.
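As a sketch of that non-intrusive approach (the variable name, template path, and wrapper script below are hypothetical, not the actual helper from this PR), the discovery URL can be declared without a default and injected from the environment via `TF_VAR_...`, so the committed variables file never changes:

```hcl
# Hypothetical variable; it has to be supplied from outside, e.g. by a small
# wrapper script that caches the URL between runs:
#
#   URL_FILE=.etcd_discovery_url
#   [ -s "$URL_FILE" ] || curl -s "https://discovery.etcd.io/new?size=3" > "$URL_FILE"
#   TF_VAR_etcd_discovery_url="$(cat "$URL_FILE")" terraform apply
#
variable "etcd_discovery_url" {}

# The template path is an assumption for the sake of the example.
data "template_file" "master_cloud_config" {
  template = "${file("cloud-config/master.yaml.tpl")}"

  vars {
    etcd_discovery_url = "${var.etcd_discovery_url}"
  }
}
```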
I don't remember why, but I had to upgrade to a newer CoreOS version. I was not happy with the beta channel either, but I needed a newer CoreOS; I can't recall exactly why. We can try running the script with the release version and check whether everything works fine...
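One way to pin to the stable channel instead of beta would be to look the AMI up at apply time. This is a sketch, not what the PR does; the owner ID below is assumed to be the official CoreOS image account and should be verified for your region.

```hcl
# Sketch: pick the most recent CoreOS stable HVM image instead of a hard-coded
# beta AMI. The owner ID is an assumption; verify it before use.
data "aws_ami" "coreos_stable" {
  most_recent = true
  owners      = ["595879546273"]

  filter {
    name   = "name"
    values = ["CoreOS-stable-*-hvm"]
  }
}

# Instances or launch configurations would then reference
# "${data.aws_ami.coreos_stable.id}" instead of a fixed AMI variable.
```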
I've just found out that Kubernetes is able to track multiple zones and handle persistent volumes automatically: http://kubernetes.io/docs/admin/multiple-zones/. However, it needs the nodes to be labelled. The description uses https://get.k8s.io with the corresponding flag. Besides that, the original CoreOS guide has been updated recently; I'll review the changes and try to update.
Actually, I tested the round-robin creation and Kubernetes automatically labelled the nodes. To be honest, I don't understand why this PR is not accepted, as your master branch is pretty broken right now. At least at this point, because Kubernetes requires each AWS resource to have a cluster ID tag like this: https://github.com/ovanes/kubestack-aws/blob/fd8671fa9f141c8e570af3520fd07fb442e0fd5c/kube_cluster.tf#L146 Without it, the cluster's behaviour is unstable and broken. It took me a while to find that out and fix it. So your project, as it stands now, is just broken.
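For reference, such tagging looks roughly like this in Terraform (a sketch only; the resource names and the referenced variables are illustrative, not taken from the linked file). The `KubernetesCluster` tag key is what the AWS cloud provider of that Kubernetes generation scans for, and every instance, subnet, security group, and ELB belonging to the cluster should carry the same value.

```hcl
# Illustrative only: "kube_worker", "worker_ami", and "kube_subnet" are assumptions.
variable "cluster_id" {
  default = "kubestack"
}

resource "aws_instance" "kube_worker" {
  ami           = "${var.worker_ami}"            # assumed to be defined elsewhere
  instance_type = "t2.medium"
  subnet_id     = "${aws_subnet.kube_subnet.id}" # assumed to be defined elsewhere

  tags {
    Name              = "kube-worker"
    # The AWS cloud provider groups resources by this tag; without it the
    # controller cannot reliably identify the cluster's resources.
    KubernetesCluster = "${var.cluster_id}"
  }
}
```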
```diff
@@ -161,30 +233,14 @@ resource "aws_elb" "kube_master" {

   # openssl
   provisioner "local-exec" {
-    command = "sed 's|<MASTER_HOST>|${self.dns_name}|g' openssl/openssl.cnf.tpl > openssl/certs/openssl.cnf && cd openssl && source generate_certs.sh"
+    command = "mkdir -p openssl/certs && sed 's|<MASTER_HOST>|${self.dns_name}|g' openssl/openssl.cnf.tpl > openssl/certs/openssl.cnf && cd openssl && source generate_certs.sh"
```
+100
Master breaks on me right here 💔
No description provided.