
some adoptions to make newer kubestack work on AWS #1

Open
wants to merge 10 commits into master

Conversation

@ovanes commented May 2, 2016

No description provided.

@gyulavoros (Member)

I've just reviewed all the changes, awesome work! I would like to discuss a few things before merging and get some info about your use-case(s).

Multi-availability zone
I have a concern about a multi-availability-zone setup: as far as I know, Kubernetes on AWS doesn't support persistent volumes across availability zones (we use PVs a lot). AWS volumes have to be created in a specific availability zone, and since Kubernetes moves pods between worker instances, potentially across AZs, we would get errors indicating that PVs failed to mount.
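
For illustration, a minimal Terraform sketch of the constraint being described (not code from this PR; resource name and values are placeholders): an EBS volume is always created in one specific availability zone, so a persistent volume backed by it can only be mounted in that zone.

```hcl
# Sketch only: an EBS volume is pinned to a single availability zone at
# creation time, so a Kubernetes PV backed by this disk can only be mounted
# by worker nodes running in eu-central-1a.
resource "aws_ebs_volume" "pv_disk" {
  availability_zone = "eu-central-1a" # required; the volume cannot cross AZs
  size              = 10              # size in GiB
}
```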

Version
Currently v1.2.4 is the latest Kubernetes release; we should use that as the default.

Discovery URL
I do not completely understand the usage of the tf command and the motivation behind it. I would like to keep things as simple as possible and reduce bash scripts to a minimum. Can you give me more detail about it?

CoreOS version
I would prefer to stick with the CoreOS Stable channel unless we have an absolutely necessary reason not to. The latest CoreOS stable version is 899.17.0; we should update all the AMIs accordingly (eu-central-1 and the others).

What do you think?

@ovanes (Author) commented May 9, 2016

Hi @gyulavoros!

First of all thanks for the review. Please find below my comments:

> Multi-availability zone
> I have a concern about a multi-availability-zone setup: as far as I know, Kubernetes on AWS doesn't support persistent volumes across availability zones (we use PVs a lot). AWS volumes have to be created in a specific availability zone, and since Kubernetes moves pods between worker instances, potentially across AZs, we would get errors indicating that PVs failed to mount.

Yes, that's true. But in my use case high availability was the most important requirement, so I want to distribute the cluster across multiple AZs. Here we give DevOps the option to deploy across multiple AZs or only one; in the multi-AZ case they can switch to S3 or synchronize the volumes themselves.

> Version
> Currently v1.2.4 is the latest Kubernetes release; we should use that as the default.

Sure. I opened the MR when version 1.2.2 was the most recent one.

> Discovery URL
> I do not completely understand the usage of the tf command and the motivation behind it. I would like to keep things as simple as possible and reduce bash scripts to a minimum. Can you give me more detail about it?

Well, my problem was that I had to regenerate the discovery URL or set it manually, so I used a shell script as a helper. That way I can run commands and reuse the previously set discovery URL without touching the variables script at all. It is a non-intrusive addition: nobody is forced to use the tf script, they can rely on Terraform commands directly.
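
For context, a minimal sketch of what such a wrapper can do, assuming a cache file and variable name that are purely illustrative (the actual tf script in the PR may differ): fetch a fresh etcd discovery URL once, cache it, and inject it into subsequent Terraform runs.

```bash
#!/usr/bin/env bash
# Illustrative sketch of a discovery-URL wrapper; file and variable names
# are hypothetical, not taken from the PR.
set -e

CACHE_FILE=".discovery_url"

# Fetch a new etcd discovery URL only if none is cached yet.
if [ ! -s "$CACHE_FILE" ]; then
  curl -s "https://discovery.etcd.io/new?size=3" > "$CACHE_FILE"
fi

# Forward the subcommand (e.g. plan, apply) to terraform and inject the
# cached URL, so the variables file never needs to be edited by hand.
terraform "$@" -var "discovery_url=$(cat "$CACHE_FILE")"
```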

> CoreOS version
> I would prefer to stick with the CoreOS Stable channel unless we have an absolutely necessary reason not to. The latest CoreOS stable version is 899.17.0; we should update all the AMIs accordingly (eu-central-1 and the others).

I don't remember exactly why, but I had to upgrade to a newer CoreOS version. I wasn't happy with the beta channel either, but I needed a newer CoreOS; I can't reconstruct the exact reason. We can try running the script with the stable release and check whether everything works fine...

@gyulavoros (Member)

I've just found out that Kubernetes is able to track multiple zones and handle persistent volumes automatically: http://kubernetes.io/docs/admin/multiple-zones/. However, it needs the nodes to be labelled. The description uses https://get.k8s.io with the flag MULTIZONE=1; I'll look into how it works internally.
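
For reference, the zone awareness described there shows up as node labels; a quick way to check them (label keys as documented for Kubernetes 1.2-era multi-zone support, values illustrative):

```bash
# Illustrative: in a multi-zone cluster each node carries region/zone labels,
# which the scheduler uses to place pods next to their zone-bound volumes.
kubectl get nodes --show-labels
# expected labels per node include, e.g.:
#   failure-domain.beta.kubernetes.io/region=eu-central-1
#   failure-domain.beta.kubernetes.io/zone=eu-central-1a
```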

Besides that, the original CoreOS guide has been updated recently; I'll review the changes and try to update kubestack-aws accordingly.

@ovanes (Author) commented Jun 3, 2016

> However, it needs the nodes to be labelled.

Actually, I tested the round-robin creation and Kubernetes automatically labelled the nodes.

To be honest, I don't understand why this PR is not accepted, as your master branch is pretty broken right now. Kubernetes requires every AWS resource to carry a cluster ID tag, like this: https://github.com/ovanes/kubestack-aws/blob/fd8671fa9f141c8e570af3520fd07fb442e0fd5c/kube_cluster.tf#L146. Without this tag the cluster's behaviour is unstable and broken. It took me a while to find that out and fix it. So your project, as it is now, is just broken.
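
For illustration, a sketch of the kind of tag being referred to, assuming a placeholder resource and cluster name (the linked line in the fork shows the actual usage): the Kubernetes AWS integration of that era identified cluster resources by a shared KubernetesCluster tag.

```hcl
# Sketch: resource names and the tag value are placeholders. Every AWS
# resource the cluster touches needs the shared cluster ID tag, otherwise
# the Kubernetes AWS integration cannot associate it with the cluster.
resource "aws_subnet" "kube_subnet" {
  vpc_id     = "${aws_vpc.kube_vpc.id}"
  cidr_block = "10.0.1.0/24"

  tags {
    KubernetesCluster = "kubestack"
  }
}
```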

```diff
@@ -161,30 +233,14 @@ resource "aws_elb" "kube_master" {

   # openssl
   provisioner "local-exec" {
-    command = "sed 's|<MASTER_HOST>|${self.dns_name}|g' openssl/openssl.cnf.tpl > openssl/certs/openssl.cnf && cd openssl && source generate_certs.sh"
+    command = "mkdir -p openssl/certs && sed 's|<MASTER_HOST>|${self.dns_name}|g' openssl/openssl.cnf.tpl > openssl/certs/openssl.cnf && cd openssl && source generate_certs.sh"
```


+100

Master breaks on me right here 💔
