Documentation on external DNS seems out of date #110
@robinmonjo The easiest way is to have the DNS record created automatically by setting the relevant option in cluster.yaml. You can also manually create a CNAME record in Route53 for the external LB. You don't need to modify the security group.
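If you go the manual route, the CNAME can be created with the AWS CLI. A minimal sketch, assuming placeholder values for the hosted zone ID, record name, and ELB DNS name:

```bash
# Hedged example: manually point a CNAME at the controller ELB in Route53.
# Z1EXAMPLE, kube.example.com and the ELB DNS name below are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "kube.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "my-controller-elb-1234567890.eu-west-1.elb.amazonaws.com"}]
      }
    }]
  }'
```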
Thank you for your answer. What if I create a CNAME for my load balancer directly in my zone file? Will it work with the CA-based authentication?
Yes, just point it to the ELB and that's it. The CA is set in your kubeconfig.
Just wanted to note the settings that are working for me (using kube-aws version v0.9.1-rc.4), hoping this may help others. As noted above by @camilb, kube-aws should take care of your DNS automatically, provided the correct settings are in your cluster.yaml when you provision your cluster, i.e. with these settings in the cluster/project's cluster.yaml:
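The exact settings from this comment were not preserved in this copy. A minimal sketch of the kind of entries being referred to, assuming the kube-aws v0.9.x option names externalDNSName, createRecordSet, and hostedZoneId (values are placeholders):

```bash
# Illustrative only: option names are assumptions based on kube-aws v0.9.x docs,
# and the values shown in the expected output are placeholders.
grep -E 'externalDNSName|createRecordSet|hostedZoneId' cluster.yaml
# externalDNSName: kube.example.com
# createRecordSet: true
# hostedZoneId: Z1EXAMPLE
```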
After cluster deployment, we can validate that our DNS record was added. Below we just use a simple bash script to search our Route53 zones for matching records and see that it now exists. This might be useful for dynamic/complex cluster names, etc.
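The script itself was not preserved here; a rough sketch of this kind of lookup, assuming the record name (or a regex for it) is passed as an argument:

```bash
#!/usr/bin/env bash
# Scan every Route53 hosted zone for record sets matching a pattern
# (e.g. the cluster's externalDNSName). The default pattern is a placeholder.
PATTERN="${1:-kube.example.com}"
for zone_id in $(aws route53 list-hosted-zones --query 'HostedZones[].Id' --output text); do
  aws route53 list-resource-record-sets --hosted-zone-id "$zone_id" \
    --query 'ResourceRecordSets[].[Name,Type,AliasTarget.DNSName]' --output text \
    | grep -i "$PATTERN" || true
done
```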
So we have validated that our internal private Route53 record is an alias for the AWS ELB, and we can see that it resolves to our ELB with its three IPs for HA, multi-zone, scaling, etc. on the AWS side.
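A quick way to check the same thing from a host that can resolve the private zone (record name is a placeholder):

```bash
# For a Route53 alias record you should get the ELB's A records back directly,
# typically one address per availability zone the ELB spans.
dig +short kube.example.com
```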
Thank you for your input. I haven't dug much into the whole Route53 config. However, adding the ELB domain name as a CNAME of my domain in my hosted zone file produces this error:
That's what I was afraid of, @camilb, when I was talking about CA-based authentication.
Do you have the CA set in your kubeconfig? Look for a line similar to the one sketched below. Edit: the type of ELB is TCP; the certificates are still configured on the controllers. And if you still get the errors after setting the CA in your kubeconfig ...
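The original example line was not preserved here; a kube-aws-generated kubeconfig typically references the CA along these lines (the file name and path are assumptions):

```bash
# Check that the cluster's CA certificate is referenced in the generated kubeconfig.
grep certificate-authority kubeconfig
# certificate-authority: credentials/ca.pem
```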
Yes I do, my config is fine and the credentials paths are properly set. When I open my security group and create an A record with my cluster hostname and the public IP of the controller, everything works fine. I don't know much about CA auth, but it is probably closely related to the DNS name of our cluster. The doc I pointed out in my first post explicitly says:
If it works with an A record set directly to your controller, it should work with a CNAME pointing at the ELB too. I suppose they are the same, right? Like:
kube.yourdomain.tld IN A controller_ip_address
kube.yourdomain.tld IN CNAME elb_name.zone.elb.amazonaws.com
Indeed it works, I had misconfigured my zone file... Thank you for your help @camilb. However, I'd like to keep this issue open since the doc is still not right.
Hi @robinmonjo, thanks for reporting this issue and sharing your experience! We're currently short on resources to improve documentation but would appreciate it if you could author a pull request for it 👍 Also, this issue is now tracked in #90
Hello all,
I just set up a Kubernetes cluster with version v0.9.1. I see changes since version 0.8. The doc, https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-launch.html, says:

I did what I did with version 0.8: I set an A record in my zone file pointing to the public IP address of my controller node. However, the controller is now behind a load balancer, and its security group prevents HTTPS access from "0.0.0.0" and constrains it to only the load balancer and the worker nodes. To access my cluster, I had to modify the security group and add an HTTPS inbound rule with "0.0.0.0". I guess I'm not supposed to do that. What is the recommendation here?
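For reference, the workaround described above amounts to something like the following (the security group ID is a placeholder, and as the comments above point out, it shouldn't be necessary once DNS points at the ELB):

```bash
# Open HTTPS to the world on the controller's security group
# (the workaround, not the recommended setup).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 0.0.0.0/0
```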
Regards,