[WIP] Allow different public and private DNS hostnames #845
bogdando wants to merge 5 commits into openshift:master from bogdando:custom_dns_prefixes
Conversation
Can one of the admins verify this patch?
It works for me now! PTAL folks
@tomassedovic reworked to allow arbitrary hostnames managed by cloud-init; the deployment now uses
tomassedovic left a comment
Thanks! I really like the hostname role removal, but it needs a few fixes before that can really work.
The way I see it, though, the primary motivation for this feature is when you only have a single DNS with one zone.
See e.g. the Red Hat reference architecture:
It uses a single DNS server with one zone and one view. That scenario needs to be supported.
I like that this supports a private and public zone as well, but we need to be able to do a deployment on a single zone+view.
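To illustrate the single zone+view case with this PR's own variables: a minimal sketch of inventory vars where the private naming simply mirrors the public naming (the example.com value is an assumed placeholder):

    # single DNS server, one zone, one view: private naming mirrors public
    openshift_openstack_public_dns_domain: example.com  # assumed example value
    openshift_openstack_private_dns_domain: "{{ openshift_openstack_public_dns_domain }}"
    openshift_openstack_public_hostname_suffix: "{{ openshift_openstack_clusterid }}"
    openshift_openstack_private_hostname_suffix: "{{ openshift_openstack_clusterid }}"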
    vars['openshift_public_hostname'] = server.name
    if 'private_fqdn' in server.metadata:
        vars['openshift_hostname'] = server.metadata.private_fqdn
This must always be set, even when the public and private hostnames are equal.
If openshift_hostname is not set, OpenShift will try to look up servers via their hostnames, and on OpenStack these can have a suffix. E.g. my master hostname with this patch is master-0.openshift.example.com.rdocloud, but OpenShift is looking for master-0.openshift.example.com.
The hostname role handled that so if we want to remove it, we have to set this correctly ourselves.
Adding these two lines should fix it:
    else:
        vars['openshift_hostname'] = server.name
The deployment works for me with arbitrary hostnames; I removed any dependency on the host names. As I understood from folks' comments on related patches, only openshift_public_hostname needs to be set, as you noted, and it is set. openshift_hostname only needs to be set if the internal and public naming differ, so this addresses exactly that case. Although I may be wrong and haven't tested all the possible cases.
    {{ host }}{% if 'ansible_host' in hostvars[host]
    %} ansible_host={{ hostvars[host]['ansible_host'] }}{% endif %}
    {% if 'openshift_hostname' in hostvars[host]
    %} openshift_hostname={{ hostvars[host]['openshift_hostname'] }}{% endif %}
Same thing as with the dynamic inventory script here: openshift_hostname must always be set. If there is no private_fqdn, this must have the same value as openshift_public_hostname.
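A minimal sketch of one way to do that, falling back to the public name via Jinja's default filter (this assumes openshift_public_hostname is available in hostvars for every host):

    {# always emit openshift_hostname; fall back to the public name when no private_fqdn was set #}
    {{ host }} openshift_hostname={{ hostvars[host]['openshift_hostname']
        | default(hostvars[host]['openshift_public_hostname']) }}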
    params:
      cluster_id: {{ stack_name }}
      k8s_type: {{ etcd_hostname | default('etcd') }}
    {% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
Same in all these: the check should be for openshift_openstack_full_private_dns_domain and openshift_openstack_full_public_dns_domain.
Also, can we just skip the if altogether and assume that if private is not set, it will default to public? That should make the templates simpler and not harm anything.
Good idea, I'll try to rework this.
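A sketch of what that rework might look like, assuming openshift_openstack_full_private_dns_domain defaults to the full public domain when not overridden, so the conditional can be dropped:

    params:
      cluster_id: {{ stack_name }}
      k8s_type: {{ etcd_hostname | default('etcd') }}
      private_fqdn: {{ etcd_hostname | default('etcd') }}-%index%.{{ openshift_openstack_full_private_dns_domain }}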
      k8s_type: {{ etcd_hostname | default('etcd') }}
    {% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
      private_fqdn: {{ etcd_hostname | default('etcd') }}-%index%.{{ openshift_openstack_private_dns_domain }}
    {% endif %}
This should say openshift_openstack_full_private_dns_domain as well, otherwise the value here is missing the prefix.
    description: Name
    description: Public (FQDN) Name
    {% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
Shouldn't this compare openshift_openstack_full_private_dns_domain and openshift_openstack_full_public_dns_domain?
Otherwise private_fqdn won't get set even when the openshift_openstack_public_hostname_suffix and openshift_openstack_private_hostname_suffix values differ.
    host-type: { get_param: type }
    sub-host-type: { get_param: subtype }
    node_labels: { get_param: node_labels }
    {% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
And the same here: it should compare the _full_ domains, otherwise the prefix gets ignored.
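A sketch of the corrected check, comparing the full domains so that differing hostname suffixes are picked up as well:

    {% if openshift_openstack_full_private_dns_domain != openshift_openstack_full_public_dns_domain %}
      # ... private_fqdn metadata goes here ...
    {% endif %}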
    set_fact:
      private_named_records:
        - view: "private"
          zone: "{{ openshift_openstack_full_public_dns_domain }}"
Since this is using the public zone, the DNS servers never receive the entries with the private prefixes.
I think this task should say "private" everywhere here.
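If I read that right, the task would look roughly like this (a sketch; openshift_openstack_full_private_dns_domain is the full private domain variable this PR introduces):

    set_fact:
      private_named_records:
        - view: "private"
          zone: "{{ openshift_openstack_full_private_dns_domain }}"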
FWIW, that is the main point: multiple names resolvable via the internal view. Public names should be resolved as well, yet to private IPs; it doesn't work any other way. PTAL at the test case in the commit message, it describes the behavior this code implements. If I understood the thing wrong, let's adjust the test cases first.
    # and the private DNS domain.
    openshift_openstack_public_hostname_suffix: "{{ openshift_openstack_clusterid }}"
    openshift_openstack_private_hostname_suffix: "{{ openshift_openstack_clusterid }}"
    openshift_openstack_private_dns_domain: "{{ openshift_openstack_public_dns_domain }}"
Can we set a default value so this doesn't break existing users?
It has the default at line 3, carried over via line 10.
Also, I wonder whether setting the OpenShift hostname and IP options like we do here means we could do the deployment without requiring the internal DNS whatsoever. I'll go and try to test that. If we can, that'll be fantastic news for the move to openshift-ansible, because they don't want to ship DNS there.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Keep using public FQDNs for hostnames. Make the DNS server resolve both private and public FQDNs via the internal network, and only external FQDNs via external access. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
When private FQDNs differ from public ones, add the public domain into the private multi-zoned view (or private DNS server) so that it contains the public FQDNs as well, yet resolves them to private IPs. That way both private and public FQDNs resolve to internal IPs on the cluster control network. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Instead, allow cloud-init to do its job. Ignore the hostnames of provisioned VMs. Rely on openshift-ansible inventory variables to configure OpenShift clusters. When a private FQDN differs from the public FQDN, define 'private_fqdn' metadata for Nova servers provisioned with Heat. Place 'openshift_(public)_ip/hostname' vars into the static and dynamic inventories depending on the public/private IP and FQDN setup. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
It's time to resubmit on top of openshift/openshift-ansible#6039
;-(
It seems like exactly that case, right. See this comment as an example for AWS: it looks like they do not need DNS resolution at all to deploy. Not sure how things work for workloads (e2e) relying on SkyDNS or the like.
@tomassedovic btw, I hope we could merge it here, once I submit those fixes for
DNS management is deprecated for the provider, and the provider has been moved into openshift-ansible without the DNS setup bits.
Reimplemented as openshift/openshift-ansible#6349
What does this PR do?
Allow different public and private DNS hostnames.
Do not manage hostnames with the OpenStack provider anymore.
Instead, allow cloud-init to do its job.
Ignore the hostnames of provisioned VMs. Rely on openshift-ansible
inventory variables to configure OpenShift clusters.
When a private FQDN differs from the public FQDN,
define 'private_fqdn' metadata for Nova servers provisioned with Heat.
Place 'openshift_(public)_ip/hostname' vars into the static and
dynamic inventories depending on the public/private IP and FQDN setup.
How should this be manually tested?
e2e with defaults should pass
e2e with arbitrary hostnames of provisioned VMs should pass.
Provision a cluster with -e openshift_openstack_private_hostname_suffix=openshift-priv and -e openshift_openstack_private_dns_domain=example.local (this depends on the hosts' search domain setup in /etc/resolv.conf, so it is allowed to fail).
Deploy a cluster with the same overrides; e2e should pass with both default and arbitrary hostnames of provisioned VMs.
Provision using the same custom vars as above, but with an external DNS server deployed per the OSP 10 reference architecture and a multi-master setup. Expectations are the same as above.
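For convenience, the same overrides could be kept in an extra-vars YAML file (file name hypothetical) and passed as -e @custom-dns.yml:

    # custom-dns.yml (hypothetical file name)
    openshift_openstack_private_hostname_suffix: openshift-priv
    openshift_openstack_private_dns_domain: example.local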
Keep the change compatible with openshift/openshift-ansible#6039.
Is there a relevant Issue open for this?
Provide a link to any open issues that describe the problem you are solving.
Who would you like to review this?
cc: @tomassedovic @oybed PTAL