This repository was archived by the owner on Dec 9, 2020. It is now read-only.

[WIP] Allow different public and private DNS hostnames #845

Closed
bogdando wants to merge 5 commits into openshift:master from bogdando:custom_dns_prefixes

Conversation

@bogdando (Contributor) commented Nov 9, 2017

What does this PR do?

Allow different public and private DNS hostnames.
Do not manage hostnames with the openstack provider any more; instead, let cloud-init do its job.
Ignore the hostnames of provisioned VMs and rely on openshift-ansible inventory variables to configure OpenShift clusters.

When a private FQDN differs from the public FQDN, define 'private_fqdn' metadata for the Nova servers provisioned with Heat.
Place the 'openshift_(public)_ip/hostname' vars into the static and dynamic inventories, depending on the public/private IP and FQDN setup.
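
The dynamic-inventory side of this could look roughly like the following sketch (the function name and IP handling are illustrative, not the exact diff; only the 'private_fqdn' metadata key and the openshift_* variable names come from this PR):

    # Illustrative sketch: map an OpenStack server record onto the OpenShift
    # inventory variables described above.
    def build_host_vars(server, public_ip, private_ip):
        host_vars = {
            'ansible_host': public_ip or private_ip,
            'openshift_public_ip': public_ip,
            'openshift_ip': private_ip,
            # The Nova server name carries the public FQDN.
            'openshift_public_hostname': server.name,
        }
        # Heat sets the 'private_fqdn' metadata only when the private FQDN
        # differs from the public one; only then is openshift_hostname set.
        if 'private_fqdn' in server.metadata:
            host_vars['openshift_hostname'] = server.metadata['private_fqdn']
        return host_vars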

How should this be manually tested?

  • e2e with defaults should pass

  • e2e with arbitrary hostnames of provisioned VMs should pass.

  • Provision a cluster with -e openshift_openstack_private_hostname_suffix: openshift-priv and -e openshift_openstack_private_dns_domain: example.local, then verify name resolution (a verification sketch follows this list):

    • The stack name, Nova server names and Ansible inventory hostnames should keep using the public name prefix (defaults to the cluster id, 'openshift') and belong to the public DNS domain.
    • nslookup by short name executed on the nodes should resolve with the public FQDN "autocompleted"
      (this depends on the hosts' search domain setup in /etc/resolv.conf, so this check is allowed to fail).
    • nslookup by public FQDNs should resolve (when a floating IP is provided) via external queries against the public DNS server.
    • nslookup by private FQDNs should NOT resolve via external queries against the public DNS server.
    • nslookup by private FQDNs executed on the nodes should resolve via the internal DNS server.
    • nslookup by public FQDNs executed on the nodes should resolve via the internal DNS server, yet with a private IP, not the floating IP.
  • Deploy a cluster with -e openshift_openstack_private_hostname_suffix: openshift-priv and -e openshift_openstack_private_dns_domain: example.local. E2e should pass with default and arbitrary hostnames of provisioned VMs.

  • Provision using the same custom vars as above, but with an external DNS server deployed per the OSP 10 reference architecture and a multi-master setup. Expectations are the same as above.
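
The nslookup expectations above can be scripted roughly as follows (a sketch only: the DNS server addresses and FQDNs are placeholders for the real deployment values, and nslookup's non-zero exit code on a failed lookup is assumed):

    # Sketch of the resolution checks from the test plan above.
    import subprocess

    PUBLIC_DNS = "203.0.113.10"   # placeholder: external/public DNS server
    INTERNAL_DNS = "192.0.2.10"   # placeholder: in-cluster DNS server

    def resolves(name, dns_server):
        """Return True if 'name' resolves via 'dns_server' using nslookup."""
        result = subprocess.run(["nslookup", name, dns_server],
                                capture_output=True, text=True)
        return result.returncode == 0

    # Public FQDNs resolve externally; private FQDNs must not.
    assert resolves("master-0.openshift.example.com", PUBLIC_DNS)
    assert not resolves("master-0.openshift-priv.example.local", PUBLIC_DNS)

    # On the nodes, both public and private FQDNs resolve via the internal DNS
    # (public names returning private IPs rather than floating IPs).
    assert resolves("master-0.openshift.example.com", INTERNAL_DNS)
    assert resolves("master-0.openshift-priv.example.local", INTERNAL_DNS)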

Keep this change compatible with openshift/openshift-ansible#6039.

Is there a relevant Issue open for this?

Provide a link to any open issues that describe the problem you are solving.

Who would you like to review this?

cc: @tomassedovic @oybed PTAL

@openshift-bot

Can one of the admins verify this patch?
I understand the following commands:

  • bot, add author to whitelist
  • bot, test pull request
  • bot, test pull request once


@bogdando bogdando added the osp label Nov 9, 2017
@bogdando bogdando changed the title Allow different public and private DNS hostnames [WIP] Allow different public and private DNS hostnames Nov 9, 2017
@bogdando bogdando changed the title [WIP] Allow different public and private DNS hostnames Allow different public and private DNS hostnames Nov 10, 2017
@bogdando (Contributor, Author)

It works for me now! PTAL folks

@bogdando (Contributor, Author) commented Nov 10, 2017

@tomassedovic reworked to allow arbitrary hostnames managed by cloud-init; the deployment now uses openshift_(public)_hostname, e.g. as described for the AWS case in this comment: openshift/openshift-ansible#5883 (comment).

@tomassedovic (Contributor) left a comment

Thanks! I really like the hostname role removal, but it needs a few fixes before that can really work.

The way I see it though, the primary motivation for this feature is when you only have a single DNS with one zone.

See e.g. the Red Hat reference architecture:

https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/deploying_and_managing_red_hat_openshift_container_platform_3.6_on_red_hat_openstack_platform_10/

It uses a single DNS server with one zone and one view. That scenario needs to be supported.

I like that this supports a private and public zone as well, but we need to be able to do a deployment on a single zone+view.


vars['openshift_public_hostname'] = server.name
if 'private_fqdn' in server.metadata:
    vars['openshift_hostname'] = server.metadata.private_fqdn
Contributor

This must always be set, even when the public and private hostnames are equal.

If openshift_hostname is not set, OpenShift will try to look up servers via their hostnames, and on OpenStack these can have a suffix. E.g. my master hostname with this patch is master-0.openshift.example.com.rdocloud, but OpenShift is looking for master-0.openshift.example.com.

The hostname role handled that, so if we want to remove it, we have to set this correctly ourselves.

Adding these two lines should fix it:

else:
    vars['openshift_hostname'] = server.name
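
Combined with the snippet quoted above, the resulting logic would read roughly like this (a sketch of the suggested change, not the exact diff):

    vars['openshift_public_hostname'] = server.name
    if 'private_fqdn' in server.metadata:
        vars['openshift_hostname'] = server.metadata.private_fqdn
    else:
        # no separate private FQDN: fall back to the server name
        vars['openshift_hostname'] = server.name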

@bogdando (Contributor, Author) Nov 17, 2017

The deployment works for me with arbitrary hostnames. I removed all dependencies on the hostnames. As I understood from some folks' comments on related patches, only openshift_public_hostname needs to be set, as you noted, and it is set. openshift_hostname only needs to be set if the internal and public naming differs, so this addresses exactly that case. Although I may be wrong and didn't test all the possible cases.

{{ host }}{% if 'ansible_host' in hostvars[host]
%} ansible_host={{ hostvars[host]['ansible_host'] }}{% endif %}
{% if 'openshift_hostname' in hostvars[host]
%} openshift_hostname={{ hostvars[host]['openshift_hostname'] }}{% endif %}
Contributor

Same thing as with the dynamic inventory script here: openshift_hostname must always be set. If there is no private_fqdn, this must have the same value as openshift_public_hostname.

params:
  cluster_id: {{ stack_name }}
  k8s_type: {{ etcd_hostname | default('etcd') }}
{% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
Contributor

Same in all these: the check should be for openshift_openstack_full_private_dns_domain and openshift_openstack_full_public_dns_domain.

Contributor

Also, can we just skip the if altogether? And assume that if private is not set, it will default to public?

That should make the templates simpler and not harm anything.

Contributor Author

I'll try to rework this, good idea

k8s_type: {{ etcd_hostname | default('etcd') }}
{% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
private_fqdn: {{ etcd_hostname | default('etcd') }}-%index%.{{ openshift_openstack_private_dns_domain }}
{% endif %}
Contributor

This should say openshift_openstack_full_public_dns_domain as well, otherwise the value here is without the prefix.

description: Name
description: Public (FQDN) Name

{% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
Contributor

Shouldn't this compare openshift_openstack_full_private_dns_domain and openshift_openstack_full_public_dns_domain?

Otherwise private_fqdn won't get set even when the openshift_openstack_public_hostname_suffix and openshift_openstack_private_hostname_suffix values differ.

host-type: { get_param: type }
sub-host-type: { get_param: subtype }
node_labels: { get_param: node_labels }
{% if openshift_openstack_private_dns_domain != openshift_openstack_public_dns_domain %}
Contributor

And the same here: it should compare the _full_ domains, otherwise the prefix gets ignored.

set_fact:
  private_named_records:
    - view: "private"
      zone: "{{ openshift_openstack_full_public_dns_domain }}"
Contributor

Since this is using the public zone, the DNS servers never receive the entries with the private prefixes.

I think this task should say "private" everywhere here.

@bogdando (Contributor, Author) Nov 17, 2017

FWIW, that is the main point: multiple names resolvable via the internal view. Public names should resolve as well, yet with private IPs; it doesn't work the other way. PTAL at the test case in the commit message, it describes the behavior that matches this code. Let's first adjust the test cases if I understood the thing wrong.

# and the private DNS domain.
openshift_openstack_public_hostname_suffix: "{{ openshift_openstack_clusterid }}"
openshift_openstack_private_hostname_suffix: "{{ openshift_openstack_clusterid }}"
openshift_openstack_private_dns_domain: "{{ openshift_openstack_public_dns_domain }}"
Contributor

Can we set a default value so this doesn't break existing users?

@bogdando (Contributor, Author) Nov 17, 2017

It has it at line 3, transitioned via line 10.

@tomassedovic (Contributor)

Also, I wonder whether setting the openshift hostname and IP options like we do here means we could do the deployment without requiring the internal DNS whatsoever.

I'll go & try to test that. If we can, that'll be fantastic news for the move to openshift-ansible, because they don't want to ship DNS there.

Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Keep using the public FQDN for hostnames.
Make the DNS server resolve both private and public FQDNs via the internal network, and only external FQDNs via external access.

Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
When private FQDNs differ from public ones, add the public domain into the private multi-zoned view (or private DNS server) so that it contains the public FQDNs as well, yet resolves them to private IPs. This way both private and public FQDNs resolve to internal IPs on the cluster control network.

Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Instead, allow cloud-init to do its job.
Ignore the hostnames of provisioned VMs and rely on openshift-ansible inventory variables to configure OpenShift clusters.

When a private FQDN differs from the public FQDN, define 'private_fqdn' metadata for the Nova servers provisioned with Heat.
Place the 'openshift_(public)_ip/hostname' vars into the static and dynamic inventories, depending on the public/private IP and FQDN setup.

Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
@bogdando (Contributor, Author)

It's time to resubmit on top of openshift/openshift-ansible#6039

@bogdando (Contributor, Author) commented Nov 17, 2017

because they don't want to ship DNS there

;-(

setting the openshift hostname and IP options like we do here means we could do the deployment
without requiring the internal DNS whatsoever.

It seems like exactly the case, right. See this comment as an example for AWS: it looks like they do not need DNS resolution at all to deploy. Not sure how things work for workloads (e2e) relying on SkyDNS or the like:

Generally the only things that need to resolve externally are the
API and any domains, which ought to be going through a LB

@bogdando (Contributor, Author)

@tomassedovic btw, I hope we can merge it here once I submit those fixes for the _full_ != ... comparisons, and hope to see it addressed in openshift-ansible as well, or not.

@bogdando bogdando changed the title Allow different public and private DNS hostnames [WIP] Allow different public and private DNS hostnames Nov 17, 2017
@bogdando (Contributor, Author)

DNS management is deprecated for the provider, and the provider has been moved into openshift-ansible without the DNS setup bits.

@bogdando bogdando closed this Nov 28, 2017
@bogdando bogdando deleted the custom_dns_prefixes branch November 28, 2017 11:28
@bogdando (Contributor, Author) commented Dec 6, 2017

Reimplemented as openshift/openshift-ansible#6349
