✨ finalize v1alpha3#518

Merged
k8s-ci-robot merged 3 commits into kubernetes-sigs:master from sbueringer:pr-impl-templating on Mar 18, 2020
Conversation

@sbueringer
Member

@sbueringer sbueringer commented Mar 11, 2020

What this PR does / why we need it:
After this PR we should be completely compatible with the CAPI quickstart.

Currently WIP; hopefully finished by Friday at the latest.

Note: Ignore the first three commits if you already want to take a look at this PR. The first two will be merged beforehand and the third one afterwards, as soon as I get the Zuul tests green.

See also: #499

See also the CAPI book PR: kubernetes-sigs/cluster-api#2683
(can be tested by checking out the CAPI branch and using make serve-book to serve a local version of the book)

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Mar 11, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sbueringer

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Mar 11, 2020
@sbueringer sbueringer force-pushed the pr-impl-templating branch 3 times, most recently from 5f4b89d to 88f8f18, on March 13, 2020 09:34
@kubernetes-sigs kubernetes-sigs deleted a comment from theopenlab-ci bot Mar 13, 2020
@sbueringer sbueringer changed the title ✨ [WIP] Implement YAML templating for clusterctl v1alph3 ✨ [WIP] Implement YAML templating for clusterctl v1alpha3 Mar 15, 2020
@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 15, 2020
@sbueringer sbueringer force-pushed the pr-impl-templating branch 4 times, most recently from b47431d to a56a039, on March 15, 2020 18:37
@sbueringer sbueringer changed the title ✨ [WIP] Implement YAML templating for clusterctl v1alpha3 ✨ finalize v1alpha3 Mar 15, 2020
@sbueringer
Member Author

sbueringer commented Mar 15, 2020

@hidekazuna @jichenjc @prankul88
Should be ready for review now. Please also take a look at the book PR linked above, and if you want to test with clusterctl, take a look at the create-cluster target in the Makefile.

Sorry for the big PR, but I think it's best to just get it over with :). I synced basically everything I could find with CAPA. I think it's a huge improvement. I would like to create an rc.0 after this PR is merged.

P.S. #491 is now #518 + the Zuul job. The Zuul job is almost successful; just 6 conformance test failures left that I have to debug further.

P.S. If you have another combination of CAPI/Kubernetes/OpenStack that you have tested and that I can add to the compatibility matrix in the README.md, please let me know.

@kubernetes-sigs kubernetes-sigs deleted a comment from theopenlab-ci bot Mar 15, 2020
@prankul88
Contributor

@sbueringer Great. Trying it now

Contributor

@jichenjc jichenjc left a comment

Looks great; needs further hands-on testing, or we can leverage your CI :)

```yaml
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"] # CIDR block used by Calico.
```
Contributor

I remember someone reported an issue with a pod CIDR conflict before; maybe we can consider making this configurable later on.

Member Author

I think we have to decide which configuration is mandatory and what can be adjusted manually in the YAML. When you do the quickstart, I think you should not be forced to set an env variable for everything. Right now we already need 10-20 env variables for OpenStack :/ That's why I thought stuff like this should be changed manually after generating the cluster YAML (same for the node CIDR). So not adding a var here only means you have to edit your YAML after generation; you are not forced to set the env var just to generate your YAML.

What do you think?
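The "edit after generation" workflow described in this comment can be sketched in shell. The file name and the replacement CIDR below are illustrative, not taken from the PR:

```shell
# Sketch of editing a generated cluster YAML instead of templating every value.
# File name and CIDR values are illustrative.
cat > cluster.yaml <<'EOF'
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"] # CIDR block used by Calico.
EOF

# No env variable is needed at generation time; users who hit a CIDR conflict
# simply adjust the generated file afterwards (GNU sed shown; BSD sed needs -i ''):
sed -i 's|192.168.0.0/16|10.244.0.0/16|' cluster.yaml
grep cidrBlocks cluster.yaml
```

The same manual-edit approach applies to any other value the templates deliberately leave un-templated.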

Contributor

Um, could this kind of env variable get a default value, so anyone can feel free to change it later on? I mean, I am OK with asking folks to do the configuration themselves, but for settings that might be affected, providing a default value while keeping the flexibility to override it is a good way to go.

We can merge your PR and talk about it later on :)

Member Author

I'm not sure how we can provide a default value. The only way would be to add it to the envrc.
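If defaults were added to the envrc, shell parameter expansion would be one way to do it. The variable names and values below are hypothetical, not part of the PR:

```shell
# Hypothetical envrc-style defaults: ${VAR:-default} keeps any value the user
# has already exported, so the default stays overridable.
export POD_CIDR="${POD_CIDR:-192.168.0.0/16}"
export NODE_CIDR="${NODE_CIDR:-10.6.0.0/24}"
echo "POD_CIDR=${POD_CIDR} NODE_CIDR=${NODE_CIDR}"
```

direnv would load such a file automatically; a plain `source .envrc` works too.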

Contributor

OK, let's forget this; I will see what we can do after this PR is merged.

Member Author

@sbueringer sbueringer Mar 16, 2020

Okay, no problem :). It's just that we have a lot of properties and I don't want to force everyone to set all of them. I think we need some kind of basic set which has to be configured, and for special cases the generated YAML has to be edited. We also need good property documentation for this; preferably the documentation generated into the CRDs (which can be shown via kubectl explain) is enough.

Contributor

@jichenjc Yes, I faced this issue since my host IP is in the same range. But as @sbueringer says, it might be better to just adjust it manually. I have no issues there.

@@ -0,0 +1,200 @@
---

There really isn't anything I can tell that differs between the -high-availability.yaml template and this one, other than that this one hardcodes the replicas on the KubeadmControlPlane. I think you can likely just expose replicas here and not have a separate high-availability template.

Member Author

The problem is that the ControlPlaneEndpoint is configured differently. In one case we're using Octavia to create an LB; in the other case, just a floating IP on the machine.

HA case, OpenStackCluster:

```yaml
managedAPIServerLoadBalancer: true
apiServerLoadBalancerFloatingIP: ${OPENSTACK_CONTROLPLANE_IP}
apiServerLoadBalancerPort: 6443
apiServerLoadBalancerAdditionalPorts:
- 22
useOctavia: true
```

Single-node master, OpenStackCluster:

```yaml
controlPlaneEndpoint:
  host: ${OPENSTACK_CONTROLPLANE_IP}
  port: 6443
useOctavia: false
```

OpenStackMachineTemplate (${CLUSTER_NAME}-control-plane):

```yaml
floatingIP: ${OPENSTACK_CONTROLPLANE_IP}
```

The single-node variant is mostly there because not everyone has Octavia (or rather LBaaS) support.

So maybe the naming is just wrong. Actually it's one variant with LBaaS (with Neutron lbaasv2 or Octavia) and one variant with a floating IP on the one master node instead of an LB.

Member Author

I'll think this through later and probably rename the flavors, e.g.

  • default => with LBaaS (Octavia or Neutron LBaaS v2)
  • -without-lbaas => single-node master without LBaaS (maybe I'll find a better name for it)

Member Author

I changed it as described and called the non-default flavor "without-lb".

@jichenjc You now have to use the other flavor in case you're testing without LBaaS.

@detiber detiber left a comment

One minor nit around checking for a nil pointer when inspecting machine.Spec.FailureDomain; otherwise lgtm.

```yaml
readOnly: true
joinConfiguration:
  nodeRegistration:
    name: '{{ ds.meta_data.name }}'
```
Contributor

Could you teach me why local_hostname was changed to ds.meta_data.name?

Member Author

I tried to use ds.meta_data.local_hostname like CAPA, but that one wasn't set in my devstack. So I thought ds.meta_data might be better than just local_hostname. But I can switch it back to local_hostname if you want; I only want something which works in all of our cases. Not sure which one is best.

Contributor

Yes, I also tried using ds.meta_data.local_hostname, but it includes a domain name like novalocal, which is not compatible with the OpenStack cloud provider: the cloud provider tries to find the host without the domain name and fails. Instead, local_hostname does not include the domain name, so it succeeded. ds.meta_data.name is also missing the domain name, so that would work as well.
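The difference can be illustrated with a hand-written sample of the OpenStack metadata document (the instance names below are made up, not taken from this thread):

```shell
# Illustrative sample of what the OpenStack metadata service serves under
# /openstack/latest/meta_data.json; values are made up.
cat > meta_data.json <<'EOF'
{
  "name": "capi-control-plane-abc12",
  "hostname": "capi-control-plane-abc12.novalocal"
}
EOF

# "hostname" carries the ".novalocal" domain suffix that the OpenStack cloud
# provider fails to match against the Nova instance name; "name" does not.
grep '"hostname"' meta_data.json
grep '"name"' meta_data.json
```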

Member Author

Fixed (sorry, I squashed for the final merge, but just check the templates :))

@sbueringer
Member Author

sbueringer commented Mar 18, 2020

@prankul88 @jichenjc @hidekazuna
As far as I can see all findings are addressed. I would propose merging the PR.

After the merge I would directly create an rc.0.

I would also open a follow-up PR to finalize the RELEASE.md (it's easier if I just do the process for the rc and then adjust the doc if necessary) and to upgrade to the latest cluster-api dependency.

P.S. we also have our first successful e2e conformance test with Kubernetes 1.17.3: #491 (comment) (based on this PR + zuul job)

@jichenjc
Contributor

lgtm
Leaving it to @hidekazuna @prankul88 to take another look.

@sbueringer
Member Author

@prankul88 @jichenjc @hidekazuna Can one of you /lgtm this PR please, if there are no open issues? :)

I have the time today to test the rc.0 :)

@jichenjc
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 18, 2020
@jichenjc
Contributor

/unhold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 18, 2020
@sbueringer
Member Author

/hold cancel

Thank you :)

@k8s-ci-robot k8s-ci-robot merged commit 7cdee0d into kubernetes-sigs:master Mar 18, 2020
@sbueringer sbueringer deleted the pr-impl-templating branch March 18, 2020 08:43
pierreprinetti pushed a commit to shiftstack/cluster-api-provider-openstack that referenced this pull request Apr 22, 2024
* finalize v1alpha3

* review fixes

* review fixes