✨ finalize v1alpha3 #518
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sbueringer.
@hidekazuna @jichenjc @prankul88 Sorry for the big PR, but I think it's best to just get it over with :). I synced basically everything I could find with CAPA. I think it's a huge improvement. I would like to create an RC.0 after this PR is merged.

P.S. #491 is now #518 + the Zuul job. The Zuul job is almost successful; just 6 conformance test failures I have to debug further.

P.S. If you have tested another combination of CAPI/Kubernetes/OpenStack that I can add to the compatibility matrix in the README.md, please let me know.
@sbueringer Great. Trying it now.
```yaml
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"] # CIDR block used by Calico.
```
I remember someone reported an issue with a pod CIDR conflict before. Maybe we can consider making this configurable later on.
I think we have to decide which configuration is mandatory and what can be adjusted manually in the YAML. When you do the quickstart, you should not be forced to set an env variable for everything. Right now we already need 10-20 env variables for OpenStack :/ That's why I thought stuff like this should be changed manually after generating the cluster YAML (same for the node CIDR). Not adding a var here only means you have to edit your YAML after generation; you are not forced to set the env var to generate your YAML.
What do you think?
Um.. could this kind of env var get a default value, so anyone can feel free to change it later on?
I mean, I am OK with asking folks to do the configuration themselves, but for settings that might be affected, providing a default value along with the flexibility to change it is a good approach.
We can merge your PR and talk about it later on :)
I'm not sure how we can provide a default value. The only way would be to add it to the envrc.
OK, let's forget this. I will see what we can do after this PR is merged.
Okay, no problem :). It's just that we have a lot of properties and I don't want to force everyone to set all of them. I think we need some kind of basic set which has to be configured, and for special cases the generated YAML has to be edited. We also need good property documentation for this. Preferably the documentation generated into the CRDs, which can be shown by kubectl explain, is enough.
@jichenjc Yeah, I faced this issue since my host IP is in the same range. But as @sbueringer says, it might be better to just adjust it manually. I have no issues there.
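To illustrate the manual-edit approach discussed above: after generating the cluster YAML, the pod CIDR can be changed by hand. A minimal sketch of the edited manifest excerpt; the cluster name and the replacement CIDR are placeholder values, not project defaults:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster            # placeholder name
spec:
  clusterNetwork:
    pods:
      # replace the generated 192.168.0.0/16 if it collides with your host network
      cidrBlocks: ["10.244.0.0/16"]
```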
There really isn't anything that I can tell is different between the -high-availability.yaml template and this one, other than that this one hardcodes the replicas on the KubeadmControlPlane. I think you can likely just expose replicas here and not have a separate high-availability template.
The problem is that the ControlPlaneEndpoint is configured differently. In one case we're using Octavia to create a LB; in the other case, just a floating IP on the machine.
HA case (OpenStackCluster):

```yaml
managedAPIServerLoadBalancer: true
apiServerLoadBalancerFloatingIP: ${OPENSTACK_CONTROLPLANE_IP}
apiServerLoadBalancerPort: 6443
apiServerLoadBalancerAdditionalPorts:
- 22
useOctavia: true
```
Single-node master (OpenStackCluster):

```yaml
controlPlaneEndpoint:
  host: ${OPENSTACK_CONTROLPLANE_IP}
  port: 6443
useOctavia: false
```

OpenStackMachineTemplate (${CLUSTER_NAME}-control-plane):

```yaml
floatingIP: ${OPENSTACK_CONTROLPLANE_IP}
```
The single-node variant is mostly there because not everyone has Octavia (or rather LBaaS) support.
So maybe the naming is just wrong. Actually it's one variant with LBaaS (with Neutron lbaasv2 or Octavia) and one variant with a floating IP on the single master node instead of an LB.
I'll think this through later and probably rename the flavors, e.g.
- default => with LBaaS (Octavia or Neutron LBaaS v2)
- -without-lbaas => single-node master without LBaaS (maybe I'll find a better name for it)
I changed it as described and called the non-default flavor "without-lb".
@jichenjc You have to use the other flavor now in case you're testing without LBaaS.
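For context, flavors are picked at manifest-generation time. A hedged sketch of selecting the renamed flavor with the v1alpha3-era clusterctl; the cluster name is a placeholder:

```shell
# Generate a cluster manifest from the non-default "without-lb" flavor
# (for environments without Octavia/Neutron LBaaS support)
clusterctl config cluster my-cluster --flavor without-lb > cluster.yaml
```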
detiber left a comment:

One minor nit around checking for a nil pointer when inspecting machine.Spec.FailureDomain, otherwise lgtm.
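The nit above is about a nil guard: in CAPI v1alpha3, Machine.Spec.FailureDomain is a *string, so it must be checked before dereferencing. A minimal sketch with simplified stand-in types (the struct and function names here are illustrative, not the PR's actual code):

```go
package main

import "fmt"

// MachineSpec is a simplified stand-in for the CAPI v1alpha3 type.
type MachineSpec struct {
	FailureDomain *string // optional field, may be nil
}

// failureDomainOrDefault returns the failure domain if set, else a fallback.
func failureDomainOrDefault(spec MachineSpec, def string) string {
	if spec.FailureDomain == nil {
		return def
	}
	return *spec.FailureDomain
}

func main() {
	var spec MachineSpec // FailureDomain left nil
	fmt.Println(failureDomainOrDefault(spec, "nova")) // prints "nova"

	az := "az-1"
	spec.FailureDomain = &az
	fmt.Println(failureDomainOrDefault(spec, "nova")) // prints "az-1"
}
```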
```yaml
      readOnly: true
  joinConfiguration:
    nodeRegistration:
      name: '{{ ds.meta_data.name }}'
```
Could you teach me why local_hostname was changed to ds.meta_data.name?
I tried to use ds.meta_data.local_hostname like CAPA. But that one wasn't set in my devstack. So I thought ds.meta_data.name might be better than just local_hostname. But I can switch it back to local_hostname if you want. I only want something which works in all of our cases. Not sure what the best one is.
Yes, I also tried using ds.meta_data.local_hostname, but it includes the domain name (like novalocal), which is not compatible with the OpenStack cloud provider: the provider tries to find the host without the domain name and fails. Plain local_hostname, in contrast, does not include the domain name, so it worked. ds.meta_data.name is also missing the domain name, so that would work as well.
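The incompatibility described above comes down to the domain suffix: the cloud provider looks up the short host name, so a value like "node-1.novalocal" never matches. A small sketch of the distinction (function name and values are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// shortHostname strips the domain part, mimicking the difference between
// ds.meta_data.local_hostname ("node-1.novalocal") and ds.meta_data.name ("node-1").
func shortHostname(fqdn string) string {
	return strings.SplitN(fqdn, ".", 2)[0]
}

func main() {
	fmt.Println(shortHostname("node-1.novalocal")) // prints "node-1"
	fmt.Println(shortHostname("node-1"))           // prints "node-1"
}
```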
Fixed (sorry, I squashed for the final merge, but just check the templates :))
@prankul88 @jichenjc @hidekazuna After the merge I would directly create an rc.0. I would also open a follow-up PR to finalize the

P.S. We also have our first successful e2e conformance test with Kubernetes 1.17.3: #491 (comment) (based on this PR + the Zuul job)
lgtm
@prankul88 @jichenjc @hidekazuna Can one of you /lgtm this PR please, if there are no open issues? :) I have time today to test the rc.0 :)
/lgtm
/unhold

/hold cancel

Thank you :)
* finalize v1alpha3
* review fixes
* review fixes
What this PR does / why we need it:
After this PR we should be completely compatible with the CAPI quickstart.
Currently WIP, hopefully finished on Friday at the latest.
Note: Ignore the first three commits if you already want to take a look at this PR. The first two will be merged before this one and the third one afterwards, as soon as I get the Zuul tests green.
See also: #499
See also the CAPI book PR: kubernetes-sigs/cluster-api#2683
(can be tested by checking out the CAPI branch and using `make serve-book` to serve a local version of the book)