Add a Spec.Configuration to MachineConfigPool #773
Conversation
This doesn't pass unit tests and is definitely broken; just putting up this WIP for early review/tracking.
Be careful you don't break backwards compatibility with 4.1 and 4.2 clients.

Hmm...are there any other clients? Maybe the console...I should probably try that out at some point.
OK, so the console definitely renders MachineConfigs and MachineConfigPools...but it's super primitive. And huh, looking at the console code, it's TypeScript/React; interesting. I don't think what we're doing here would break it; conceptually we're just extending the schema.
Force-pushed 9358d23 to 28c04c2 (compare)
See openshift#765 (comment)

MachineConfigPool needs a `Spec.Configuration` and `Status.Configuration` [just like other objects][1] so that we can properly detect state. Currently there's a race: the render controller may set `Status.Configuration` while the pool's `Status` still has `Updated`, so one can't reliably check whether the pool is at a given config. With this, ownership is clear: the render controller sets the spec, and the node controller updates the status.

[1] https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
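For illustration, a minimal sketch of the intended shape; the field split comes from this PR, but the reference type and surrounding definitions here are assumptions, not the repo's actual API:

```go
package v1

import (
	corev1 "k8s.io/api/core/v1"
)

// Hypothetical sketch of the split this PR introduces; field names come
// from the PR description, types are illustrative.

// MachineConfigPoolSpec carries the desired state: the render controller
// writes the pool's target rendered config here.
type MachineConfigPoolSpec struct {
	Configuration corev1.ObjectReference `json:"configuration"`
}

// MachineConfigPoolStatus carries the observed state: the node controller
// updates this once the pool's nodes have actually converged.
type MachineConfigPoolStatus struct {
	Configuration corev1.ObjectReference `json:"configuration"`
}
```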
Force-pushed 28c04c2 to 37e8466 (compare)
OK, did some more fixes and am lifting WIP on this one. I've been playing with it interactively and it's looking good, but let's see what the e2e tests say!

Hooray, e2e passed! 🎉
```diff
 machineCount := int32(len(nodes))
-updatedMachines := getUpdatedMachines(pool.Status.Configuration.Name, nodes)
+updatedMachines := getUpdatedMachines(pool.Spec.Configuration.Name, nodes)
```
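For context on what the changed call does, here's a hedged sketch of a `getUpdatedMachines` helper; the annotation keys are the ones the MCO puts on nodes, while the function body is illustrative rather than the PR's actual code:

```go
package pool

import (
	corev1 "k8s.io/api/core/v1"
)

// Annotation keys the MCO sets on nodes to track per-node config state.
const (
	currentConfigAnnotation = "machineconfiguration.openshift.io/currentConfig"
	desiredConfigAnnotation = "machineconfiguration.openshift.io/desiredConfig"
)

// getUpdatedMachines is an illustrative sketch, not the PR's actual code:
// a node counts as updated once its current config matches the pool's
// desired rendered config and it isn't mid-update toward something else.
func getUpdatedMachines(desiredPoolConfig string, nodes []*corev1.Node) []*corev1.Node {
	var updated []*corev1.Node
	for _, node := range nodes {
		current := node.Annotations[currentConfigAnnotation]
		desired := node.Annotations[desiredConfigAnnotation]
		if current == desiredPoolConfig && desired == desiredPoolConfig {
			updated = append(updated, node)
		}
	}
	return updated
}
```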
If the Spec of an object describes the desired state, shouldn't this routine use Status? We're interested in nodes at a given state (not the desired one). I may be missing something; re-reading https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
Hmm. We're trying to calculate the Updated/Updating conditions here, so we need to know how many nodes are at our desired state.
I'll admit to not being steeped in the art here - can you think of anything else similar to this?
But, it would feel very strange if these values followed Status - the updatedMachineCount would go down as we updated...
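Continuing the sketch above, roughly the calculation being described; illustrative, not the controller's exact code:

```go
// poolConditions sketches deriving Updated/Updating, reusing the
// getUpdatedMachines sketch above: the pool is Updated when every
// machine is at the desired (Spec) config, and Updating otherwise.
// Counting against Status instead would make updatedMachineCount
// shrink during a rollout, which is the oddity described above.
func poolConditions(desiredConfig string, nodes []*corev1.Node) (updated, updating bool) {
	machineCount := int32(len(nodes))
	updatedMachineCount := int32(len(getUpdatedMachines(desiredConfig, nodes)))
	updated = updatedMachineCount == machineCount
	return updated, !updated
}
```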
> I'll admit to not being steeped in the art here - can you think of anything else similar to this?

The template controller is the only one I can think of; it uses Status to tell the render controller "I'm done". But I believe you're right: we're just calculating where we are with respect to the desired state here.
Another way to think of this: we probably should never have had Configuration.Name as part of Status; it should have always been in Spec. Having parts of Status be derived from other Status fields seems wrong. In effect, Status.Configuration.Name is more like what the CVO is doing with its history field.

Yet another argument: Status.Configuration is almost meaningless given that multiple configurations could have been rolled out. If any API client wants to know the state of the system, there's really either "fully updated" or "updating". Any more detail on "updating" requires looking at each node for its currentConfig; Status.Configuration isn't useful for that.
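Following that argument, a sketch of how an API client could reliably answer "is the pool at config X?" under this model; the import path and condition handling are assumptions based on the MCO API of this era:

```go
package client

import (
	corev1 "k8s.io/api/core/v1"

	mcfgv1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"
)

// poolAtConfig is an illustrative client-side check: a pool is reliably
// "at" a config only when that config is both desired (Spec) and reached
// (Status) and the pool reports Updated. Trusting Status.Configuration
// alone races with the render controller, which is the bug this PR fixes.
func poolAtConfig(pool *mcfgv1.MachineConfigPool, configName string) bool {
	if pool.Spec.Configuration.Name != configName ||
		pool.Status.Configuration.Name != configName {
		return false
	}
	for _, cond := range pool.Status.Conditions {
		if cond.Type == mcfgv1.MachineConfigPoolUpdated {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```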
just a question, LGTM otherwise

/approve

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cgwalters, runcom

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/cherrypick release-4.1
@runcom: #773 failed to apply on top of branch "release-4.1".

In response to this:

> /cherrypick release-4.1

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
oh gosh
Missed this as part of openshift#773 (Doesn't matter yet, just prep for any future changes)