[WIP] Add support for chained plugins. #179
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: squeed. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Approvers can indicate their approval by writing `/approve` in a comment.
/label retest-not-required

TODO: document the final conclusions in README.
// ChainedPlugins is an array of entries to insert as chained plugins for execution
// after the default network.
ChainedPlugins []string `json:"chainedPlugins,omitempty"`
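For illustration, a NetworkConfig using this field might look like the following. This is a sketch only: the API group, kind, and placement of the field within the CRD are assumptions based on the diff, and the snippet values are invented.

```yaml
# Hypothetical example; field placement within the CRD is an assumption.
apiVersion: networkoperator.openshift.io/v1
kind: NetworkConfig
spec:
  defaultNetwork:
    type: OpenShiftSDN
  chainedPlugins:
  - |
    {
      "type": "tuning",
      "sysctl": { "net.core.somaxconn": "500" }
    }
```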
I'm not super happy about this API. We should probably give ourselves room to expand. For example, we could also include an explicit tuning configuration, rather than requiring raw CNI.
This should say what the strings are supposed to be; eg CNI JSON configuration in string form.
OK, changed the API slightly, and added better documentation.
data.Data["NodeConfig"] = nodeCfg

data.Data["CNIConfig"] = makeCNIConfig(conf, CNINetworkName, "0.3.1",
	`{"type": "openshift-sdn"}`)
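The body of `makeCNIConfig` isn't shown in this diff. As a rough sketch (the name is taken from the call site above, but the parameter order and behavior here are assumptions), such a helper might just prepend the default plugin's raw config to the chained snippets and marshal a conflist:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// makeCNIConfig is a sketch of the helper called above (not the real
// implementation): build a CNI .conflist with the default plugin first,
// followed by any raw chained-plugin config snippets.
func makeCNIConfig(name, cniVersion, defaultPlugin string, chained []string) (string, error) {
	plugins := []json.RawMessage{json.RawMessage(defaultPlugin)}
	for _, c := range chained {
		plugins = append(plugins, json.RawMessage(c))
	}
	conf := map[string]interface{}{
		"cniVersion": cniVersion,
		"name":       name,
		"plugins":    plugins,
	}
	out, err := json.Marshal(conf)
	return string(out), err
}

func main() {
	out, err := makeCNIConfig("openshift-sdn", "0.3.1",
		`{"type":"openshift-sdn"}`,
		[]string{`{"type":"tuning"}`})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

This keeps all of the conf/type/version knowledge in one place, which is exactly the coupling the comment below objects to.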
Not super happy about this; can we just have the SDN itself do the combination? E.g. the CNO makes the template from the ChainedPlugins, and the SDN inserts itself at position 0 in the list and then writes the resulting conflist to the final CNI config dir that CRI-O looks for?
I.e. I'd rather have the SDN plugin itself handle its config rather than encode logic about the conf/type/version/etc. in the operator...
I was going to say something similar before, but there is already a crazy amount of coupling between the operator and the SDN plugin itself anyway. This doesn't really make it worse. (And besides, what happens if one of the chained plugins wants to use some feature that requires using version 0.4.0 of the spec?)
Yeah, I thought about that for a long time, and couldn't find a solution I like.
First of all, the cniVersion is a tricky coordination point no matter where you place it. It needs to be the same for all plugins, and all plugins need to support the selected version. If we bump openshift-sdn to 0.4.0, then all chained plugins instantly must support 0.4.0. I suppose this is a problem with CNI itself, and we may wish to revisit that decision. But wherever we do the config munging, it doesn't fix this problem.
As for having openshift-sdn insert itself, I decided against that for a few reasons:
- In general, manipulating untyped json is a pain in go :-)
- This is easier to add to other plugins, e.g. ovn. We can just write a sidecar in bash that waits for healthz and copies a file.
- Generally, CNI configuration is expected to be administrator-defined. openshift-sdn writes its own configuration file for availability signalling and historical reasons, and because it has no knobs.
I'm happy to revisit this. It's not great either way you look at it.
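For comparison, the rejected alternative (the SDN splicing itself into position 0 of an operator-written conflist, as suggested above) would look roughly like the following untyped-JSON manipulation. This is a sketch to illustrate the point, not code from this PR:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// insertSelf splices a plugin's own raw config into position 0 of an
// existing conflist's "plugins" array, working on untyped JSON throughout.
// Sketch of the alternative discussed (and rejected) in this thread.
func insertSelf(conflist, self []byte) ([]byte, error) {
	var conf map[string]interface{}
	if err := json.Unmarshal(conflist, &conf); err != nil {
		return nil, err
	}
	plugins, ok := conf["plugins"].([]interface{})
	if !ok {
		return nil, fmt.Errorf("conflist has no plugins array")
	}
	var selfConf interface{}
	if err := json.Unmarshal(self, &selfConf); err != nil {
		return nil, err
	}
	// Prepend ourselves, then re-marshal the whole structure.
	conf["plugins"] = append([]interface{}{selfConf}, plugins...)
	return json.Marshal(conf)
}

func main() {
	out, err := insertSelf(
		[]byte(`{"cniVersion":"0.3.1","name":"openshift-sdn","plugins":[{"type":"tuning"}]}`),
		[]byte(`{"type":"openshift-sdn"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The type assertions and interface{} juggling are the "untyped json is a pain in go" part; each plugin that wanted this behavior would need to carry some variant of it.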
This adds an additional API configuration field for chained plugins to install their desired CNI config. Then, the operator plumbs this through to openshift-sdn.
@squeed: The following tests failed.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Other values are ignored. If you wish to use a third-party network provider not managed by the operator, set the network type to something meaningful to you. The operator will not install or upgrade a network provider, but all other Network Operator functionality remains.
### Adding chained plugins
You can add raw CNI configuration snippets to be installed as CNI chained plugins. These will be included in the CNI configuration file used by the default network provider. Note that these are *not* multus multiple networks. Rather, they are for plugins that manipulate the container's existing network. For example, if you want to adjust your containers' sysctls, you can add a chained plugin that does so.
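For instance, the standard `tuning` meta-plugin from the containernetworking plugins repository can set sysctls when chained after the default network. An illustrative snippet (not from this PR; the sysctl value is invented):

```json
{
  "type": "tuning",
  "sysctl": {
    "net.core.somaxconn": "500"
  }
}
```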
Should note that this applies to all containers in the cluster.
So I keep having this thought that we should fork multus into "CNO CNI", which would be the top-level CNI plugin that would run for all configurations, and would include the existing multus functionality, plus anything else we needed to do either sometimes (chained Istio) or always (anti-MCS iptables hacks) or to some-but-not-all-pods regardless of network plugin (assign-macvlan), ...
Alternatively, we can have Istio run via Multus. Casey verified that Istio's CNI plugin doesn't actually use PrevResult, which means that it's not using anything that chained plugins actually provide. And since Multus looks in a specific CNO-defined directory for openshift-sdn's CNI config file before indicating node readiness, we don't have an ordering issue where Istio could come before openshift-sdn, because Istio's CNI config will be in a different directory. Multus (and the NPWG spec) do not require that a network attachment return Interfaces or IPs, which Istio's clearly wouldn't. If we want all pods in the cluster to use Istio, then we'd have to have some mutating webhook to add the Istio network attachment selection annotation to pods that need it, and have some operator create the Istio AdditionalNetwork. The only issue we might have is the simple permissions structure that Multus currently implements. I think we can avoid making changes to the CNO API to support Istio for 4.2 that we aren't quite happy about, and figure out what the API should actually be for Istio and other chained plugins next round.
Yeah, closing this for now. We can revisit this as needed.
This needs openshift/origin#22898 to merge first.