
Support out-of-process and out-of-tree cloud providers #88

Closed
errordeveloper opened this issue Sep 12, 2016 · 119 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. stage/beta Denotes an issue tracking an enhancement targeted for Beta status tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team

Comments

@errordeveloper
Member

errordeveloper commented Sep 12, 2016

Feature Description:
Support out-of-tree and out-of-process cloud providers, a.k.a pluggable cloud providers.

Feature Progress:
In order to complete this feature, cloud provider dependencies need to be moved out of the following Kubernetes binaries, and then docs and tests need to be added. The links to the right of each binary denote the PRs that lead to the completion of the sub-feature.

  1. Kube-controller-manager -
  2. Kubelet
  3. Docs
  4. Tests
     e2e Tests - Incomplete

The cloud-specific functionality of the above features needs to be moved into a new binary called cloud-controller-manager that supports a plugin architecture.
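To make the plugin architecture concrete, here is a simplified, self-contained Go sketch of the pattern being described: each provider implements a common interface and registers a factory under its name, and the cloud-controller-manager selects one at startup. The Interface, RegisterCloudProvider, and fakeCloud names below are illustrative stand-ins, not the actual Kubernetes cloudprovider API.

```go
// Editor's sketch only: the real interface and registration helpers live in
// the Kubernetes cloudprovider package; the names and method set below are
// simplified stand-ins, not the actual API.
package main

import (
	"fmt"
	"io"
)

// Interface is a cut-down stand-in for the cloud provider interface that a
// cloud-controller-manager would consume (the real one also covers
// Instances, Routes, Zones, Clusters, ...).
type Interface interface {
	ProviderName() string
	LoadBalancer() (LoadBalancer, bool)
}

// LoadBalancer is the piece a provider implements for Services of type LoadBalancer.
type LoadBalancer interface {
	EnsureLoadBalancer(service string) (ingressIP string, err error)
}

// Factory builds a provider from an optional cloud config stream.
type Factory func(config io.Reader) (Interface, error)

var providers = map[string]Factory{}

// RegisterCloudProvider is the plugin hook: each provider registers a factory
// under its name, typically from an init() in its own package.
func RegisterCloudProvider(name string, f Factory) { providers[name] = f }

// fakeCloud is a hypothetical provider used only for this illustration.
type fakeCloud struct{}

func (f *fakeCloud) ProviderName() string               { return "fake" }
func (f *fakeCloud) LoadBalancer() (LoadBalancer, bool) { return f, true }
func (f *fakeCloud) EnsureLoadBalancer(service string) (string, error) {
	return "203.0.113.10", nil // pretend the cloud allocated a VIP
}

func init() {
	RegisterCloudProvider("fake", func(config io.Reader) (Interface, error) {
		return &fakeCloud{}, nil
	})
}

func main() {
	// The controller manager would select the provider by name (e.g. from a
	// --cloud-provider flag) and drive its control loops through the interface.
	cloud, err := providers["fake"](nil)
	if err != nil {
		panic(err)
	}
	if lb, ok := cloud.LoadBalancer(); ok {
		ip, _ := lb.EnsureLoadBalancer("default/my-service")
		fmt.Println(cloud.ProviderName(), "provisioned a load balancer at", ip)
	}
}
```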

Primary Contact: @wlan0

Responsible SIG: @k8s-mirror-cluster-lifecycle-feature-re

Design Proposal Link: kubernetes/community#128

Reviewers:
@luxas
@roberthbailey
@thockin

Approver:
@thockin

Feature Target:
Alpha: 1.7
Beta: 1.8
Stable: 1.10


Here's an updated status report for this feature; please let me know if anything needs clarification:

Beta (starting v1.11)

  • The common interface used by cloud providers has been well tested and support will not be dropped, though implementation details may change. Any methods that are deprecated should follow the Kubernetes Deprecation Policy.
  • The cloud controller manager has been tested by various cloud providers and is considered safe to use for out-of-tree providers. Features to be deprecated that are part of the cloud controller manager (controllers, component flags, etc) will follow the Kubernetes Deprecation Policy.
  • The cloud controller manager does not run in any cluster by default. It must be explicitly turned on and added like any other control plane component. Instructions for setup may slightly vary per cloud provider. More details here.

Reasoning for Graduation

There were a few things on our TODO list that we wanted to get done before graduating to beta, such as collecting E2E tests from all providers & improving out-of-tree storage. However, many of these initiatives require collaboration from external parties, which was delaying progress on this effort. In addition, there was uncertainty because we do not develop some of the components we rely on; a good example is whether CSI would be able to meet the demands for out-of-tree storage on par with in-tree storage support. Though in hindsight we have more confidence in CSI, prior to its beta release it was unclear if it would meet our requirements. With this context in mind, we decided to graduate to beta because:

  • blocking out-of-tree cloud providers from going beta meant that fewer in-tree providers would adopt this feature.
  • some goals (like E2E tests from cloud providers) require a significant amount of collaboration and may unnecessarily block progress for many releases.
  • features that are lacking from the cloud controller manager (mainly storage) would be handled by future projects from other SIGs (e.g. CSI by SIG Storage).

Goals for GA (targeted for v1.13/v1.14)

  • Frequently collect E2E test results from all in-tree & out-of-tree cloud providers (SIG Cloud Provider KEP: Reporting Conformance Test Results to Testgrid, community#2224)
  • Cloud Provider Documentation includes:
    • “Getting Started” documentation - outlines the necessary steps required to stand up a Kubernetes cluster.
    • Documentation outlining all cloud provider features such as LoadBalancers, Volumes, etc. There should be docs providing a high-level overview and docs that dig into sufficient details on how each feature works under the hood.
    • Docs should also be centralized in an automated fashion, with documentation from all cloud providers placed into a central location (ideally https://kubernetes.io/docs/home/).
  • A well-documented plan exists for how to migrate a cluster from using an in-tree cloud provider to an out-of-tree cloud provider; this only applies to AWS, Azure, GCP, OpenStack, and VMware.
  • All current cloud providers have implemented an out-of-tree solution, deprecation of in-tree code is preferred but not a requirement.
@colemickens

Benefits:

  • Easier configuration for providers like Azure that require a "cloud config" flag on kubelet/kcm. This file could instead be made a Secret (or ConfigMap + Secret). Makes bootstrapping easier and would eliminate the need for kubeadm to have special functionality for handling the cloudprovider flags.
  • Selective enablement. Some people want to run their own overlay network, but still want auto-provisioned L4 load balancers. There's no way to do that today.
  • Moves more things out of core Kubernetes repo/project, and enables faster turn-around for shipping new cloudproviders or iterating/testing changes.

Just a note, kubelet uses cloudprovider too, in addition to KCM.

@errordeveloper
Member Author

cc @kubernetes/sig-cluster-lifecycle @kubernetes/sig-network @kubernetes/sig-storage @kubernetes/sig-aws @kubernetes/sig-openstack

@thockin
Member

thockin commented Sep 15, 2016

I endorse this idea in general. I think the built-in cloud provider logic has served its purpose and it's time to modularize. I think there are a number of facets to this that we have to work out, including but not limited to:

  • CloudProvider and all the APIs therein
  • Volume drivers and provisioner support
  • Cluster turnup support

I think it would be worthwhile to start building a doc that details these and explores options for ejecting each one. I don't think there's anything here that hasn't been considered at SOME point. Once we get that written down, we can craft a roadmap...

@idvoretskyi
Member

Sounds useful. @errordeveloper, do we have any mailing lists or GitHub discussions on this question that we can refer to?

@errordeveloper
Member Author

@idvoretskyi not yet, this is probably very much on the radar of @kubernetes/sig-cluster-lifecycle.

@thockin
Member

thockin commented Sep 21, 2016

I don't know that anyone is working on speccing this. It touches on a few SIGs, but it is not exactly any of them.


@errordeveloper
Member Author

@thockin you are right. Maybe we should form sig-cloud?

@thockin
Member

thockin commented Sep 21, 2016

so. many. sigs. I don't think we need a SIG for this. I doubt if it is going to garner much resistance. There are just a lot of details to hammer out. Being on the radar for lifecycle is fine. The hardest part here is balancing the desire for modularity with the need for simplicity. That's what I want to see explored :)


@idvoretskyi
Member

@errordeveloper no need for yet another SIG (SIG-Cloud sounds like an abstract and umbrella one). I agree with @thockin - the primary SIG has to be @kubernetes/sig-cluster-lifecycle; while on behalf of @kubernetes/sig-openstack I'm going to track this item.

Hope other cloud SIGs will be involved in the process as well.

@errordeveloper
Member Author

@justinsb and I have discussed this on Slack, and it looks like we may be able to get closer to delivering similar user-facing value by exposing flags via component config. It also turns out --configure-cloud-routes is already there. It doesn't look like this should involve moving code as such.

@bboreham

I think there is additional value to moving code out to add-ons: it will enable further cloud providers to be added without enlarging the core of Kubernetes.

Example: kubernetes/kubernetes#32419

@errordeveloper
Member Author

Ah, but it also looks like someone is working on this:
kubernetes/kubernetes#32419 (comment).


@thockin
Member

thockin commented Sep 28, 2016

That does not read as someone working on it, to me. This is a big problem with a lot of facets, and it needs a capital-O Owner.


@ibuildthecloud

@thockin In reference to kubernetes/kubernetes#32419, Rancher would be up for being a guinea pig for this. @wlan0 will be working on this and if the scope is massive we will see if we can pull in more resources. I want to see if I understand the approach you were proposing in kubernetes/kubernetes#32419 and see if we are on the same page.

What we would do is implement the existing cloudprovider.Interface with a new cloud provider called "external". Ideally we wouldn't change the existing Interface, but if we hit some oddities it might make sense to modify it. This new external implementation will not delegate via a plugin model but instead through k8s resources and expect one to write controllers. Upfront it seems like we would need some new resources like CloudProviderLoadBalancer, Instance, Zone, Cluster, Route. A new cloud provider would need to be a controller that interacted with these resources.

That all seems pretty straightforward to me. Now the weird part is volume plugins. While it's not part of the CloudProvider interface, there seems to be a back-channel relationship between volume plugins and cloud providers. To decouple those I'd have to spend a bit more time researching.

@thockin Is this the basic approach you were thinking?

@thockin
Member

thockin commented Sep 30, 2016

I replied to @wlan0, but for the record...

simpler.

My "external" suggestion was more about designating that we are not using a
built-in and any controller loops that use CloudProvider should be
disabled. "" may be just as viable.

Once the built-in controllers are nullified, we run a cloud-specific
controller manager. I propose that the starting point LITERALLY be a fork
of the kube-controller-manager code. But instead of linking in 8
CloudProviders and switching on a flag, just link one. Simplify and
streamline.

One possible result is a library pkg that accepts a type CloudProvider interface. In doing this, I am sure you will find things that need
restructuring or that are significantly harder this way, and that is when
we should discuss design.

I would suggest leaving volumes for last :)
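To illustrate the shape being proposed here (a shared library of cloud-dependent loops plus a per-cloud binary that links exactly one provider), here is a hedged, self-contained Go sketch. CloudProvider, RunCloudControllers, and rancherCloud are hypothetical names introduced for illustration, not Kubernetes APIs; the route-sync loop stands in for the node/route/service controllers.

```go
// Editor's sketch only: none of these names are real Kubernetes APIs.
package main

import (
	"context"
	"log"
	"time"
)

// CloudProvider stands in for the interface a single cloud implements.
type CloudProvider interface {
	ProviderName() string
	EnsureRoute(ctx context.Context, node string) error
}

// RunCloudControllers is the "library pkg that accepts a CloudProvider":
// it owns the cloud-dependent loops (here just a toy route-sync loop) and
// stays provider-agnostic itself.
func RunCloudControllers(ctx context.Context, cloud CloudProvider, nodes []string) {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, n := range nodes {
				if err := cloud.EnsureRoute(ctx, n); err != nil {
					log.Printf("%s: route sync for %s failed: %v", cloud.ProviderName(), n, err)
				}
			}
		}
	}
}

// rancherCloud is a hypothetical provider; a real one would call the
// cloud's API inside EnsureRoute.
type rancherCloud struct{}

func (rancherCloud) ProviderName() string                               { return "rancher" }
func (rancherCloud) EnsureRoute(ctx context.Context, node string) error { return nil }

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Exactly one provider is linked into this binary, instead of eight
	// providers selected by a flag inside kube-controller-manager.
	RunCloudControllers(ctx, rancherCloud{}, []string{"node-a", "node-b"})
}
```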


@colemickens

Why is one linked in at all? What is the difference between --cloud-provider=external and simply not specifying the --cloud-provider at all? Then the Service/Routes are established with standalone addon controllers?

Or maybe we're on the same page? And then you're proposing a generic implementation of these standalone controllers (for now?) that can take the existing CloudProvider interface to preserve existing functionality?

@thockin
Member

thockin commented Sep 30, 2016

> Why is one linked in at all? What is the difference between --cloud-provider=external and simply not specifying the --cloud-provider at all? Then the Service/Routes are established with standalone addon controllers?

Without inspecting, I don't know if "" disables the controllers today, so I didn't want to break compat during the transition. That's all. If "" works, that is simpler.

> Or maybe we're on the same page? And then you're proposing a generic implementation of these standalone controllers (for now?) that can take the existing CloudProvider interface to preserve existing functionality?

I think same page. As a starting point, we would decompose the single {kube-controller-manager (KCM) + 8 CloudProviders} into 8 * {KCM + 1 CloudProvider}. At that point, each cloud-controller could diverge if they want to, or we could keep maintaining the cloud controller manager as a library.

@alena1108

So the controller manager embeds certain control loops. Some of these loops are cloud provider specific:

  • nodeController
  • volumeController
  • routeController
  • serviceController

but most are provider agnostic:

  • replicationController
  • endpointController
  • resourcequotacontroller
  • namespacecontroller
  • deploymentController
    etc

I wonder if it would make sense to split the controller into 2 parts: base-controller (k8s code base) and provider-specific-controller (external repo, deployed by the user by choice). This way it would be more similar to the ingress controller path with only a slight difference: controller loops should be maintained as a library, as all the providers will share them. Only the implementation - attach/detachDisk/etc - will be provider specific. To make it backwards compatible, we can disable initializing cloud provider specific controllers in the current controller-manager code if the provider is passed as empty on Kubernetes start.

Or maybe I'm just stating what you already meant by "keep maintaining the cloud controller manager as a library" @thockin
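A minimal Go sketch of the backwards-compatible gating idea in this comment: the base controller manager always runs the provider-agnostic loops and only starts the cloud-specific ones when a built-in provider is configured. The loop names mirror the lists above, but the function itself is an editor's illustration, not the real kube-controller-manager code.

```go
// Editor's illustration only, not the actual kube-controller-manager logic.
package main

import "fmt"

var agnosticLoops = []string{"replication", "endpoints", "resourcequota", "namespace", "deployment"}

var cloudSpecificLoops = []string{"node", "volume", "route", "service"}

// loopsToStart returns which control loops the base controller manager would
// run for a given --cloud-provider value.
func loopsToStart(cloudProvider string) []string {
	loops := append([]string{}, agnosticLoops...)
	// "" (no provider) and "external" both mean a separate cloud controller
	// manager owns the cloud-specific loops, so they are skipped here.
	if cloudProvider != "" && cloudProvider != "external" {
		loops = append(loops, cloudSpecificLoops...)
	}
	return loops
}

func main() {
	fmt.Println(loopsToStart("aws"))      // in-tree provider: run everything
	fmt.Println(loopsToStart("external")) // out-of-tree: cloud loops run elsewhere
}
```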

@thockin
Member

thockin commented Oct 1, 2016

I think we are saying the same thing. kube-controller-manager will still exist after this, but it will eventually get rid of all the cloud-specific stuff. All the cloud-stuff will move to per-cloud controller binaries.

I would leave volumes for VERY LAST :)


wlan0 pushed a commit to wlan0/kubernetes that referenced this issue Oct 6, 2016
Addresses: kubernetes/enhancements#88

This commit starts breaking the controller manager into two pieces, namely,

1. cloudprovider dependent piece
2. cloudprovider agnostic piece

the controller manager has the following control loops -

 - nodeController
 - volumeController
 - routeController
 - serviceController
 - replicationController
 - endpointController
 - resourcequotacontroller
 - namespacecontroller
 - deploymentController etc..

among the above controller loops,

 - nodeController
 - volumeController
 - routeController
 - serviceController

are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for different cloudproviders (currently 8) to make changes and iterate quickly. Moreover, the
cloudproviders are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud provider specific code will move out of
the core repository, and will be maintained by the cloud providers themselves.

I have added a new cloud provider called "external", which signals the controller-manager that
cloud provider specific loops are being run by another controller. I have added these changes in such
a way that the existing cloud providers are not affected. This change is completely backwards compatible, and
does not require any changes to the way kubernetes is run today.

Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will
be addressed in a different commit/issue.
@dims
Member

dims commented Oct 31, 2016

Hi @thockin Is this the one you mentioned to me in the hallway conversation in Barcelona? Just making sure!

@luxas
Member

luxas commented Oct 31, 2016

Is this planned for v1.6 or what's the plan?
@wlan0 @ibuildthecloud @alena1108

@thockin
Member

thockin commented Oct 31, 2016

@dims yeah, this is the one.


@idvoretskyi added this to the next-milestone milestone Nov 3, 2016
@andrewsykim
Member

This feature will remain beta in 1.13. In Q4 we're planning to break this feature down a bit because the "out-of-tree" cloud provider model has a lot of different components and not all the pieces need to be feature gated. We're hoping to have final consensus on this at KubeCon Seattle.

@andrewsykim
Member

andrewsykim commented Dec 4, 2018

We're going to have more discussions around this during KubeCon, but my sentiment for this enhancement at the moment is that it would be more effective if we created enhancements per in-tree provider (e.g. #631 & kubernetes/kubernetes#50752) and closed this one out. The reasoning is that we've already built the functionality to support out-of-tree providers and now need more providers executing on it. This catch-all enhancement makes it difficult to track the progress of each provider going forward (some providers may already support "beta" level out-of-tree cloud providers while others have not even started).

@kacole2

kacole2 commented Dec 4, 2018

@andrewsykim what if it was renamed "deprecate in-tree cloud providers"? Or something along the lines of removing all of them. I agree that every vendor will be working on their own implementation, but I'm not sure they are going to take the time to deprecate or remove the code from the core. Thoughts?

@andrewsykim
Member

Yes, removal is definitely on our roadmap! I would still be in favor of a new enhancement for that as this issue is pretty cluttered as it is. What do you think?

@andrewsykim
Member

On second thought, removal of in-tree providers may also depend on the progress of each provider, so it might be good to keep that close to each cloud provider's out-of-tree enhancement issue.

@kacole2

kacole2 commented Dec 4, 2018

I believe a tracking issue should be made for every out-of-tree provider that also has a native in-tree provider. Beyond that, it's a larger discussion to see if a tracking issue needs to be created for "every" provider, since it's an out-of-tree driver and won't affect anything in k/k.

@luxas
Member

luxas commented Dec 5, 2018

I believe a tracking issue should be made for every out-of-tree provider that also has a native in-tree provider. Beyond that, it's a larger discussion to see if a tracking issue needs to be created for "every" provider, since it's an out-of-tree driver and won't affect anything in k/k.

Totally agree with this statement, and @andrewsykim's point as well. As this is going kinda well now from an architectural perspective (it's possible to move out in practice now), it makes sense to ONLY track the individual in-tree -> out-of-tree migrations we have, and once a provider is totally out of the core k8s tree, it will not be tracked here anymore but will have its own release cycle and all that. With that, I agree with what was said previously, and am saying we should not add e.g. a "Kubernetes enhancement issue" here for an all-out-of-core (from the start) cloud provider like Digital Ocean.

@justaugustus
Member

justaugustus commented Dec 5, 2018

+1 to having separate enhancement tracking issues (and associated KEPs) for each provider.

@andrewsykim
Member

Broke up this issue to be per in-tree cloud provider:

AWS: #631
Azure: #667
CloudStack: #672 (needs OWNER)
GCE: #668
IBM: #671
OpenStack: #669
oVirt: #673 (needs OWNER)
Photon: #674 (needs OWNER)
vSphere: #670

Will close this issue next week if there are no objections.

@kacole2

kacole2 commented Jan 2, 2019

@andrewsykim nice work! I believe Photon may be in a limbo state. @dougm or @frapposelli can you comment on that?

@dougm
Member

dougm commented Jan 3, 2019

CloudStack, oVirt and Photon are in the process of being removed for 1.14: kubernetes/kubernetes#72178

@krmayankk

What is tracking the fact that once the in-tree cloud providers are removed, the external cloud controller manager has to be installed by default as an add-on when installing on GCP/AWS? Otherwise things like getting a Service with a load balancer won't work.

@kacole2

kacole2 commented Jan 3, 2019

@krmayankk installation and configuration of individual cloud providers will have to be done on a per-use case basis. Each cloud provider currently has their implementation in their respective SIGs. Documentation will be required for each cloud provider. I don't foresee any tracking beyond that since the vendor is responsible for making it consumer friendly.

@claurence

@wlan0 Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th and I want to remind everyone that all enhancements must have a KEP.

@andrewsykim
Member

andrewsykim commented Jan 16, 2019

@claurence thanks for the ping! We're going to be closing this issue in the next few days in favor of having an enhancement issue per cloud provider (see #88 (comment)). I will follow up with leads for each provider to update their enhancement issues (and respective KEPs if there aren't any).

@andrewsykim
Member

/close

@k8s-ci-robot
Contributor

@andrewsykim: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

ingvagabund pushed a commit to ingvagabund/enhancements that referenced this issue Apr 2, 2020
Add audit logs to cluster-logging-log-forwarding enhancement proposal
howardjohn pushed a commit to howardjohn/enhancements that referenced this issue Oct 21, 2022
* Add a unique identifier to each feature

* Change IDs to strings

* Update feature IDs