diff --git a/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/README.md b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/README.md
new file mode 100644
index 000000000000..a2806c3bd15a
--- /dev/null
+++ b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/README.md
@@ -0,0 +1,826 @@
+# KEP-3158: Optional maximum etcd page size on every list request
+
+
+- [Release Signoff Checklist](#release-signoff-checklist)
+- [Summary](#summary)
+- [Motivation](#motivation)
+  - [Goals](#goals)
+  - [Non-Goals](#non-goals)
+- [Proposal](#proposal)
+  - [Desired outcome](#desired-outcome)
+  - [How do we measure success](#how-do-we-measure-success)
+  - [User Stories (Optional)](#user-stories-optional)
+    - [Story 1](#story-1)
+    - [Story 2](#story-2)
+  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
+  - [Risks and Mitigations](#risks-and-mitigations)
+- [Design Details](#design-details)
+  - [Setting appropriate maximum limit](#setting-appropriate-maximum-limit)
+  - [e2e testing](#e2e-testing)
+    - [workload I (2k pods)](#workload-i-2k-pods)
+    - [workload II (10k pods)](#workload-ii-10k-pods)
+    - [workload III (100k pods)](#workload-iii-100k-pods)
+  - [Test Plan](#test-plan)
+  - [Graduation Criteria](#graduation-criteria)
+    - [Alpha](#alpha)
+  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
+  - [Version Skew Strategy](#version-skew-strategy)
+- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
+  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
+  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
+  - [Monitoring Requirements](#monitoring-requirements)
+  - [Dependencies](#dependencies)
+  - [Scalability](#scalability)
+  - [Troubleshooting](#troubleshooting)
+- [Implementation History](#implementation-history)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+- 
[Infrastructure Needed (Optional)](#infrastructure-needed-optional)
+
+
+## Release Signoff Checklist
+
+Items marked with (R) are required *prior to targeting to a milestone / release*.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+Performing API queries that return all of the objects of a given resource type (GET /api/v1/pods, GET /api/v1/secrets) without pagination can lead to significant variations in peak 
memory use on etcd.
+
+This proposal covers an approach where the kube-apiserver makes multiple paginated list calls to etcd instead of a single large unpaginated call, reducing the peak memory usage of etcd and thereby its chances of out-of-memory crashes. In setups where etcd is colocated with kube-apiserver on the same instance, this also frees up memory for kube-apiserver.
+
+## Motivation
+
+Protect etcd from out-of-memory crashes and prevent the cascading, unpredictable failures they cause. Recovery by kube-apiserver and other downstream components (re-creating gRPC connections to etcd, rebuilding all caches, and resuming internal reconcilers) can be expensive. Lowering etcd RAM consumption at peak time also gives an autoscaler time to increase etcd's memory budget.
+
+In Kubernetes, etcd has only a single client: kube-apiserver. It is therefore feasible to control how requests are sent from kube-apiserver to etcd.
+
+[APIListChunking](https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks) should be in effect in most kube-apiserver setups; `client-go/tools/pager` and `client-go/informers` are good libraries to use, and they follow the pagination rule of thumb to reduce server load.
+
+However, some native clients still call the server without any limit to get a full list of a resource. In addition, etcd buffers each such response entirely in memory before sending the full set; it does not [stream the result](https://github.com/etcd-io/etcd/pull/12343#issuecomment-784008186) gracefully. This creates significant memory pressure on etcd. 
+
+
+### Goals
+
+- Reduce etcd memory consumption when kube clients list a large number of resources
+
+### Non-Goals
+
+- Reduce kube-apiserver memory usage for unpaginated list calls
+- Implement etcd server-side streaming to serve kube-apiserver list requests
+- Implement etcd QoS
+- Reduce list call load on etcd using priority & fairness settings on the kube-apiserver
+
+## Proposal
+
+### Desired outcome
+By default, this proposal is a no-op.
+If the relevant feature gate is enabled and the `max-list-etcd-limit` command line argument to kube-apiserver is set to `x`, where `x` >= 500, then:
+- kube-apiserver splits requests to etcd into multiple pages. The maximum etcd page size is `x`. If the user-provided limit `y` is smaller than or equal to `x`, those requests are passed through unchanged.
+- The returned response is identical to the one produced without the split.
+
+### How do we measure success
+If the relevant feature gate is enabled and the `max-list-etcd-limit` flag on kube-apiserver is set to `x`, where `x` >= 500, then:
+
+- The number of etcd OOM-killed incidents is reduced.
+- etcd memory consumption is lower than before.
+- Only a small set of user list requests is split by kube-apiserver.
+- For impacted user list requests, the added latency, as measured by apiserver_list_from_storage_duration_p50/p90/p99, is within 5s.
+- apiserver_list_duration_p99 is within the defined scalability SLO.
+
+### User Stories (Optional)
+
+#### Story 1
+A user deployed an application as a DaemonSet that queries the pods belonging to the node it runs on. This translates to an etcd range query from `/registry/pods/` to `/registry/pods0`, and each such query payload is close to 200MiB. 
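The `/registry/pods0` range end is not a typo: etcd derives the exclusive end of a prefix range by incrementing the last byte of the prefix, so `/` (0x2F) becomes `0` (0x30). The following is a minimal illustration of that convention (a sketch for this KEP, not etcd's actual code, although `clientv3.GetPrefixRangeEnd` behaves the same way):

```go
package main

import "fmt"

// prefixRangeEnd returns the exclusive end key of a prefix range by
// incrementing the last byte that can be incremented. For "/registry/pods/"
// this yields "/registry/pods0", matching the range seen in the etcd
// slow-request log below.
func prefixRangeEnd(prefix string) string {
	b := []byte(prefix)
	for i := len(b) - 1; i >= 0; i-- {
		if b[i] < 0xff {
			b[i]++
			return string(b[:i+1])
		}
	}
	// A prefix of all 0xff bytes has no successor; etcd treats this
	// as "to the end of the keyspace".
	return "\x00"
}

func main() {
	fmt.Println(prefixRangeEnd("/registry/pods/")) // /registry/pods0
}
```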
+```json
+{
+  "level": "warn",
+  "ts": "2022-03-31T23:00:06.771Z",
+  "caller": "etcdserver/util.go:163",
+  "msg": "apply request took too long",
+  "took": "111.479086ms",
+  "expected-duration": "100ms",
+  "prefix": "read-only range ",
+  "request": "key:\"/registry/pods/\" range_end:\"/registry/pods0\" ",
+  "response": "range_response_count:27893 size:191330794"
+}
+```
+
+```
+I0331 23:00:06.844303 10 trace.go:205] Trace[484169231]: "List etcd3" key:/pods,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-Mar-2022 23:00:04.575) (total time: 2268ms):
+Trace[484169231]: [2.268397203s] [2.268397203s] END
+
+I0331 23:00:06.848003 10 trace.go:205] Trace[638277422]: "List" url:/api/v1/namespaces//pods,user-agent:OpenAPI-Generator/12.0.1/python,client:10.13.20.49 (31-Mar-2022 23:00:04.575) (total time: 2272ms):
+Trace[638277422]: ---"Listing from storage done" 2268ms (23:00:00.844)
+Trace[638277422]: [2.272115626s] [2.272115626s] END
+```
+Ten clients issuing such queries led to etcd being OOM-killed, which in turn failed other kube-apiserver requests hitting that node.
+
+With this approach and `max-list-etcd-limit = 500`, users should expect:
+- kube-apiserver is healthy and serving requests because the etcd cluster is healthy
+- the "List etcd3" duration increases from about 2 seconds to about 3 seconds, and this duration is available as a Prometheus metric exposed on the "/metrics" endpoint
+
+#### Story 2
+
+### Notes/Constraints/Caveats (Optional)
+
+- Setting the maximum limit too low will increase the total round-trip time between kube-apiserver and etcd beyond the default 1-minute request timeout, causing kube-apiserver to cancel the requests prematurely. Example: https://github.com/kubernetes/kubernetes/pull/94303.
+
+- Users are encouraged to paginate their list calls to kube-apiserver so they do not run into the above situation in the first place. 
+
+### Risks and Mitigations
+
+## Design Details
+
+The default behavior will continue to be the same as before when the flag is unspecified, i.e. no maximum page size will be imposed on list calls to etcd.
+
+Modify the [apiserver/blob/master/pkg/storage/etcd3/store.go](https://github.com/kubernetes/apiserver/blob/master/pkg/storage/etcd3/store.go#L565-L576) list path to apply a maximum limit to every etcd range request and to issue multiple range requests when necessary to satisfy the original list. Each etcd response is filtered and decoded before the next range request is issued. The ResourceVersion returned by the first etcd range request (or the one the user provided) is used on all later requests to ensure a consistent snapshot of the resource collection. The implementation should be unified with the [current pagination implementation](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store.go#L686).
+
+### Setting appropriate maximum limit
+
+Choosing the limit depends heavily on the cluster setup: the etcd memory budget, resource object sizes, the number of concurrent requests, and so on. Theoretically, for a list-dominated workload, the etcd memory budget should be proportional to (M * maximum limit * object size * number of concurrent list requests), where M is a factor to be derived from the memory that builds up while serving multiple range pages and from the Go garbage collector releasing memory back to the OS. A Kubernetes control plane autoscaler should increase the maximum limit as the memory budget grows, to reduce the etcd <-> kube-apiserver round-trip time.
+
+However, this may be over-complicated, and it would be good to first verify a heuristic value like `500`.
+
+### e2e testing
+The test was performed on a patched EKS 1.21 kube-apiserver. 
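For reference while reading the results below, the paginated storage list described in Design Details can be sketched as follows. This is a pure-Go illustration with hypothetical simplified types (`rangeResult`, `rangeFunc`, `listAll`, and `fakeRange` are all made up for this sketch); it is not the actual store.go change:

```go
package main

import "fmt"

// rangeResult is a simplified stand-in for an etcd RangeResponse.
type rangeResult struct {
	Keys     []string // returned keys (object payloads elided for brevity)
	Revision int64    // store revision the response was served at
	More     bool     // true if keys remain beyond the returned page
}

// rangeFunc is a hypothetical stand-in for the etcd client's range call:
// return at most limit keys at or after start, read at revision rev
// (rev == 0 means "latest").
type rangeFunc func(start string, limit, rev int64) rangeResult

// fakeRange serves pages out of an in-memory sorted key list, ignoring rev.
func fakeRange(store []string) rangeFunc {
	return func(start string, limit, rev int64) rangeResult {
		var page []string
		for _, k := range store {
			if k >= start && int64(len(page)) < limit {
				page = append(page, k)
			}
		}
		more := len(page) > 0 && page[len(page)-1] < store[len(store)-1]
		return rangeResult{Keys: page, Revision: 42, More: more}
	}
}

// listAll replaces one unbounded range request with a sequence of limited
// ones. The revision from the first page is pinned for every later page,
// so all pages are read from the same consistent snapshot.
func listAll(doRange rangeFunc, startKey string, maxLimit int64) []string {
	var out []string
	key, rev := startKey, int64(0)
	for {
		resp := doRange(key, maxLimit, rev)
		out = append(out, resp.Keys...)
		if rev == 0 {
			rev = resp.Revision // pin the snapshot after the first page
		}
		if !resp.More || len(resp.Keys) == 0 {
			return out
		}
		// The next page starts just after the last returned key.
		key = resp.Keys[len(resp.Keys)-1] + "\x00"
	}
}

func main() {
	pods := []string{"/registry/pods/a", "/registry/pods/b", "/registry/pods/c"}
	// prints [/registry/pods/a /registry/pods/b /registry/pods/c]
	fmt.Println(listAll(fakeRange(pods), "/registry/pods/", 2))
}
```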
The changes are https://github.com/kubernetes/kubernetes/compare/v1.21.9...chaochn47:v1.21.9-test-apiserver-paging
+
+The customized kube-apiserver command line arguments are:
+- `--default-maximum-list-etcd-limit` = `0`, `500`, `1000`
+- `--request-timeout` = `5m`, to measure kube-apiserver list latency on the client side without the context being cancelled
+
+#### workload I (2k pods)
+```
+# 1.21 cluster
+# 1 CPI with 96cpu, 384gb RAM
+# 3 etcd with 2cpu, 8gb RAM
+
+# workload
+# 2032 pods, pod object size 44 kiB, each list pods payload is 86.6MiB
+# list concurrency 50
+```
+* max of etcd process mem_used_percent and mem_used_bytes
+  * dropped from 97.76% to 26%, from 8gb to 2.08gb
+  * ![](./list-2k-pods-etcd-mem-used-percent.png)
+  * the left 5 waves are 5 back-to-back runs with no limit; the right 5 waves are runs with --default-maximum-list-etcd-limit=500
+* p99 of apiserver list duration
+  * roughly the same
+  * ![](./list-2k-pods-apiserver-list-duration-seconds-p99.png)
+
+
+#### workload II (10k pods)
+```
+# 1.21 cluster
+# 1 CPI with 96cpu, 384gb RAM
+# 3 etcd with 2cpu, 8gb RAM
+
+# workload
+# 10k pods, pod object size 44kiB, each list payload is 44kiB * 10k ~= 440MiB
+# list concurrency 1
+```
+
+| 10k pods | --default-maximum-list-etcd-limit=500 | --default-maximum-list-etcd-limit=1000 | --default-maximum-list-etcd-limit=0 |
+|-------------------------------------------|---------------------------------------|----------------------------------------|-------------------------------------|
+| etcd_mem_used_percent | 7.7% | 9.2% | 32.3% |
+| etcd_mem_used_bytes | 0.91G | 1.05G | 2.82G |
+| apiserver_list_etcd3_duration_avg | 2.2s | 2.1s | 2.1s |
+| apiserver_list_etcd3_duration_p90 | 2.4s | 2.3s | 2.3s |
+| apiserver_list_etcd3_duration_p99 | 2.5s | 2.4s | 2.6s |
+| apiserver_send_list_response_duration_avg | 27.4s | 29.8s | 28.4s |
+| apiserver_send_list_response_duration_p90 | 29.2s | 32.2s | 29.8s |
+| apiserver_send_list_response_duration_p99 | 29.9s | 32.8s | 31.3s 
|
+| kube_client_list_duration_p50 | 36s | 38s | 37s |
+| kube_client_list_duration_p90 | 37s | 40s | 38s |
+| kube_client_list_duration_p99 | 38s | 41s | 38.5s |
+
+
+#### workload III (100k pods)
+
+```
+# 1.21 cluster
+# 1 CPI with 96cpu, 384gb RAM
+# 3 etcd with 8cpu, 32gb RAM
+
+# workload
+# 102k pods, pod object size 11kiB, each list payload is 11kiB * 102k ~= 1.2GiB
+# list concurrency 1
+```
+
+| 102k pods | --default-maximum-list-etcd-limit=500 | --default-maximum-list-etcd-limit=1000 | --default-maximum-list-etcd-limit=0 |
+|-------------------------------------------|---------------------------------------|----------------------------------------|-------------------------------------|
+| etcd_mem_used_percent | 2.2% | 7.07% | 26.1% |
+| etcd_mem_used_bytes | 1.92GiB | 2.52GiB | 8.86GiB |
+| apiserver_list_etcd3_duration_avg | 12.3s | 10.5s | 7.5s |
+| apiserver_list_etcd3_duration_p90 | 12.8s | 10.9s | 8.1s |
+| apiserver_list_etcd3_duration_p99 | 13.5s | 12.1s | 8.5s |
+| apiserver_send_list_response_duration_avg | 76.4s | 71.8s | 71.9s |
+| apiserver_send_list_response_duration_p90 | 79.8s | 76.2s | 76.0s |
+| apiserver_send_list_response_duration_p99 | 82.3s | 78.1s | 77.6s |
+| kube_client_list_duration_p50 | 115s | 109s | 105s |
+| kube_client_list_duration_p90 | 118s | 112s | 109s |
+| kube_client_list_duration_p99 | 120s | 114s | 110s |
+
+
+### Test Plan
+
+### Graduation Criteria
+
+#### Alpha
+
+- Feature implemented behind a feature flag
+- Initial e2e tests completed and enabled
+
+### Upgrade / Downgrade Strategy
+
+### Version Skew Strategy
+
+## Production Readiness Review Questionnaire
+
+### Feature Enablement and Rollback
+
+###### How can this feature be enabled / disabled in a live cluster? 
+
+- [x] Feature gate (also fill in values in `kep.yaml`)
+  - Feature gate name: `MaximumListLimitOnEtcd`
+  - Components depending on the feature gate: `kube-apiserver`
+- [ ] Other
+  - Describe the mechanism:
+  - Will enabling / disabling the feature require downtime of the control
+    plane?
+    Yes, this requires a restart of the kube-apiserver instance. However, there shouldn't be downtime in HA setups where at least one replica is kept active at any given time during the update.
+  - Will enabling / disabling the feature require downtime or reprovisioning
+    of a node? (Do not assume `Dynamic Kubelet Config` feature is enabled).
+
+###### Does enabling the feature change any default behavior?
+
+###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
+
+###### What happens if we reenable the feature if it was previously rolled back?
+
+###### Are there any tests for feature enablement/disablement?
+
+### Rollout, Upgrade and Rollback Planning
+
+###### How can a rollout or rollback fail? Can it impact already running workloads?
+
+###### What specific metrics should inform a rollback?
+
+###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
+
+###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
+
+### Monitoring Requirements
+
+###### How can an operator determine if the feature is in use by workloads?
+
+###### How can someone using this feature know that it is working for their instance?
+
+- [ ] Events
+  - Event Reason:
+- [ ] API .status
+  - Condition name:
+  - Other field:
+- [ ] Other (treat as last resort)
+  - Details:
+
+###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
+
+###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? 
+ + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? 
+ +## Implementation History + + + +## Drawbacks + + + +## Alternatives + + + +## Infrastructure Needed (Optional) + + diff --git a/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/kep.yaml b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/kep.yaml new file mode 100644 index 000000000000..c7955ea60afc --- /dev/null +++ b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/kep.yaml @@ -0,0 +1,54 @@ +title: Optional maximum limit on list requests to etcd +kep-number: 3158 +authors: + - "@chaochn47" +owning-sig: sig-api-machinery +participating-sigs: + - sig-scalability +status: provisional +creation-date: 2022-04-05 +reviewers: + - "@wojtek-t" + - "@shyamjvs" + - "@anguslees" + - "@ptabor" + - "@serathius" +approvers: + - "@deads2k" + - "@lavalamp" + +##### WARNING !!! ###### +# prr-approvers has been moved to its own location +# You should create your own in keps/prod-readiness +# Please make a copy of keps/prod-readiness/template/nnnn.yaml +# to keps/prod-readiness/sig-xxxxx/00000.yaml (replace with kep number) +#prr-approvers: + +see-also: +replaces: + +# The target maturity stage in the current dev cycle for this KEP. +stage: implementable + +# The most recent milestone for which work toward delivery of this KEP has been +# done. This can be the current (upcoming) milestone, if it is being actively +# worked on. +latest-milestone: "v1.25" + +# The milestone at which this feature was, or is targeted to be, at each stage. 
+milestone: + alpha: "v1.25" + beta: tbd + stable: tbd + +# The following PRR answers are required at alpha release +# List the feature gate name and the components for which it must be enabled +feature-gates: + - name: MaximumListLimitOnEtcd + components: + - kube-apiserver +disable-supported: true + +# The following PRR answers are required at beta release +metrics: + - tbd diff --git a/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/list-2k-pods-apiserver-list-duration-seconds-p99.png b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/list-2k-pods-apiserver-list-duration-seconds-p99.png new file mode 100644 index 000000000000..fc5635b3c6ce Binary files /dev/null and b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/list-2k-pods-apiserver-list-duration-seconds-p99.png differ diff --git a/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/list-2k-pods-etcd-mem-used-percent.png b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/list-2k-pods-etcd-mem-used-percent.png new file mode 100644 index 000000000000..fde4faa0f18b Binary files /dev/null and b/keps/sig-api-machinery/3158-optional-maximum-limit-on-list-requests-to-etcd/list-2k-pods-etcd-mem-used-percent.png differ