@@ -23,8 +23,8 @@ approvers:
   - "@mwielgus"
 editor: TBD
 creation-date: 2018-11-06
-last-updated: 2019-10-25
-status: implementable
+last-updated: 2018-11-06
+status: provisional
 see-also:
 replaces:
 superseded-by:
@@ -49,7 +49,6 @@ superseded-by:
   * [Scheduler and API Server interaction](#scheduler-and-api-server-interaction)
   * [Flow Control](#flow-control)
   * [Container resource limit update ordering](#container-resource-limit-update-ordering)
-  * [Container resource limit update failure handling](#container-resource-limit-update-failure-handling)
   * [Notes](#notes)
   * [Affected Components](#affected-components)
   * [Future Enhancements](#future-enhancements)
@@ -168,12 +167,6 @@ Kubelet calls UpdateContainerResources CRI API which currently takes
 but not for Windows. This parameter changes to *runtimeapi.ContainerResources*,
 that is runtime agnostic, and will contain platform-specific information.
 
-Additionally, GetContainerResources CRI API is introduced that allows Kubelet
-to query currently configured CPU and memory limits for a container.
-
-These CRI changes are a separate effort that does not affect the design
-proposed in this KEP.
-
 ### Kubelet and API Server Interaction
 
 When a new Pod is created, Scheduler is responsible for selecting a suitable
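The kept text in the hunk above says the UpdateContainerResources CRI parameter becomes the runtime-agnostic *runtimeapi.ContainerResources*, which wraps the platform-specific limits. Below is a minimal Go sketch of what such a call from Kubelet's side could look like; the local `runtimeService` interface, the `resizeContainer` helper, and the chosen fields are illustrative assumptions, not the KEP's or cri-api's authoritative definitions.

```go
// Package kubeletresize is an illustrative sketch, not Kubelet code.
package kubeletresize

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runtimeService is a stand-in for the CRI runtime client held by Kubelet;
// the real interface lives in cri-api and may differ in shape.
type runtimeService interface {
	UpdateContainerResources(ctx context.Context, containerID string, resources *runtimeapi.ContainerResources) error
}

// resizeContainer applies new CPU and memory limits to one running container
// using the runtime-agnostic ContainerResources wrapper described above.
func resizeContainer(ctx context.Context, rs runtimeService, containerID string, cpuQuota, memLimitBytes int64) error {
	return rs.UpdateContainerResources(ctx, containerID, &runtimeapi.ContainerResources{
		// Linux carries the platform-specific limits; a parallel Windows field
		// exists in the same wrapper message for Windows runtimes.
		Linux: &runtimeapi.LinuxContainerResources{
			CpuQuota:           cpuQuota,      // CPU CFS quota per period, in microseconds
			MemoryLimitInBytes: memLimitBytes, // memory limit in bytes
		},
	})
}
```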
@@ -290,16 +283,6 @@ updates resource limit for the Pod and its Containers in the following manner:
 In all the above cases, Kubelet applies Container resource limit decreases
 before applying limit increases.
 
-#### Container resource limit update failure handling
-
-If multiple Containers in a Pod are being updated, and UpdateContainerResources
-CRI API fails for any of the containers, Kubelet will backoff and retry at a
-later time. Kubelet does not attempt to update limits for containers that are
-lined up for update after the failing container. This ensures that sum of the
-container limits does not exceed Pod-level cgroup limit at any point. Once all
-the container limits have been successfully updated, Kubelet updates the Pod's
-Status.ContainerStatuses[i].Resources to match the desired limit values.
-
 #### Notes
 
 * If CPU Manager policy for a Node is set to 'static', then only integral
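The ordering rule kept in the hunk above (apply container limit decreases before increases, so the sum of container limits never exceeds the Pod-level cgroup limit mid-update) can be illustrated with a small sketch. The types and the `orderUpdates` helper below are hypothetical and only show one way such an ordering could be computed; they are not Kubelet's actual implementation.

```go
// Package kubeletresize is an illustrative sketch, not Kubelet code.
package kubeletresize

import "sort"

// containerUpdate describes one container's current and desired memory limit.
type containerUpdate struct {
	ContainerID   string
	CurrentMemory int64 // bytes
	DesiredMemory int64 // bytes
}

// orderUpdates returns the updates with memory decreases first, then increases,
// so the sum of container limits stays within the Pod-level cgroup limit
// at every intermediate step.
func orderUpdates(updates []containerUpdate) []containerUpdate {
	ordered := append([]containerUpdate(nil), updates...)
	sort.SliceStable(ordered, func(i, j int) bool {
		// A decrease (desired < current) sorts before anything that is not a decrease.
		iDecrease := ordered[i].DesiredMemory < ordered[i].CurrentMemory
		jDecrease := ordered[j].DesiredMemory < ordered[j].CurrentMemory
		return iDecrease && !jDecrease
	})
	return ordered
}
```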