
KEP-5901: Add Kubectl Checkpoint KEP #5092

Open
wants to merge 1 commit into master

Conversation

adrianreber (Contributor) commented:

With "Forensic Container Checkpointing" being Beta, and with discussions around graduating it to GA, the next step would be kubectl integration of the container checkpointing functionality.

In addition to the "Forensic Container Checkpointing" use case, this KEP lists multiple use cases for how checkpointing containers can be used.

One of the main motivations for this KEP is to make it easier for users to checkpoint containers, independent of the reason. Having it available via kubectl reduces the complexity of connecting to the node and accessing the kubelet checkpoint API endpoint directly.

  • One-line PR description: adding new KEP
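For context, today a user has to reach the kubelet on the node directly to trigger a checkpoint. A minimal sketch of that manual flow, assuming the Beta kubelet checkpoint endpoint from KEP-2008 (node address, namespace, pod, container, and credential paths below are placeholders):

```shell
# Placeholders; adjust for your cluster.
KUBELET="https://node-1.example.com:10250"
NS="default"; POD="webserver"; CTR="nginx"

# The kubelet checkpoint endpoint (Beta, KEP-2008) is addressed as
#   POST /checkpoint/{namespace}/{pod}/{container}
URL="${KUBELET}/checkpoint/${NS}/${POD}/${CTR}"
echo "POST ${URL}"

# Actually sending the request needs credentials authorized against the
# kubelet API, e.g. (commented out; certificate paths are placeholders):
#   curl -sk -X POST --cert admin.crt --key admin.key "${URL}"
```

It is exactly this node-level access that the proposed kubectl integration is meant to hide from the user.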


Signed-off-by: Adrian Reber <[email protected]>
@k8s-ci-robot added labels on Jan 27, 2025: cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA), kind/kep (Categorizes KEP tracking issues and PRs modifying the KEP directory), sig/cli (Categorizes an issue or PR as relevant to SIG CLI).
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: adrianreber
Once this PR has been reviewed and has the lgtm label, please assign ardaguclu for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the needs-ok-to-test label (Indicates a PR that requires an org member to verify it is safe to test) on Jan 27, 2025.
@k8s-ci-robot (Contributor) commented:

Hi @adrianreber. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/XXL label (Denotes a PR that changes 1000+ lines, ignoring generated files) on Jan 27, 2025.
@adrianreber adrianreber mentioned this pull request Jan 27, 2025
Comment on lines +195 to +198
Beta in Kubernetes 1.30, which means that the corresponding feature gate
defaults to the feature being enabled, the next step would be to extend the
existing checkpointing functionality from the *kubelet* to *kubectl* for easier
user consumption. The main motivation is to make it easier by not requiring
Member:

I'm not familiar with the criteria for kubectl commands, but shouldn't this first wait for the feature to be GA before being made available? Or is the plan to graduate KEP-2008 to GA and add the kubectl command at the same time?

Currently the design details are based on the existing pull request: [Add
'checkpoint' command to kubectl][pr120898]

The API server is extended to handle checkpoint requests from *kubectl*:
Member:

This is the most important change: adding a new endpoint to the apiserver is where you need to expand. Jordan also commented along those lines here: kubernetes/kubernetes#120898 (comment).

for the initialization to finish. The startup time is reduced to the time
necessary to read back all memory pages to their previous location.

This feature is already used in production to decrease startup time of
Member:

claims should have links to references

This feature is already used in production to decrease startup time of
containers.

Another similar use case for quicker starting containers has been reported in
Member:

reference?

#### Optimize Resource Utilization

This use case is motivated by interactive long running containers. One very
common problem with things like Jupyter notebooks or remote development
Member:

Naive question: is this state not stored in a database or some persistent storage and recovered when the user reconnects? In the end you have to keep all the state stored somewhere anyway.


#### Container Migration

One of the main use cases for checkpointing and restoring containers is
Member:

Migration between nodes? The IPs are most likely to be lost, so the application has to be agnostic of its IP, for example.

migrate containers or processes. It is a well researched topic especially
in the field of high performance computing (HPC). To avoid loss of work
already done by a container the container is migrated to another node before
the current node crashes. There are many scientific papers describing how
Member:

One or two references to these papers would be nice.

case, only useful for stateful containers.

With GPUs becoming a costly commodity, there is an opportunity to help
users save on costs by leveraging container checkpointing to prevent
Member:

Workloads are already doing checkpointing. Do you know how the state of the art of existing checkpointing mechanisms compares with container checkpointing?

Container migration for load balancing is something where checkpoint/restore
as implemented by CRIU is already used in production today. A prominent example
is Google as presented at the Linux Plumbers conference in 2018:
[Task Migration at Scale Using CRIU][task-migration]
Member:

The example says that connections are dropped and the client must reconnect. This is well understood at Google, where there are libraries and applications that handle client-side reconnection, but my observation is that most people expect to reconnect auto-magically, and AFAIK this will not do that.

##### Spot Instances

Yet another possible use case where checkpoint/restore is already used today
are spot instances. Spot instances are usually resources that are cheaper but
Member:

This will need to take into account the time you have for checkpointing; that is the nature of spot instances, eventually you'll get destroyed.

Comment on lines +487 to +491
Also, *kubectl* is extended to call this new API server interface. The API
server, upon receiving a request, will call the kubelet with the corresponding
parameters passed from *kubectl*. Once the checkpoint has been successfully written
to disk *kubectl* will return the name of the node as well as the location of
the checkpoint archive to the user:
@aojea (Member) commented on Feb 19, 2025:

As commented above, this is the trickiest part of the KEP, and you need to expand on the technical design here. These endpoints are complex to implement, and you also need to deal with version skew between the apiserver, the kubelet, and container runtimes.
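To make the quoted kubectl → API server → kubelet flow concrete, here is a sketch of the user-visible side only, based on the KEP text. The command name comes from the referenced PR, but the flags, archive naming, and output format below are illustrative, not final:

```shell
# Hypothetical invocation (see kubernetes/kubernetes#120898 for the actual proposal):
#   kubectl checkpoint webserver --container nginx
#
# Per the quoted KEP text, kubectl would report the node and the location of
# the checkpoint archive, roughly like:
NODE="node-1"
ARCHIVE="/var/lib/kubelet/checkpoints/checkpoint-webserver_default-nginx-2025-02-19T10:00:00Z.tar"
echo "node: ${NODE}"
echo "checkpoint archive: ${ARCHIVE}"
```

The archive path above assumes the `checkpoint-<pod>_<namespace>-<container>-<timestamp>.tar` naming convention the kubelet uses for its checkpoint directory today; the exact output format would be defined by the implementation.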
