WIP: ci-operator/step-registry/ipi/conf/workload: Synthetic workload step #15674
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing an approval comment.
Force-pushed from ab8e77a to 797ae8b (compare).
What kind of tooling is missing to apply manifests day 2? The steps have access to `oc` via the cli image, and to `KUBECONFIG` for the cluster via shared environment variables. Would that be good enough?
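The mechanism this comment describes could be sketched roughly as below. The `SHARED_DIR`/`kubeconfig` layout follows common step-registry conventions, and the manifest filename is hypothetical, not taken from this PR; the sketch is guarded so it degrades to a no-op where no cluster or `oc` binary is available.

```shell
#!/bin/bash
# Hypothetical day-2 apply from a test step: the cli image provides oc,
# and the kubeconfig is conventionally shared at ${SHARED_DIR}/kubeconfig.
set -o errexit -o nounset -o pipefail

export KUBECONFIG="${SHARED_DIR:-/tmp}/kubeconfig"

# Only attempt the apply when both oc and a kubeconfig are present.
if command -v oc >/dev/null && [ -f "${KUBECONFIG}" ]; then
  oc apply -f "${SHARED_DIR:-/tmp}/manifest_synthetic-workload.yml"
else
  echo "skipping: no oc or kubeconfig available"
fi
```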
Force-pushed from 797ae8b to e2f721f (compare).
Force-pushed from e2f721f to 83a3c4d (compare).
@wking: The following tests failed.

Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Perf folks are going to handle CI for this use-case, and I don't have time to figure out why my approach isn't working ;)

/close
@wking: Closed this PR.

Details: In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Make it easier to turn up issues in CI that show up in the CI clusters. Those clusters are mostly full of CI jobs with moderate CPU load and PodDisruptionBudgets that protect them from being evicted. They run for up to 4 hours before being terminated, and have a 30-minute termination grace period on top of that. We obviously can't use a workload that's that slow to drain in a CI job, or our CI job would overshoot the limit and be killed.

In this commit, I'm adding a new step (linked up just to the AWS update workflow for now) to install a Deployment that asks for 100m of CPU but then (I think) consumes as much CPU as is available. It would be awesome if there were a test widget in some shipped container (like `tools`) that could be configured to consume a particular amount of CPU and memory, although I guess it would be hard to parameterize "regular" memory access. Anyhow, this is a first-pass WIP to feel out this general approach.
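A Deployment of the shape described above might look something like the following sketch. The namespace, image, replica count, and busy-loop command are all illustrative assumptions, not taken from this PR; the only detail the description pins down is the 100m CPU request with no limit.

```yaml
# Sketch of a synthetic-workload Deployment: small CPU request,
# busy-loop container that consumes whatever CPU is otherwise free.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: synthetic-workload        # hypothetical name
  namespace: ci-synthetic-workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: synthetic-workload
  template:
    metadata:
      labels:
        app: synthetic-workload
    spec:
      containers:
      - name: burn
        image: registry.access.redhat.com/ubi8/ubi-minimal
        command: ["/bin/sh", "-c", "while true; do :; done"]
        resources:
          requests:
            cpu: 100m            # tiny request, no limit, so it soaks up slack CPU
```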
The manifest will subsequently be picked up and fed to the installer in the ipi-install-install step, or one of its close relatives. We'd be fine installing this as a day-2 manifest as well, but we don't have tooling in place for that yet, and installing it via the installer gives it more time to roll out into the compute nodes before the test step rolls around.
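The hand-off to the install step can be sketched as below: a conf step writes the manifest into `SHARED_DIR`, where the install step can pick it up. The `manifest_*.yml` naming and the fallback to a temporary directory are assumptions for illustration, not details confirmed by this PR.

```shell
#!/bin/bash
# Sketch of a conf step handing a manifest to the installer step:
# write the Deployment manifest into SHARED_DIR so a later install
# step can copy it into the installer's manifests directory.
set -o errexit -o nounset -o pipefail

# Fall back to a temp dir so the sketch runs outside a CI pod.
SHARED_DIR="${SHARED_DIR:-$(mktemp -d)}"

cat > "${SHARED_DIR}/manifest_synthetic-workload.yml" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: synthetic-workload
EOF

echo "wrote ${SHARED_DIR}/manifest_synthetic-workload.yml"
```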