Only contribute to resources defined in merge target #1129
Conversation
Fix devfile#1056 Signed-off-by: Andrew Obuchowicz <[email protected]>
…contribution test cases Part of devfile#1056 Signed-off-by: Andrew Obuchowicz <[email protected]>
Force-pushed from df84301 to 1b45279.
Signed-off-by: Andrew Obuchowicz <[email protected]>
Force-pushed from 1b45279 to 480b131.
Codecov Report — patch coverage details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1129      +/-   ##
==========================================
- Coverage   51.60%   51.57%   -0.04%
==========================================
  Files          79       80       +1
  Lines        7299     7271      -28
==========================================
- Hits         3767     3750      -17
+ Misses       3244     3238       -6
+ Partials      288      283       -5
```

☔ View full report in Codecov by Sentry.
amisevsk left a comment:
LGTM 👍
Thinking out loud here, but I am trying to consider any risk that this PR could introduce.
Not setting requests can have severe consequences. The scheduler cannot figure out how much memory and CPU a Pod will use, and may provision workloads on nodes that are already out of memory or CPU.
The developer who writes the devfile is probably not going to specify a request at first. They will specify the image and maybe some environment variables, but memory is something we do not require, and it will be missing from most devfiles.
This is a problem we have seen recently on a customer cluster. They had no clue why it was happening, but workspaces got scheduled on a node that was out of resources, and Dev Spaces became unstable: developers often got errors when trying to start workspaces.
That said, although this PR may exacerbate the problem, I think we are doing the right thing from a DevWorkspace Operator point of view. "Fixing" devfiles that do not have requests set is more of a Che problem (a devworkspace-generator problem), and we should address it there.
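The missing-requests scenario described above might look like the following devfile fragment. This is a hypothetical illustration, not a sample from the repository; the image name and environment variable are made up:

```yaml
# Hypothetical devfile component: the author sets an image and an env var but
# no memoryRequest/cpuRequest, so the scheduler has nothing to account for.
schemaVersion: 2.2.0
components:
  - name: tools
    container:
      image: quay.io/example/dev-tools:latest   # made-up image
      env:
        - name: GOPATH
          value: /projects
      # memoryRequest / cpuRequest intentionally absent
```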
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: amisevsk, AObuchow, l0rd. The full list of commands accepted by this bot can be found here; the pull request process is described here.
@l0rd you raise an important point, and I think there is room for improvement here in the future. Prior to this PR, there were two competing elements. The first is the default limits/requests used for containers in DWO:

```yaml
resources:
  limits:
    memory: 128M
  requests:
    memory: 64M
```

The second is the issue being fixed here: we were adding contribution limits/requests even when there was nothing to add them to (i.e. we were interpreting an unset CPU limit as "0" CPU, and adding to that). With this PR, we've addressed one side of the issue (contributions can restrict resources unexpectedly), but we should still address the other side (defaults can restrict resources semi-unexpectedly). I think one way to improve what we've got here would be

I'll try to put together a quick PR to test this as a solution and we can discuss the approach there.
This sounds like a good idea to me 👍 If you get caught up with other tasks and want me to give this a try, let me know :)
What does this PR do?
This PR modifies the container contribution functionality for resource requirements.
With this change, the merge target's resource requirements will be summed with contribution resource requirements only for resource limits & requests that are already set in the merge target. For example, if the merge target does not set a CPU request and a container contribution does set a CPU request, the resulting CPU request of the merge target will remain unset.
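The merge rule described above can be sketched as follows. This is a simplified illustration, not the PR's actual code: it models resource values as `*int64` where `nil` means "unset", whereas the real implementation operates on `corev1.ResourceRequirements` in `/pkg/library/resources/`:

```go
package main

import "fmt"

// mergeResource sums a contribution value into the merge target only when the
// target already defines that resource; nil models "unset". Sketch only --
// the actual implementation works on corev1.ResourceRequirements.
func mergeResource(target, contribution *int64) *int64 {
	if target == nil {
		// The merge target does not set this resource: leave it unset
		// instead of treating it as 0 and adding the contribution to it.
		return nil
	}
	if contribution == nil {
		return target
	}
	sum := *target + *contribution
	return &sum
}

func main() {
	q := func(v int64) *int64 { return &v }

	// Target sets a memory limit (128) and the contribution adds 512.
	fmt.Println(*mergeResource(q(128), q(512)))

	// Target leaves its CPU request unset, so the contribution's CPU
	// request is ignored and the result stays unset.
	fmt.Println(mergeResource(nil, q(1000)) == nil)
}
```

Before this PR, the `target == nil` branch effectively returned the contribution value, which is how an unset limit in the merge target ended up being restricted by a contribution.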
This PR also moves functions related to `corev1.ResourceRequirements` into `/pkg/library/resources/`.

What issues does this PR fix or reference?
Fix #1056
Is it tested? How?
I added an automated test for all 4 types of container contributions (normal, spec, implicit and implicit-spec) which verifies that resource requirements are only summed if they are defined in the merge target component.
Additionally, the Che-Code container contribution test cases had to be modified by removing the expected CPU limit, CPU request and memory request, as these are no longer contributed by the container contributions (since the `tools` container only defines a memory limit).

Verify #1056 is fixed:
- `kubectl apply -f ./samples/code-latest.yaml -n $NAMESPACE`
- Describe the `code-latest` workspace (`kubectl describe code-latest -n $NAMESPACE`) and ensure the `dev` container component does not have its CPU Request, CPU Limit, Memory Request or Memory Limit set.
- `/test v8-devworkspace-operator-e2e, v8-che-happy-path` to trigger:
  - v8-devworkspace-operator-e2e: DevWorkspace e2e test
  - v8-che-happy-path: Happy path for verification integration with Che
kubectl apply -f ./samples/code-latest.yaml -n $NAMESPACEcode-latestworkspace (kubectl describe code-latest -n $NAMESPACE) and ensure thedevcontainer component does not have its CPU Request, CPU Limit, Memory Request or Memory Limit set./test v8-devworkspace-operator-e2e, v8-che-happy-pathto trigger)v8-devworkspace-operator-e2e: DevWorkspace e2e testv8-che-happy-path: Happy path for verification integration with Che