This repository has been archived by the owner on Jun 4, 2021. It is now read-only.

Removing limits on Kafka controller #462

Merged: 1 commit merged into knative:master on Jun 12, 2019

Conversation

@matzew (Member) commented Jun 12, 2019

Fixes #457 (for Kafka). Thanks @nicolaferraro for filing the (generic) issue
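For reference, a controller Deployment typically carries a Kubernetes `resources` stanza like the sketch below; the container name and values are illustrative placeholders, not the actual manifest in this repository. Removing the limits means the pod keeps its scheduling `requests` but no longer has a hard cap:

```yaml
# Hypothetical excerpt of a controller Deployment spec; names and values are
# placeholders, not the ones in this repository's config.
spec:
  template:
    spec:
      containers:
        - name: controller
          resources:
            requests:          # still used for scheduling decisions
              cpu: 100m
              memory: 100Mi
            # A limits block like the one below is the kind of hard cap being
            # removed; without it the container may use whatever the node has free.
            limits:
              cpu: 200m
              memory: 200Mi
```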

@googlebot added the cla: yes label (indicates the PR's author has signed the CLA) on Jun 12, 2019
@knative-prow-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: matzew

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot added the size/XS (changes 0-9 lines, ignoring generated files) and approved (approved by an approver from all required OWNERS files) labels on Jun 12, 2019
@duglin commented Jun 12, 2019

Why are we allowing pods to eat up as many resources as they want?
It would be better to just raise the limit instead, if needed.

@matzew (Member, Author) commented Jun 12, 2019

I'd think that if we add those, we need to have confidence in the actual values. I don't see value in blind guesses, i.e. setting some limits just to have some... See: https://bugzilla.redhat.com/show_bug.cgi?id=1714183

@duglin commented Jun 12, 2019

Agreed on trying to pick smarter values, so let's grab the values used to fix that issue instead of just removing them.

@evankanderson (Member):

Limits (and requests) may also need to be dynamic based on cluster size/number of events in flight. Vertical Pod Autoscaling might be one way to set those limits dynamically.
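As a rough illustration of the Vertical Pod Autoscaler suggestion, the sketch below lets the autoscaler set requests and limits within explicit bounds; the API version, target Deployment name, and bounds are assumptions for the example, not part of this change:

```yaml
# Hypothetical VPA object; the target Deployment name is a placeholder.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: kafka-controller-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-ch-controller    # placeholder Deployment name
  updatePolicy:
    updateMode: "Auto"           # VPA may evict pods to apply new values
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 50m
          memory: 64Mi
        maxAllowed:
          cpu: "1"
          memory: 512Mi
```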

@evankanderson (Member):

/lgtm

@knative-prow-robot added the lgtm label (indicates that a PR is ready to be merged) on Jun 12, 2019
@knative-prow-robot merged commit 3513cfb into knative:master on Jun 12, 2019
@alanconway (Contributor):

What is it we are trying to limit?

a) Ensure enough resources to get to some "standard" steady state of operation.
b) Ensure enough resources to maintain some desired scale/size/load.
c) Ensure no more resources than the system designer is willing to give it.

We can do a) in theory, and it would be great if we provided scaling guides for people trying to figure out b) and c), but it's not trivial. We need to understand and measure the relationship between scale, load, and resources in our code and in our dependencies (Kafka libs, etc.). I'm not saying we shouldn't, but it is extra documentation, testing, and benchmarking, and it will impact the pace of innovation.

Successfully merging this pull request may close this issue: Camel controller manager sometimes does not start