New annotations for safety #241
Motivation

We recently had a deployment that pruned a Kubernetes web server Deployment. This was surprisingly easy to do with ERB. When we reverted the PR and re-deployed, the resource was re-created, but because we didn't hard-code the replica count it came back with only one replica. Here are two proposed features that would add some safety checks.

Features

@KnVerey
Comments
How would this work? Pruning happens when the template is not in the set, so we aren't going to see an annotation on something we mistakenly don't have. We're also just using the […]. Perhaps that could be made a feature of k8s-template-validator instead... CI time is probably more ideal than deploy time for catching things like that.
Would this be looked at during validation or verification? In other words, are you thinking we should: […]
I was thinking this would have to be done at run-time since we don't let CI talk to the cluster.
I was thinking this would add code that verifies there are at least n replicas desired and, if not, scales the deployment up to that amount. I'd lean towards putting this in the deploy phase, but could see it being a new step.
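For concreteness, a minimal sketch of that check — the `example.com/min-replicas` annotation name is invented, and it shells out to `kubectl` rather than using the gem's internals:

```ruby
# Hypothetical sketch (not actual kubernetes-deploy code): enforce a
# minimum replica count at deploy time, driven by an invented
# "example.com/min-replicas" annotation.
require "open3"
require "json"

def enforce_min_replicas(namespace, name)
  raw, status = Open3.capture2(
    "kubectl", "get", "deployment", name,
    "--namespace", namespace, "--output", "json"
  )
  raise "failed to fetch deployment #{name}" unless status.success?

  deployment = JSON.parse(raw)
  annotation = deployment.dig("metadata", "annotations", "example.com/min-replicas")
  return if annotation.nil? # no floor requested; nothing to do

  min = Integer(annotation)
  desired = deployment.dig("spec", "replicas") || 1 # k8s defaults omitted replicas to 1
  return if desired >= min

  # Scale up to the annotated floor instead of leaving a too-small deploy.
  success = system("kubectl", "scale", "deployment", name,
                   "--namespace", namespace, "--replicas", min.to_s)
  raise "failed to scale #{name} up to #{min}" unless success
end
```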
It doesn't really need to talk to the cluster though, does it? Fundamentally, the feature boils down to "make sure this template is in the set", which is perfectly doable locally. Honestly I think this is something that should be done ahead of time using an external list, and kubernetes-deploy should be able to assume that you actually want to deploy what you've given it.
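As a sketch of that local, pre-deploy check — the `required-resources.txt` allowlist, its `Kind/name` line format, and the `rendered/` directory of rendered templates are all assumptions for illustration:

```ruby
# Rough sketch of the "check locally against an external list" idea:
# fail if any resource named in the allowlist is missing from the
# rendered template set (i.e. would be pruned on deploy).
require "yaml"

required = File.readlines("required-resources.txt", chomp: true).reject(&:empty?)

present = Dir.glob("rendered/*.{yml,yaml}").flat_map do |path|
  YAML.load_stream(File.read(path)).compact.map do |doc|
    "#{doc['kind']}/#{doc.dig('metadata', 'name')}"
  end
end

missing = required - present
abort "Resources that would be pruned are missing: #{missing.join(', ')}" unless missing.empty?
```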
It applies to everything. The PR in question basically lets you operate the same as you usually would, but at a sub-namespace level. It's pretty cool, but I don't think it's relevant here.
Safety features feel like a no-brainer, but I'm still hesitant about this for some reason. One thing that feels a bit off is that it introduces a new kind of responsibility to the gem, basically a naive metric-less HPA, as you pointed out. Another is that there are already three ways I can think of to manage replicas: hardcoding them in the template, letting an HPA manage them, or managing them manually outside of deploys.
cc @klautcomputing @stefanmb any opinions on these features?
After giving it more thought I think you're right that […]. However, I still would like to hear what others think about […].
I think I'm missing some context:
Why doesn't hardcoding the replica count solve the issue? I'm not a fan of coercing user-requested values into sane defaults, because I think it masks other underlying issues.
We could hard-code the replicas into the templates, but that comes with its own downsides. The biggest is that it would take a deploy to scale, and a deploy is on the order of tens of minutes.
They're being managed manually, because we're still modifying the count frequently enough that doing it via a deploy would be painful. And I personally think we won't ever be able to use a custom metric for scaling. The minimum safe replica count would change over time, but even an out-of-date value would be a better default than 1.
Closing as won't implement.