Automatic deployment of Prometheus alerts #266
@jb3 Do you have an idea for how best to do this? Right now I'm not even sure.
Noting to self: we can set the configmap preferences to always query the apiserver for the latest changes, which would eliminate the propagation delay.
I lied: this is a kubelet option, not something we can set per configmap, so we will have to do some smart in-pod detection at Prometheus to confirm the reload has gone through. There is, however, always a timestamp in the mounted directory; we just need to keep checking it (probably with a recurring kubectl exec).
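A rough sketch of that timestamp check as a CI step, assuming the rules configmap is mounted at /etc/prometheus/rules in a pod named prometheus-0 in the monitoring namespace, the alert files live under prometheus/alerts/, and a stat binary is available in the container image (all of these are placeholders, not taken from this repo):

```yaml
# Hypothetical GitHub Actions step; names and paths are illustrative only.
- name: Apply alert rules and wait for the kubelet to sync them
  run: |
    # ..data is the symlink the kubelet swaps atomically when a configmap
    # volume is updated, so its timestamp moves forward on each sync.
    mount_ts() {
      kubectl exec -n monitoring prometheus-0 -- \
        stat -c %Y /etc/prometheus/rules/..data
    }
    old=$(mount_ts)
    kubectl apply -f prometheus/alerts/   # hypothetical rules location
    for attempt in $(seq 1 30); do
      [ "$(mount_ts)" -gt "$old" ] && exit 0
      sleep 10
    done
    echo "configmap change never reached the pod" >&2
    exit 1
```

This only confirms that the kubelet has delivered the new files; Prometheus still needs to be told to reload them, which is what the sidecar idea further down addresses.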
Another related issue for a potential future feature: kubernetes/kubernetes#22368 (open for seven years though, yikes!)
Can't we check the git diff when the CI runs and, if we find changed configmap files (identified by some rule or naming convention), apply them?
> Can't we check the git diff when the CI runs and, if we find changed configmap files (identified by some rule or naming convention), apply them?
This is a good idea, but from my understanding the issue was that we don't really know when Kubernetes has rolled out the configmaps. We could simply sleep for 10 seconds and then apply it; if eventual consistency isn't consistent within 10 seconds, then I guess I'm done.
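A sketch of that approach, assuming the alert configmaps live under a hypothetical prometheus/alerts/ directory and that cluster credentials are configured in an earlier step:

```yaml
# Hypothetical GitHub Actions steps; the paths and the kubeconfig setup are
# assumptions about this repository, not taken from it.
- uses: actions/checkout@v4
  with:
    fetch-depth: 0   # full history so we can diff against the previous commit
- name: Apply changed alert configmaps
  run: |
    changed=$(git diff --name-only "${{ github.event.before }}" "${{ github.sha }}" \
      -- 'prometheus/alerts/*.yaml')
    for f in $changed; do
      kubectl apply -f "$f"
    done
    # Give the kubelet time to sync the mounted configmap; as noted in the
    # next comment, this is a hope rather than a guarantee.
    sleep 10
```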
Unfortunately the settling of configmap updates cannot be guaranteed on live pods during that window; from memory it's a scheduled sync job on the kubelet. The usual Kubernetes solution is to have a sidecar container running something like inotify (or whatever the modern equivalent is) and, upon detecting a change, calling out via the HTTP management API to Prometheus or (I think) sending a signal to the process. I can't remember if sidecars share the same process namespace; I'll investigate this one later today.
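A minimal sketch of such a sidecar, assuming the rules configmap is mounted at /etc/prometheus/rules and Prometheus is started with --web.enable-lifecycle so that POST /-/reload is accepted (the container name, image, and volume name are illustrative):

```yaml
# Illustrative sidecar for the Prometheus pod spec; none of these names come
# from this repository.
- name: rules-reloader
  image: alpine:3.19   # any small image that can provide inotify-tools and curl
  command:
    - /bin/sh
    - -c
    - |
      apk add --no-cache inotify-tools curl
      # The kubelet swaps the ..data symlink atomically when the configmap
      # changes, which shows up as create/move events in the mount directory.
      while inotifywait -e create -e moved_to /etc/prometheus/rules; do
        # Containers in a pod share the network namespace, so localhost reaches
        # Prometheus without needing a shared process namespace or signals.
        curl -fsS -X POST http://localhost:9090/-/reload || echo "reload request failed" >&2
      done
  volumeMounts:
    - name: prometheus-rules   # hypothetical volume holding the alerts configmap
      mountPath: /etc/prometheus/rules
      readOnly: true
```

If signalling the process turns out to be preferable, shareProcessNamespace: true in the pod spec would let a sidecar send SIGHUP instead, but the HTTP route sidesteps that question entirely.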
This is a sound idea; inotifywait in a bash script should be sufficient. I do think that containers in the same pod share the same process namespace; if not, maybe we can configure it, and failing that we can fall back to the HTTP management API, though we need to make sure that is locked down externally. However, with automated reloads like this we should ensure we have an alert in case of config reload failures. We do not have this yet, do we?
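For reference, Prometheus exposes prometheus_config_last_reload_successful about itself, so such an alert could look roughly like the following (the group and alert names, severity label, and 5m hold are illustrative):

```yaml
# Sketch of a reload-failure alert; only the metric name is guaranteed to exist.
groups:
  - name: prometheus-meta
    rules:
      - alert: PrometheusConfigReloadFailed
        # This gauge is 0 when the most recent reload of the configuration
        # (including rule files) failed.
        expr: prometheus_config_last_reload_successful == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Prometheus failed to reload its configuration"
```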
We should be able to add an alert for this, yes; I'll include it when I PR this feature in.
Right now, changes to our Prometheus alerts need to be deployed manually. We should incorporate a deployment for this into GitHub Actions on the main branch, so that any changes are automatically rolled out without requiring knowledge of the local setup.
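A possible shape for that workflow, with the trigger scoped to the main branch (the path filter and rules location are guesses about the repository layout, and credential setup is omitted):

```yaml
# Sketch only; the paths and the credential setup are assumptions.
name: Deploy Prometheus alerts
on:
  push:
    branches: [main]
    paths:
      - "prometheus/alerts/**"   # hypothetical location of the alert rule configmaps
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # A real workflow would configure cluster credentials here, e.g. from a
      # kubeconfig stored in repository secrets.
      - name: Apply alert configmaps
        run: kubectl apply -f prometheus/alerts/
```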