[kiali] use kiali operator to install and manage kiali #556
Looks good from an install standpoint; will need to get feedback from people familiar with the operator as well, and to help solve the issues you ran into.
I think the CRDs probably should be in the base, but I would wait for confirmation.
Thanks for working on this!
Is the expectation that people will use fullSpec and the old stuff is around for backwards compatibility, or as a "break glass" mechanism?
```yaml
    ansible.operator-sdk/reconcile-period: "0s"
spec:
{{- if .Values.kiali.fullSpec }}
{{ toYaml .Values.kiali.fullSpec | indent 2 }}
```
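As a sketch of how this pass-through might be consumed, a user's values.yaml could carry the entire Kiali CR spec under `kiali.fullSpec`. (The fields shown under `fullSpec` are illustrative Kiali CR settings, not something defined by this PR.)

```yaml
kiali:
  # Everything under fullSpec is copied verbatim into the Kiali CR's spec,
  # so no per-setting plumbing is needed in the chart templates.
  fullSpec:
    deployment:
      namespace: istio-system
    auth:
      strategy: login
```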
this is a nice implementation
I was considering alternatives, but I think it would be easiest if we just recommend people use "fullSpec" and put the Kiali CR yaml right here (thus eventually making the kialicr.yaml template very small and simple). Otherwise, we are going to be forced to duplicate settings in values.yaml and pass those settings individually to the kialicr.yaml template (as this PR is doing in order to maintain backward compat with the settings we have today - making kialicr.yaml more complex). This becomes a maintenance problem because it means values.yaml has to track and keep up to date with any settings that the Kiali CR adds or changes. This is why I think we should recommend the user put Kiali CR yaml directly in values.yaml via fullSpec.
100% agree with you there. Would it make sense to just call this
We could do that, yes. No reason why it has to be called that.

I do want to bring up one issue that may be a problem with this fullSpec approach. Because fullSpec lives in values.yaml, it cannot contain Helm template expressions - values files are not run through the template engine. There are a few examples of this which I do not have an answer for. Kiali needs to be told where certain components live via the `istio_component_namespaces` setting:

```yaml
istio_component_namespaces:
  grafana: "{{ .Values.global.telemetryNamespace }}"
  pilot: "{{ .Values.global.configNamespace }}"
  prometheus: "{{ .Values.global.prometheusNamespace }}"
  tracing: "{{ .Values.global.telemetryNamespace }}"
```

The other place where this is a problem is where we need to specify the affinity settings:

```yaml
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
```

I do not believe we can ship with a values.yaml that only has fullSpec because of this. I believe this kind of problem can be worked around once this Helm feature is merged and available: helm/helm#6876
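To make the limitation concrete: Helm never runs its template engine over values files, so a template expression placed inside fullSpec would reach the Kiali CR as literal text rather than a resolved namespace. A hypothetical fragment illustrating the failure mode:

```yaml
kiali:
  fullSpec:
    istio_component_namespaces:
      # This stays as the literal string "{{ .Values.global.telemetryNamespace }}"
      # in the rendered CR -- values.yaml is plain YAML data, not a template,
      # so the expression is never evaluated.
      grafana: "{{ .Values.global.telemetryNamespace }}"
```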
We have a similar problem with the injection template. Since that is rendered by the sidecar injector rather than helm, we added a [...] for it. It's possible we don't need to support direct helm installation, and maybe the operator could add this functionality, but I think it's just calling out to helm anyways, so maybe not. I'll think about it some more - maybe there's another way around it.
This most likely has to do with the use of
This incorporates the changes for issue kiali/kiali#1903
(force-pushed from 031ebe9 to 3505879)
Right now, I'm blocked waiting on istio/operator#622 - I do not know all the knobs and dials to tweak to get an istio/operator build completed and pulled in for use with this PR (this PR cannot work on its own - it needs istio/operator changes because we are adding in a new kiali-operator component). Someone should write a document to help new developers on the project come up to speed on how to code in the istio/operator and istio/installer repos - right now, it is a mystery to me how one adds a new component and gets operator to be pulled into the installer for local testing.
@ostromart for operator.
@jmazzitelli you're not alone -> istio/istio#19146 (comment)
A few minor problems. Overall really great work! Keep at it. We definitely want to head in the direction of using operators for all of our third-party components and integrating the operator in a similar fashion (or maybe even simpler than this integration).
The perfect integration would be "here is the image, launch it as a pod" and CRs are responsible for the rest of the magic. Thoughts for longer term planning?
```yaml
    sidecar.istio.io/inject: "false"
    scheduler.alpha.kubernetes.io/critical-pod: ""
    prometheus.io/scrape: "true"
    prometheus.io/port: "8383"
```
Should this port not be 42422?
No, it is 8383. This is the metrics port that the base Ansible operator SDK opens to expose its own metrics -- see https://github.com/operator-framework/operator-sdk/blob/v0.9.x/pkg/ansible/run.go#L43
So these are operator metrics (the operator itself exposes its own metrics to monitor itself).
See: kiali/kiali#1561 (that issue documents a snippet of the metrics that prometheus will be able to scrape on the operator's port 8383).
```yaml
      name: runner
      env:
      - name: WATCH_NAMESPACE
        value: ""
```
Does this watch namespace wildcard watch everything? I'm not sure this is in line with Istio's security policy. @istio/wg-security-maintainers should provide feedback here.
This is the WATCH_NAMESPACE that allows the Kiali operator to watch all namespaces, i.e. the typical operator InstallMode of "AllNamespaces". It is how you can install a single operator and be able to install Kiali wherever a user has permission to (thus you need only have a single operator in your cluster). This is how we support multi-tenancy (at least it's how it works with Maistra).
You have a single operator (who is installed typically by a cluster admin). Then you have users who may or may not be cluster admin - but who have permission to create a Kiali CR in a namespace where Kiali is to be deployed. This way you can have one tenant owner put a Kiali in one namespace/control plane and have a second tenant (who has no permission to see that first control plane) install Kiali in a different namespace/control plane. This is typical operator use-case. See this for a more general description of what is going on: https://medium.com/@jmazzite/kubernetes-operators-a-very-brief-overview-270e75f3dfab
That said, if Istio does not want to support multi-tenancy in this way, we can put the operator in the same namespace as the control plane itself (e.g. istio-system), but this would be kinda weird putting the operator together with Kiali in the control plane namespace, and if you have multiple control planes you would then be required to install multiple operators (when you really shouldn't have to).
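The two deployment modes described above come down to the value of WATCH_NAMESPACE in the operator's Deployment. A minimal sketch of the env section (field paths abbreviated; the namespace name in the commented alternative is illustrative):

```yaml
env:
# AllNamespaces mode: the operator reconciles Kiali CRs cluster-wide,
# so a single operator can serve every control plane / tenant.
- name: WATCH_NAMESPACE
  value: ""
# Alternatively, pinning the operator to one namespace would scope it to
# that control plane only, forcing one operator per control plane:
# - name: WATCH_NAMESPACE
#   value: "istio-system"
```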
```yaml
tolerations: []
replicaCount: 1

createDemoSecret: false # When true, a secret will be created with a default username and password. Useful for demos.
```
We eventually want to get Istio to the state where everything is `true` - or turned on by default. As this extends the API, can you give some thought to a better name for line 11 that matches this goal?
This was a setting introduced a while ago - @linsun asked for this because at the time when users were installing the demo profile, we were still requiring them to create a secret and some people forgot or didn't know and were not able to access Kiali. So it was decided to create a demo secret for the demo profile to avoid that problem. See this issue for background and discussion about this: istio/istio#11244
So I don't know what we want to do here. We could remove it, and always require the user to create secrets - you can see where we tell people to do this in the Task doc here: https://istio.io/docs/tasks/observability/kiali/#create-a-secret
Or we could have a default authentication strategy that doesn't require a secret to contain credentials. In fact, Kiali does not require a secret by default if installing in OpenShift - the Kiali operator just sets the auth.strategy to "openshift" which is an oauth mechanism to simply allow the user to log in using his/her OpenShift login credentials. There is another auth.strategy called "anonymous" which doesn't require the user to use any credentials - if a user has the Kiali URL, the user will gain access to the Kiali UI (so that also does not require a secret and may be a good alternative for the demo profile).
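A Kiali CR using the no-secret alternative mentioned above might look like the following sketch (assuming the `kiali.io/v1alpha1` API group used by the Kiali operator; metadata values are illustrative):

```yaml
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  auth:
    # "anonymous" grants access to the Kiali UI without any credentials,
    # so no secret is required -- a possible fit for the demo profile.
    # On OpenShift, the operator instead defaults to the "openshift"
    # strategy, which delegates login to the cluster's oauth.
    strategy: anonymous
```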
See istio/operator#622 (comment) for the current state of affairs.
PR istio#548 for issue istio/istio#18819 by adding excludedWorkloads
@jmazzitelli Please add a make target with the steps to test the kiali operator and kiali. At a minimum, we need to test and ensure everything can be installed successfully.
/retest
@jmazzitelli: The following tests failed; say /retest to rerun them.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This stuff isn't ready yet. Waiting for the monorepo effort to complete this work.
(Work on this PR is on hold until the monorepo PR is merged; then we can merge the work done here and in PR istio/operator#622.)
This should supersede this PoC PR: #295 (that one is not complete, and I'm not sure it will work the way we want).
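For context, the two `--set` toggles described in this PR correspond to a values fragment roughly like this (a hypothetical layout mirroring the flags, not the chart's exact schema):

```yaml
kiali:
  enabled: true      # deploy Kiali itself (a Kiali CR is created for the operator to act on)
  operator:
    enabled: true    # deploy the kiali-operator; set false if one is already running,
                     # so istioctl doesn't try to install a second operator
```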
The gist of what is trying to be done here is:

- If `--set Values.kiali.operator.enabled=true`, the operator is to be deployed, with its WATCH_NAMESPACE set to `""`, thus allowing this one operator to install Kiali anywhere it finds a Kiali CR (this will support multi-tenancy if/when supported in the future).
- If `--set Values.kiali.enabled=true`, Kiali should be deployed. (Notice, you can install Kiali if the operator was already running - you just pass in `--set Values.kiali.operator.enabled=false` so istioctl doesn't try to install another one.)

Notice that this implementation allows a user to wholly configure the Kiali CR within values.yaml by specifying the `kiali.fullSpec` value. This allows a user to configure any setting within Kiali - see https://github.com/kiali/kiali/blob/master/operator/deploy/kiali/kiali_cr.yaml for all the settings available.

There is a problem with this PR, though. When I try to test it by passing in `--set installPackagePath=/git/istio.io/installer` (so it picks up my local changes), for some reason the `kiali-operator` namespace (the namespace where the kiali operator resources are to be deployed) gets immediately pruned after it is created. The `--verbose --logtostderr` output contains `namespace/kiali-operator pruned`, and I don't understand why that is happening.

There are a couple questions that need to be answered before we can use this PR: