
No data shown in Grafana Dashboards #468

Open

Loahrs opened this issue Mar 7, 2024 · 5 comments

Comments

@Loahrs

Loahrs commented Mar 7, 2024

I noticed that I don't see any data in my Grafana dashboards. I hoped this problem would be fixed after updating to the latest version of the chart (3.0.0 -> 3.3.0), but it has persisted ever since. All dashboards show no data.

I checked Grafana's settings and see that a Prometheus datasource is configured (http://pulsar-kube-prometheus-sta-prometheus.default:9090). If I click "Test" to test the connection, I receive "Successfully queried the Prometheus API".

After that I opened the Prometheus UI and checked the configuration under http://prometheus-address:9090/config. In it I see a bunch of jobs related to Pulsar:

job_name: podMonitor/default/pulsar-zookeeper/0
job_name: podMonitor/default/pulsar-proxy/0
job_name: podMonitor/default/pulsar-broker/0
job_name: podMonitor/default/pulsar-bookie/0

Looking at the Metrics Explorer, I can't see any Pulsar-related metrics.
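
For reference, a quick way to check whether Prometheus ever scraped those targets (a sketch; the up metric is built into Prometheus, and the job label values are taken from the config listing above) is to run this query in the Prometheus UI:

up{job=~"podMonitor/default/pulsar-.*"}

If that query returns no series, the targets were never discovered in the first place.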

Here is my values.yaml:

clusterName: cluster-a
namespace: pulsar
namespaceCreate: false
initialize: false

auth:
  authentication:
    enabled: true
    jwt:
      usingSecretKey: false
    provider: jwt
  authorization:
    enabled: true
  superUsers:
    broker: broker-admin
    client: admin
    proxy: proxy-admin

broker:
  configData:
    proxyRoles: proxy-admin

certs:
  internal_issuer:
    enabled: true
    type: selfsigning

components:
  pulsar_manager: false

tls:
  broker:
    enabled: true
  enabled: true
  proxy:
    enabled: true
  zookeeper:
    enabled: true

I installed the Pulsar Helm chart into the "pulsar" namespace and noticed that all Grafana-stack-related components were installed into the "default" namespace.

Could this be the issue?
I also enabled authentication/authorization; maybe the problem has to do with that?
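
A quick way to check where the PodMonitor resources ended up (a sketch; it assumes the prometheus-operator CRDs are installed, which the podMonitor/... job names above suggest):

kubectl get podmonitors --all-namespaces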

@lhotari
Member

lhotari commented Mar 27, 2024

This might be caused by the configured authentication. I guess the metrics endpoint currently requires a token.

@lhotari
Member

lhotari commented Mar 27, 2024

For the broker, authenticateMetricsEndpoint defaults to false, so it might be something else.
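
For reference, the setting could also be pinned explicitly through the chart's broker.configData (a sketch; authenticateMetricsEndpoint is the broker setting named above, and broker.configData already appears in the values.yaml from the report):

broker:
  configData:
    authenticateMetricsEndpoint: "false"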

@lerodu

lerodu commented Dec 28, 2024

I was having the same issue. I could resolve it by installing the Helm chart into the 'default' namespace rather than 'pulsar'.

@lhotari
Member

lhotari commented Dec 28, 2024

> I was having the same issue. I could resolve it by installing the Helm chart into the 'default' namespace rather than 'pulsar'.

Thanks, @lerodu. Most likely this could be resolved by configuring kube-prometheus-stack.prometheus.prometheusSpec.podMonitorNamespaceSelector (docs) in values.yaml.

Something like this:

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      podMonitorNamespaceSelector:
        matchLabels: {}
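
An empty matchLabels selector matches every namespace, so the PodMonitors the Pulsar chart creates in the pulsar namespace should then be discovered as well.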

@lhotari
Member

lhotari commented Dec 28, 2024

It might actually be related to this: https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md#monitoring-additional-namespaces

> In order to monitor additional namespaces, the Prometheus server requires the appropriate Role and RoleBinding to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via $.values.namespace.

Also mentioned at https://prometheus-operator.dev/kube-prometheus/kube/monitoring-other-namespaces/
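
Following that doc, the missing piece would be a Role and RoleBinding granting the Prometheus service account discovery access in the pulsar namespace. A minimal sketch, assuming the kube-prometheus-stack release runs in default; the service account name is an assumption derived from the datasource URL above, so verify it with kubectl get sa in that namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-pulsar-discovery
  namespace: pulsar  # the namespace to be monitored
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-pulsar-discovery
  namespace: pulsar
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-pulsar-discovery
subjects:
- kind: ServiceAccount
  name: pulsar-kube-prometheus-sta-prometheus  # assumption: check the actual SA name
  namespace: default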
