
add scrape config for Service Catalog controller #18694

Closed
jboyd01 wants to merge 1 commit from the add-catalog-to-prometheus branch

Conversation

jboyd01 (Contributor) commented Feb 21, 2018

new scrape configuration for pulling metrics from Service Catalog
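
Roughly, the new job discovers pods and keeps only the Service Catalog controller-manager via relabeling. A sketch of the initial shape, pieced together from the fragments quoted in the review below (the keep regex is an assumption):

      # Sketch only: pod discovery plus a keep rule on namespace and pod name.
      - job_name: 'openshift-service-catalog'
        scheme: http

        kubernetes_sd_configs:
        - role: pod

        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_name]
          action: keep
          regex: kube-service-catalog;controller-manager-(.+)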

@openshift-ci-robot openshift-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Feb 21, 2018
@jboyd01 jboyd01 force-pushed the add-catalog-to-prometheus branch from 675b180 to c98a943 Compare February 21, 2018 15:27
@openshift-ci-robot openshift-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Feb 21, 2018
jboyd01 (Contributor Author) commented Feb 21, 2018

/retest

1 similar comment
jboyd01 (Contributor Author) commented Feb 21, 2018

/retest

jboyd01 (Contributor Author) commented Feb 22, 2018

/test extended_clusterup

jboyd01 (Contributor Author) commented Feb 26, 2018

@mfojtik, are you the appropriate person to review this? This change enables Prometheus to pull metrics from the Service Catalog.

pmorie (Contributor) commented Feb 28, 2018

@ironcladlou, can you review this?

jeremyeder (Contributor) commented:

Out of curiosity, is there a place to see sample output of a single scrape?

ironcladlou (Contributor) commented:

Nothing stands out to me as wrong here, but @jeremyeder or @zgalor might be better positioned to review this config in the context of our current Prometheus deployment, particularly the labelling.

jboyd01 (Contributor Author) commented Feb 28, 2018

@jeremyeder A limited set from the Catalog:

# TYPE servicecatalog_broker_service_class_count gauge
servicecatalog_broker_service_class_count{broker="ups-broker"} 2
# HELP servicecatalog_broker_service_plan_count Number of services classes by Broker.
# TYPE servicecatalog_broker_service_plan_count gauge
servicecatalog_broker_service_plan_count{broker="ups-broker"} 3
# HELP servicecatalog_osb_request_count Cumulative number of HTTP requests from the OSB Client to the specified Service Broker grouped by broker name, broker method, and response status.
# TYPE servicecatalog_osb_request_count counter
servicecatalog_osb_request_count{broker="ups-broker",method="Bind",status="2xx"} 8
servicecatalog_osb_request_count{broker="ups-broker",method="DeprovisionInstance",status="2xx"} 2
servicecatalog_osb_request_count{broker="ups-broker",method="GetCatalog",status="2xx"} 1
servicecatalog_osb_request_count{broker="ups-broker",method="ProvisionInstance",status="2xx"} 3
servicecatalog_osb_request_count{broker="ups-broker",method="Unbind",status="2xx"} 2

and another 40 or so from Go and Process info:

# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="7.776e+06"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1.5552e+07"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="3.1104e+07"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="+Inf"} 0
apiserver_client_certificate_expiration_seconds_sum 0
apiserver_client_certificate_expiration_seconds_count 0
# HELP etcd_helper_cache_entry_count Counter of etcd helper cache entries. This can be different from etcd_helper_cache_miss_count because two concurrent threads can miss the cache and generate the same entry twice.
# TYPE etcd_helper_cache_entry_count counter
etcd_helper_cache_entry_count 0
# HELP etcd_helper_cache_hit_count Counter of etcd helper cache hits.
# TYPE etcd_helper_cache_hit_count counter
etcd_helper_cache_hit_count 0
# HELP etcd_helper_cache_miss_count Counter of etcd helper cache miss.
# TYPE etcd_helper_cache_miss_count counter
etcd_helper_cache_miss_count 0
# HELP etcd_request_cache_add_latencies_summary Latency in microseconds of adding an object to etcd cache
# TYPE etcd_request_cache_add_latencies_summary summary
etcd_request_cache_add_latencies_summary{quantile="0.5"} NaN
etcd_request_cache_add_latencies_summary{quantile="0.9"} NaN
etcd_request_cache_add_latencies_summary{quantile="0.99"} NaN
etcd_request_cache_add_latencies_summary_sum 0
etcd_request_cache_add_latencies_summary_count 0
# HELP etcd_request_cache_get_latencies_summary Latency in microseconds of getting an object from etcd cache
# TYPE etcd_request_cache_get_latencies_summary summary
etcd_request_cache_get_latencies_summary{quantile="0.5"} NaN
etcd_request_cache_get_latencies_summary{quantile="0.9"} NaN
etcd_request_cache_get_latencies_summary{quantile="0.99"} NaN
etcd_request_cache_get_latencies_summary_sum 0
etcd_request_cache_get_latencies_summary_count 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.9041e-05
go_gc_duration_seconds{quantile="0.25"} 8.935e-05
go_gc_duration_seconds{quantile="0.5"} 0.000188204
go_gc_duration_seconds{quantile="0.75"} 0.000834593
go_gc_duration_seconds{quantile="1"} 0.099981701
go_gc_duration_seconds_sum 0.655398983
go_gc_duration_seconds_count 31
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 136
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 5.30864e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 8.5111864e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.48075e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 566686
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0.0043100558221369776
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 651264
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 5.30864e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.4816e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 8.380416e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 26443
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.1862016e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.5198505693627717e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 695
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 593129
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 13888
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 140448
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 180224
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.0520256e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 2.175178e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.769472e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.769472e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.8135288e+07
# HELP go_threads Number of OS threads created
# TYPE go_threads gauge
go_threads 17
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1.89
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 23
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.5088e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.51985030164e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 8.82532352e+08

And that is why I'm thinking about dropping the Go and process stats (i.e., per our email). And yes, it's cluster-wide; only a single Catalog Controller Manager is ever actively answering scrapes.

jeremyeder (Contributor) commented:

Personally I haven't used those Go stats, and I haven't heard of anyone else using them (so, up to you). Process stats we already get from cAdvisor as long as the process runs in a systemd unit or as a pod.

jboyd01 (Contributor Author) commented Feb 28, 2018

Yes, it's run within a pod. I'll make the changes upstream to remove both.
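
For completeness, these series could also be dropped on the Prometheus side with metric_relabel_configs instead of changing the exporter; a sketch, not what this PR does:

      metric_relabel_configs:
      # Drop the Go runtime and process series at scrape time.
      - source_labels: [__name__]
        action: drop
        regex: '(go|process)_.*'

Removing them upstream seems cleaner, though, since it spares every consumer the extra series.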

jeremyeder (Contributor) commented:

One thing that comes to mind: will it ever return non-2xx codes? Non-2xx responses are something we could not only alert on but also trend over time and across Service Broker versions.

I ask because I didn't see 4xx or 5xx in your example.

jboyd01 (Contributor Author) commented Feb 28, 2018

My sample metrics are too simple. The servicecatalog_osb_request_count metric dynamically groups counts into 2xx, 3xx, 4xx, and 5xx buckets: https://github.com/kubernetes-incubator/service-catalog/blob/master/pkg/metrics/osbclientproxy/osbproxy.go#L159-L170

We should be able to alert on these; that is certainly one of my upcoming tasks.
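
Something along these lines should work once we get to alerting; a sketch only, with an assumed rule name and threshold:

      # Illustrative rule: fire when OSB client calls keep returning 4xx/5xx.
      groups:
      - name: service-catalog
        rules:
        - alert: ServiceCatalogOSBClientErrors
          expr: sum by (broker, method) (rate(servicecatalog_osb_request_count{status=~"4xx|5xx"}[5m])) > 0
          for: 10m
          annotations:
            summary: "OSB requests to {{ $labels.broker }} ({{ $labels.method }}) are failing"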

The following review comment is on these lines of the scrape config:

- role: pod

relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_name]
A reviewer (Contributor) commented:

Since you're only looking for pods in the kube-service-catalog namespace, it would be better to say so up front in the kubernetes_sd_configs section rather than use relabeling:

      - job_name: 'openshift-service-catalog'
        scheme: http

        kubernetes_sd_configs:
        - role: pod
          namespaces:
            names:
            - kube-service-catalog

        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_name]
          action: keep
          regex: controller-manager-(.+)

I've got #17683 pending for fixing the existing jobs, but it needs a rebase.

jboyd01 (Contributor Author) replied:

Great, that makes sense; thanks for the details. I've updated the config accordingly.

@jboyd01 jboyd01 force-pushed the add-catalog-to-prometheus branch from c98a943 to 1ef4fb1 Compare March 1, 2018 14:16
openshift-ci-robot commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jboyd01
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: mfojtik

Assign the PR to them by writing /assign @mfojtik in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jboyd01 jboyd01 force-pushed the add-catalog-to-prometheus branch from 1ef4fb1 to 40bc1bd Compare March 1, 2018 14:17
jboyd01 (Contributor Author) commented Mar 1, 2018

/test extended_conformance_install

jboyd01 (Contributor Author) commented Mar 1, 2018

/test extended_image_ecosystem

jboyd01 (Contributor Author) commented Mar 2, 2018

@simonpasquier or @jeremyeder, all review comments have been addressed. Can I get a final review and, if all is good, a merge? Thanks!

@jboyd01 jboyd01 added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 7, 2018
jboyd01 (Contributor Author) commented Mar 7, 2018

It was pointed out that this exposes metrics over plain HTTP; the endpoint should instead be secured.
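
On the scrape side that would look roughly like the sketch below (the serving side needs the real work; the paths are the standard in-cluster service-account mounts, the rest is an assumption pending the follow-up):

      # Sketch only: scrape over TLS with the pod's service-account credentials.
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token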

openshift-bot (Contributor) commented:

@jboyd01: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot openshift-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 1, 2018
jboyd01 (Contributor Author) commented Apr 12, 2018

Closed in favor of #19286.

@jboyd01 jboyd01 closed this Apr 12, 2018
Labels: do-not-merge/hold, needs-rebase, size/M