
autoscale: init auto scale controller #3071

Draft · parth-gr wants to merge 1 commit into main from autoScalePrometheusScrapper

Conversation

parth-gr (Member) commented Mar 3, 2025

Add a Prometheus scraper to scrape the OSD used percentage.
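
Since this thread only shows the tail end of the controller registration (the `.Complete(r)` call quoted later in the diff context), here is a minimal, hypothetical sketch of what a controller-runtime setup for such an auto-scale reconciler could look like. The reconciler type name, the watched resource, and the requeue interval are illustrative assumptions, not the PR's actual code.

```go
// Sketch only: hypothetical controller-runtime wiring for an auto-scale
// reconciler. Type names, the watched resource, and the requeue interval
// are assumptions for illustration; the PR's real implementation is not
// shown in full in this thread.
package controllers

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type StorageAutoscalerReconciler struct {
	client.Client
}

func (r *StorageAutoscalerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Scrape the OSD usage from Prometheus here, decide whether scaling is
	// needed, then requeue so the check repeats periodically.
	return ctrl.Result{RequeueAfter: 10 * time.Minute}, nil
}

func (r *StorageAutoscalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Placeholder watched resource (ConfigMap); the real controller would
	// presumably watch the relevant storage CR instead.
	storageAutoscalerReconcilerController := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{})
	// The diff in this PR ends with an equivalent call:
	return storageAutoscalerReconcilerController.Complete(r)
}
```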

openshift-ci bot commented Mar 3, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: parth-gr
Once this PR has been reviewed and has the lgtm label, please assign obnoxxx for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

parth-gr marked this pull request as draft on March 3, 2025 15:17
openshift-ci bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Mar 3, 2025
parth-gr force-pushed the autoScalePrometheusScrapper branch 7 times, most recently from 2efe8e5 to 973e5f6 on March 4, 2025 12:11

parth-gr commented Mar 4, 2025

Output:

{"level":"info","ts":"2025-03-04T12:35:04Z","logger":"controllers.storageAutoScaling","msg":"scraped metrics","metrics":"{ceph_daemon=\"osd.0\", device_class=\"ssd\", endpoint=\"ceph-exporter-http-metrics\", hostname=\"paarorascaling-kgdhf-worker-1-sspgq\", instance=\"10.128.2.32:9926\", job=\"rook-ceph-exporter\", managedBy=\"ocs-storagecluster\", namespace=\"openshift-storage\", pod=\"rook-ceph-exporter-paarorascaling-kgdhf-worker-1-sspgq-5872fhdj\", service=\"rook-ceph-exporter\"} => 0.04315158843994141 @[1741091704.482]\n{ceph_daemon=\"osd.1\", device_class=\"ssd\", endpoint=\"ceph-exporter-http-metrics\", hostname=\"paarorascaling-kgdhf-worker-3-nbqb8\", instance=\"10.129.2.39:9926\", job=\"rook-ceph-exporter\", managedBy=\"ocs-storagecluster\", namespace=\"openshift-storage\", pod=\"rook-ceph-exporter-paarorascaling-kgdhf-worker-3-nbqb8-7c75cw2w\", service=\"rook-ceph-exporter\"} => 0.04315200805664063 @[1741091704.482]\n{ceph_daemon=\"osd.2\", device_class=\"ssd\", endpoint=\"ceph-exporter-http-metrics\", hostname=\"paarorascaling-kgdhf-worker-2-bjgn8\", instance=\"10.131.0.29:9926\", job=\"rook-ceph-exporter\", managedBy=\"ocs-storagecluster\", namespace=\"openshift-storage\", pod=\"rook-ceph-exporter-paarorascaling-kgdhf-worker-2-bjgn8-5d97dc28\", service=\"rook-ceph-exporter\"} => 0.04319168090820313 @[1741091704.482]"}

Review comment on the controller setup and RBAC marker in the diff:

```go
	return storageAutoscalerReconcilerController.Complete(r)
}

// +kubebuilder:rbac:groups="monitoring.coreos.com",resources=*,verbs=get;list;watch
```
parth-gr (Member, Author) commented on the RBAC marker:

@iamniting how can we set the Role? It is generating the ClusterRole.

Reply (Member):

We get cluster roles only.
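
For reference only, and with the caveat from the reply above that this project's generation currently yields ClusterRoles: controller-gen's RBAC markers accept a `namespace` parameter that scopes the rule to a namespaced Role, assuming the project's controller-tools version supports it. A sketch, with the namespace value borrowed from the log output above as an assumption:

```go
// Sketch only: if supported by the project's controller-gen/controller-tools
// version, adding namespace=<ns> to the RBAC marker scopes the generated
// rule to a Role in that namespace instead of a ClusterRole. The namespace
// value below is an assumption for illustration.
// +kubebuilder:rbac:groups="monitoring.coreos.com",resources=*,verbs=get;list;watch,namespace=openshift-storage
```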

parth-gr force-pushed the autoScalePrometheusScrapper branch 3 times, most recently from 7037b74 to 90275cb on March 6, 2025 11:08
Commit: add a prometheus scraper to scrape osd used percentage

Signed-off-by: parth-gr <[email protected]>
parth-gr force-pushed the autoScalePrometheusScrapper branch from 90275cb to a053f75 on March 6, 2025 12:20
Labels
do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress.
Projects
None yet
2 participants