kubeletstatsreceiver: Add ability to collect detailed data from PVC #743
Conversation
Depends on #690. Will rebase once that's merged.
Codecov Report
@@            Coverage Diff             @@
##           master     #743      +/-   ##
==========================================
+ Coverage   88.06%   88.13%   +0.06%
==========================================
  Files         233      233
  Lines       12359    12428      +69
==========================================
+ Hits        10884    10953      +69
  Misses       1120     1120
  Partials      355      355
Looks like it includes changes from the previous PR. Could you rebase, please?
Force-pushed from 1a46a26 to ebf3db4.
@@ -86,7 +91,7 @@ func (r *runnable) Run() error {
 	}

 	metadata := kubelet.NewMetadata(r.extraMetadataLabels, podsMetadata)
-	mds := kubelet.MetricsData(r.logger, summary, metadata, typeStr, r.metricGroupsToCollect)
+	mds := kubelet.MetricsData(r.logger, summary, metadata, typeStr, r.metricGroupsToCollect, r.getPersistentVolumeLabelsFromClaim)
Can't we add conditionally prefetched `volumeMetadata` to the `metadata` structure above instead of passing around `volumeClaimLabelsSetter`?
I guess we could do that instead. It also seems like doing that might make it cleaner to cache information about volumes. The `Metadata` struct would gather all the required metadata about volumes upfront and later map it back to the corresponding volumes while actually collecting the metrics. Will try that out and push an update.
Right now, as errors are encountered while collecting metadata about volumes, they're logged and metric collection for that specific volume is skipped. If the receiver were to collect all the metadata upfront, then to preserve the current behavior it would also need to track errors alongside the collected metadata so the exact error could still be logged. To avoid that, I moved the volume metadata setter method to the `Metadata` struct for now, but it's still invoked when the metrics are collected. Let me know what you think.
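
To make that arrangement concrete, here is a minimal Go sketch of the design described above; the type, field, and method names are illustrative assumptions, not the PR's actual code:

```go
// Sketch (assumed names): Metadata carries an optional callback that
// resolves detailed PVC labels at metric-collection time, so a failure
// can be logged and only that one volume skipped.
package kubelet

import "fmt"

// DetailedPVCLabelsSetter fills labels from the storage backing a PVC.
type DetailedPVCLabelsSetter func(volumeClaim, namespace string, labels map[string]string) error

type Metadata struct {
	// ... existing fields such as extra metadata labels and pods metadata ...
	DetailedPVCLabelsSetter DetailedPVCLabelsSetter
}

// setPVCLabels is invoked while building metrics for a PVC-backed volume.
// The caller logs any error and skips metrics for just this volume.
func (m *Metadata) setPVCLabels(volumeClaim, namespace string, labels map[string]string) error {
	if m.DetailedPVCLabelsSetter == nil {
		return nil // detailed PVC lookup not enabled
	}
	if err := m.DetailedPVCLabelsSetter(volumeClaim, namespace, labels); err != nil {
		return fmt.Errorf("failed to set labels from claim %s/%s: %w", namespace, volumeClaim, err)
	}
	return nil
}
```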
Force-pushed from 238ad2c to ce31ed0.
Overall, looks good
If `k8s_api_config` is set, the receiver will attempt to collect metadata from the underlying storage resources for Persistent Volume Claims. For example, if a Pod is using a PVC backed by an EBS instance on AWS, the receiver would set the `k8s.volume.type` label to `awsElasticBlockStore` rather than `persistentVolumeClaim`.
Should we put a couple more sentences describing the extra metadata labels that would be added from underlying storage resources? Based on this description it looks like only `k8s.volume.type` will be set.
Updated, added a note stating this.
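
For illustration, a collector configuration enabling this behavior might look like the sketch below. The option names (`auth_type`, `extra_metadata_labels`, `metric_groups`, `k8s_api_config`) are assumed from the receiver's documented configuration, and the endpoint value is a placeholder; treat this as a hedged example rather than a canonical config:

```yaml
receivers:
  kubeletstats:
    collection_interval: 10s
    auth_type: serviceAccount
    endpoint: ${K8S_NODE_NAME}:10250   # placeholder endpoint
    extra_metadata_labels:
      - k8s.volume.type                # opt in to volume type metadata
    metric_groups:
      - volume
    k8s_api_config:                    # enables the PVC -> PV lookup described above
      auth_type: serviceAccount
```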
Rebase to pass tests?
For pods using a Persistent Volume Claim, the Kubelet API only provides information about the claim and not the actual underlying physical resource. Add ability to optionally look up the underlying physical volume and collect labels accordingly.
Force-pushed from 9a24746 to bf15443.
Rebased, to re-trigger tests.
Signed-off-by: Bogdan Drutu <[email protected]>
* Name the BSP tests
* Add a drain wait group; use the stop wait group to avoid leaking a goroutine
* Lint & comments
* Fix
* Use defer/recover
* Restore the Add/Done...
* Restore the Add/Done...
* Consolidate select stmts
* Disable the test
* Lint
* Use better recover
Description: For pods using a Persistent Volume Claim, the Kubelet API only provides information about the claim and not the actual underlying physical resource. Add ability to optionally look up the underlying physical volume and collect labels accordingly.
Testing: Added tests.
Documentation: Updated README.
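
For context, the lookup the description refers to amounts to following a claim to its bound PersistentVolume via the Kubernetes API and deriving labels from the volume source. A minimal client-go sketch follows; the helper names and label keys are illustrative assumptions, not the PR's actual code:

```go
// Sketch: resolve a PVC to its bound PersistentVolume and derive labels
// from the underlying volume source.
package pvcmetadata

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func labelsFromPVC(ctx context.Context, client kubernetes.Interface, namespace, claimName string, labels map[string]string) error {
	pvc, err := client.CoreV1().PersistentVolumeClaims(namespace).Get(ctx, claimName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// spec.volumeName names the PV bound to this claim.
	pv, err := client.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	setLabelsFromSource(pv.Spec.PersistentVolumeSource, labels)
	return nil
}

func setLabelsFromSource(src v1.PersistentVolumeSource, labels map[string]string) {
	switch {
	case src.AWSElasticBlockStore != nil:
		labels["k8s.volume.type"] = "awsElasticBlockStore"
		labels["aws.volume.id"] = src.AWSElasticBlockStore.VolumeID // assumed label key
	case src.GCEPersistentDisk != nil:
		labels["k8s.volume.type"] = "gcePersistentDisk"
		labels["gce.pd.name"] = src.GCEPersistentDisk.PDName // assumed label key
	default:
		// Fall back to the generic claim type when the source is unrecognized.
		labels["k8s.volume.type"] = "persistentVolumeClaim"
	}
}
```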