must-gather: adds output for crds of managed services #1577
Conversation
/cc @nb-ohad
/cc @subhamkrai
@@ -73,3 +73,8 @@ for command_desc in "${commands_desc[@]}"; do
COMMAND_OUTPUT_FILE=${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output/desc_${command_desc// /_}
{ oc describe "${command_desc}"; } >> "${COMMAND_OUTPUT_FILE}"
done
# collect output for cephfilesystemsubvolumegroups.ceph.rook.io crd
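For context on the loop this hunk extends: each entry of `commands_desc` is expanded and the `oc describe` output is written to a file whose name replaces spaces with underscores. Below is a minimal sketch of that filename logic; the `oc` call is a stub that just echoes its arguments, since the real command needs a live cluster.

```shell
#!/usr/bin/env bash
# Sketch of the must-gather describe loop; `oc` is stubbed out so the
# filename sanitization can be exercised without a cluster.
oc() { echo "describe output for: $*"; }

BASE_COLLECTION_PATH=$(mktemp -d)
mkdir -p "${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output"

commands_desc=("storagecluster" "volumesnapshot -A")

for command_desc in "${commands_desc[@]}"; do
  # ${command_desc// /_} replaces every space with an underscore,
  # so "volumesnapshot -A" yields the file name desc_volumesnapshot_-A
  COMMAND_OUTPUT_FILE=${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output/desc_${command_desc// /_}
  { oc describe "${command_desc}"; } >> "${COMMAND_OUTPUT_FILE}"
done

ls "${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output"
```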
Q: why do we need to collect data for cephfilesystemsubvolumegroups separately? Shouldn't this be part of the above for loop?
It can be. But for me, oc get cephfilesystemsubvolumegroups.ceph.rook.io didn't work, and I saw that it's a CRD, hence I had to collect it separately.
I didn't get a new cluster to verify it, but it would be great if you could confirm the exact command for it.
We also had a discussion in gchats where Neha mentioned that, for her, oc get cephfilesystemsubvolumegroups.ceph.rook.io
is working fine.
We should not treat this as a separate case; can we add this with the other resources, like the filesystem CRD?
We also had a discussion in gchats where Neha mentioned that, for her, oc get cephfilesystemsubvolumegroups.ceph.rook.io is working fine.
You need to pass the namespace to the oc command to get the subvolumegroup that was created: oc get cephfilesystemsubvolumegroups.ceph.rook.io -n openshift-storage
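Since the subvolumegroup CR is namespaced, a gather script has to pass the namespace explicitly. A hedged sketch of that pattern follows; the `get_namespaced` helper is hypothetical, and `oc` is stubbed so the command construction can be checked without a cluster.

```shell
# `oc` stub for illustration only; a real run would hit the cluster.
oc() { echo "oc $*"; }

# Hypothetical helper: always scope the get to an explicit namespace.
# Without -n, namespaced resources resolve against the current
# context's namespace, which is why the bare command can look empty.
get_namespaced() {
  local resource=$1 ns=$2
  oc get "${resource}" -n "${ns}"
}

get_namespaced cephfilesystemsubvolumegroups.ceph.rook.io openshift-storage
# prints: oc get cephfilesystemsubvolumegroups.ceph.rook.io -n openshift-storage
```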
+1, it works for me. Agreed that we do not need to collect the CRD; just the definition of the CR won't help.
oc get cephfilesystemsubvolumegroups.ceph.rook.io -n openshift-storage
NAME AGE
cephfilesystemsubvolumegroup-storageconsumer-da3ce6e4-e148-4ddd-9d1f-3ebf84bee072 33m
You need to change the code to get the subvolumes and snapshots as well; the default is set to csi here:
svg="csi"
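As a sketch of the change being requested here: instead of hard-coding svg="csi", the script could iterate over every subvolumegroup name the cluster reports. The lookup below is stubbed with names taken from this thread; a real script would use something like oc get cephfilesystemsubvolumegroups.ceph.rook.io -o name instead.

```shell
# Stubbed lookup; a real script would query the cluster, for example:
#   oc get cephfilesystemsubvolumegroups.ceph.rook.io -n "$ns" -o name
list_svgs() {
  printf '%s\n' \
    "csi" \
    "cephfilesystemsubvolumegroup-storageconsumer-da3ce6e4-e148-4ddd-9d1f-3ebf84bee072"
}

# Iterate over all groups instead of only the hard-coded default.
while read -r svg; do
  echo "collecting subvolumes and snapshots for group: ${svg}"
done < <(list_svgs)
```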
@@ -73,3 +73,8 @@ for command_desc in "${commands_desc[@]}"; do
COMMAND_OUTPUT_FILE=${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output/desc_${command_desc// /_}
{ oc describe "${command_desc}"; } >> "${COMMAND_OUTPUT_FILE}"
done
# collect output for cephfilesystemsubvolumegroups.ceph.rook.io crd
We should not treat this as a separate case; can we add this with the other resources, like the filesystem CRD?
@Madhu-1 Yeah, that can be done, but it would be good to do it in a different PR. I will open an issue for the same.
Force-pushed d6d9afe to 2699f45 (compare)
@Madhu-1 @sp98 @nehaberry updated the PR with the changes. Waiting for the cluster to verify it.
@nehaberry @Madhu-1 can you revisit this PR?
@@ -100,6 +104,8 @@ done
# NOTE: This is a temporary fix for collecting the storagecluster as we are not able to collect the storagecluster using the inspect command
{ oc get storageclusters -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/storagecluster.yaml" 2>&1
{ oc get storagesystem -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/storagesystem.yaml" 2>&1
{ oc get storageconsumer -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/storageconsumer.yaml" 2>&1
{ oc get cephfilesystemsubvolumegroups.ceph.rook.io -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/cephfilesystemsubvolumegroups.ceph.rook.io.yaml" 2>&1
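The added lines reuse the existing redirect pattern { ...; } > file 2>&1, where 2>&1 comes after the file redirect so stderr is duplicated onto the already-redirected stdout and any oc error lands in the gathered file instead of the console. A minimal bash illustration of the ordering:

```shell
out=$(mktemp)
# `> "$out"` first points stdout at the file; `2>&1` then makes
# stderr follow stdout, so both streams end up in the same file.
{ echo "on stdout"; echo "on stderr" >&2; } > "$out" 2>&1
cat "$out"
# prints:
# on stdout
# on stderr
```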
cephfilesystemsubvolumegroups should be treated the same as cephblockpool (it's again a Rook-specific CRD). I see this is already collected in gather_ceph_resources.
Yes, will remove this part. In gather_ceph_resources only the yaml is getting collected, and as mentioned by @nehaberry we also need the details for oc get and desc of cephfilesystemsubvolumegroups.ceph.rook.io, hence collecting those as well.
This commit collects yamls and describe outputs of the new CRs created for ODF to ODF Managed services in 4.10. Signed-off-by: yati1998 <[email protected]>
Force-pushed 2699f45 to f0ff38d (compare)
@@ -60,6 +62,8 @@ commands_desc+=("storagecluster")
commands_desc+=("volumesnapshot -A")
commands_desc+=("volumesnapshotclass")
commands_desc+=("volumesnapshotcontent")
commands_desc+=("storageconsumer")
commands_desc+=("cephfilesystemsubvolumegroups.ceph.rook.io")
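One thing the array pattern in this hunk relies on: entries like "volumesnapshot -A" hold a resource and a flag in a single string, so whether they reach oc as one argument or two depends on quoting at expansion time. A small pure-bash sketch of the difference (no cluster needed; count_args is a hypothetical helper):

```shell
entry="volumesnapshot -A"

# Hypothetical helper that reports how many arguments it received.
count_args() { echo $#; }

# Quoted expansion keeps the string as ONE argument.
quoted=$(count_args "${entry}")

# Unquoted expansion word-splits it into resource + flag (TWO arguments).
unquoted=$(count_args ${entry})

echo "quoted=${quoted} unquoted=${unquoted}"
# prints: quoted=1 unquoted=2
```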
Need to remove it from here, and from line 55 as well? I don't see that we are doing the same for cephblockpool. Can you please confirm?
Yeah, we aren't, but as I mentioned, @nehaberry wants to collect output for the oc get and desc commands, which is not done by ceph_resources. Neha has mentioned the details in the bug description.
Do you think that is not required and we can remove that as well?
@nehaberry do we need to treat this one more specially than the other Ceph resources?
/retest
@yati1998: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
@agarwal-mudit: once the present PR merges, I will cherry-pick it on top of release-4.10 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: agarwal-mudit, yati1998 The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@agarwal-mudit: new pull request created: #1599 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This commit collects yamls and describe outputs of the
new CRs created for ODF to ODF Managed services in 4.10
Signed-off-by: yati1998 [email protected]