
must-gather: adds output for crds of managed services #1577

Merged

Conversation

yati1998
Member

@yati1998 yati1998 commented Mar 8, 2022

this commit collects the yamls and describe outputs of the
new CRs created for ODF to ODF Managed services in 4.10

Signed-off-by: yati1998 [email protected]

@agarwal-mudit
Member

/cc @nb-ohad
/cc @sp98
/cc @iamniting

@openshift-ci openshift-ci bot requested review from iamniting, nb-ohad and sp98 March 8, 2022 14:15
@iamniting
Member

iamniting commented Mar 9, 2022

/cc @subhamkrai

@@ -73,3 +73,8 @@ for command_desc in "${commands_desc[@]}"; do
COMMAND_OUTPUT_FILE=${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output/desc_${command_desc// /_}
{ oc describe "${command_desc}"; } >> "${COMMAND_OUTPUT_FILE}"
done

# collect output for cephfilesystemsubvolumegroups.ceph.rook.io crd
Contributor

Q: why do we need to collect data for cephfilesystemsubvolumegroups separately? Shouldn't this be part of the above for loop?

Member Author

It can be. But for me, oc get cephfilesystemsubvolumegroups.ceph.rook.io didn't work, and I saw that it's a CRD, hence I had to collect it separately.
I didn't get a new cluster to verify it, but it would be great if you could confirm the exact command for it.
We also had a discussion in gchats where Neha mentioned that for her, oc get cephfilesystemsubvolumegroups.ceph.rook.io is working fine.

Member

We should not treat this separately; can we add it along with other resources like the filesystem CRD?

Member

We also had a discussion in gchats where Neha mentioned that for her, oc get cephfilesystemsubvolumegroups.ceph.rook.io is working fine

You need to pass a namespace to the oc command to get the subvolumegroup that was created: oc get cephfilesystemsubvolumegroups.ceph.rook.io -n openshift-storage

Member

@nehaberry nehaberry Mar 10, 2022

+1. It works for me. Agreed that we do not need to collect the CRD; just the definition of the CR won't help.

oc get cephfilesystemsubvolumegroups.ceph.rook.io -n openshift-storage
NAME AGE
cephfilesystemsubvolumegroup-storageconsumer-da3ce6e4-e148-4ddd-9d1f-3ebf84bee072 33m

consumer

oc get cephfilesystemsubvolumegroups.ceph.rook.io -n openshift-storage
NAME AGE
cephfilesystemsubvolumegroup-storageconsumer-da3ce6e4-e148-4ddd-9d1f-3ebf84bee072 33m
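A minimal sketch, not the merged script, of capturing both the oc get and oc describe output for the namespaced subvolumegroup CRs as discussed above; BASE_COLLECTION_PATH and INSTALL_NAMESPACE mirror the variables used in the diff hunks, while the output file names are illustrative only:

ns="${INSTALL_NAMESPACE:-openshift-storage}"
out_dir="${BASE_COLLECTION_PATH:-/tmp/must-gather}/namespaces/${ns}/oc_output"
mkdir -p "${out_dir}"
# Namespaced query, as pointed out above; redirect stderr so failures are captured too.
{ oc get cephfilesystemsubvolumegroups.ceph.rook.io -n "${ns}"; } > "${out_dir}/get_cephfilesystemsubvolumegroups" 2>&1
{ oc describe cephfilesystemsubvolumegroups.ceph.rook.io -n "${ns}"; } > "${out_dir}/desc_cephfilesystemsubvolumegroups" 2>&1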

Member

@Madhu-1 Madhu-1 left a comment

You need to change the code to get the subvolumes and snapshots as well; the default is set to csi here.

As we have a new subvolumegroup, we need to loop through it also.
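A minimal sketch of that idea, assuming the ceph commands run through a rook-ceph-tools deployment and that the Ceph group name matches the CR name (both are assumptions for illustration, not the script's actual code):

ns=openshift-storage
fs=$(oc get cephfilesystems.ceph.rook.io -n "${ns}" -o jsonpath='{.items[0].metadata.name}')
# Loop over every CephFilesystemSubVolumeGroup CR instead of assuming the default "csi" group.
for svg in $(oc get cephfilesystemsubvolumegroups.ceph.rook.io -n "${ns}" -o jsonpath='{.items[*].metadata.name}'); do
    echo "--- subvolumes in group ${svg} ---"
    oc -n "${ns}" exec deploy/rook-ceph-tools -- ceph fs subvolume ls "${fs}" --group_name "${svg}"
    # Snapshots would be gathered the same way, per subvolume:
    #   ceph fs subvolume snapshot ls "${fs}" <subvolume> --group_name "${svg}"
done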

@@ -73,3 +73,8 @@ for command_desc in "${commands_desc[@]}"; do
COMMAND_OUTPUT_FILE=${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output/desc_${command_desc// /_}
{ oc describe "${command_desc}"; } >> "${COMMAND_OUTPUT_FILE}"
done

# collect output for cephfilesystemsubvolumegroups.ceph.rook.io crd
Member

We should not treat this separately; can we add it along with other resources like the filesystem CRD?

@yati1998
Member Author

You need to change the code to get the subvolumes and snapshots as well; the default is set to csi here.

As we have a new subvolumegroup, we need to loop through it also.

@Madhu-1 Yeah, that can be done, but it would be good to do it in a different PR. I will open an issue for the same.

@yati1998
Member Author

@Madhu-1 @sp98 @nehaberry updated the PR with the changes. Waiting for the cluster to verify it.

@yati1998 yati1998 requested review from sp98 and Madhu-1 March 11, 2022 09:03
@yati1998
Member Author

@nehaberry @Madhu-1 can you revisit this PR?
I have verified the code and all the required details are getting collected.
The details for cephclients and cephfilesystemsubvolumegroups are getting collected in ceph/namespaces/openshift-storage/ceph.rook.io/ and storageconsumer is collected in namespaces/openshift-storage/oc_output/.
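An illustrative spot-check for a run like that (the must-gather image reference below is a placeholder, not something from this PR):

oc adm must-gather --image=<ocs-must-gather-image> --dest-dir=./must-gather
ls ./must-gather/*/namespaces/openshift-storage/oc_output/
grep -rl cephfilesystemsubvolumegroups ./must-gather | head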

@yati1998
Member Author

You need to change the code to get the subvolumes and snapshots as well; the default is set to csi here.

As we have a new subvolumegroup, we need to loop through it also.

@Madhu-1 created a new issue for this: #1596

@@ -100,6 +104,8 @@ done
# NOTE: This is a temporary fix for collecting the storagecluster as we are not able to collect the storagecluster using the inspect command
{ oc get storageclusters -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/storagecluster.yaml" 2>&1
{ oc get storagesystem -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/storagesystem.yaml" 2>&1
{ oc get storageconsumer -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/storageconsumer.yaml" 2>&1
{ oc get cephfilesystemsubvolumegroups.ceph.rook.io -n ${INSTALL_NAMESPACE} -o yaml; } > "$BASE_COLLECTION_PATH/namespaces/${INSTALL_NAMESPACE}/oc_output/cephfilesystemsubvolumegroups.ceph.rook.io.yaml" 2>&1
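For reference, a hedged sketch of how the repeated one-liners in this hunk could be folded into a loop; this is not the PR's code, and the derived file names are a simplification (storageclusters.yaml rather than storagecluster.yaml):

yaml_resources=("storageclusters" "storagesystem" "storageconsumer" "cephfilesystemsubvolumegroups.ceph.rook.io")
for res in "${yaml_resources[@]}"; do
    # One YAML dump per resource, errors included via 2>&1, matching the pattern above.
    { oc get "${res}" -n "${INSTALL_NAMESPACE}" -o yaml; } > "${BASE_COLLECTION_PATH}/namespaces/${INSTALL_NAMESPACE}/oc_output/${res}.yaml" 2>&1
done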
Member

@Madhu-1 Madhu-1 Mar 22, 2022

cephfilesystemsubvolumegroups should be treated the same as cephblockpool (it's again a Rook-specific CRD). I see this is already collected at gather_ceph_resources.

Member Author

Yes, will remove this part. In gather_ceph_resources only the yaml is getting collected, and as mentioned by @nehaberry we also need details for oc get and describe of cephfilesystemsubvolumegroups.ceph.rook.io, hence collecting those as well.

this commit collects the yamls and describe outputs of the
new CRs created for ODF to ODF Managed services in 4.10

Signed-off-by: yati1998 <[email protected]>
@yati1998 yati1998 requested a review from Madhu-1 March 22, 2022 11:57
@@ -60,6 +62,8 @@ commands_desc+=("storagecluster")
commands_desc+=("volumesnapshot -A")
commands_desc+=("volumesnapshotclass")
commands_desc+=("volumesnapshotcontent")
commands_desc+=("storageconsumer")
commands_desc+=("cephfilesystemsubvolumegroups.ceph.rook.io")
Member

@Madhu-1 Madhu-1 Mar 22, 2022

Need to remove this from here and from line 55 as well? I don't see us doing the same for cephblockpool; can you please confirm?

Member Author

Yeah, we aren't, but as I mentioned, @nehaberry wants to collect output for the oc get and describe commands, which is not done by ceph_resources. Neha has mentioned the details in the bug description.
Do you think that is not required and we can remove that as well?

Member

@nehaberry do we need to treat this one as more special than the other Ceph resources?

@agarwal-mudit
Member

/retest

@openshift-ci
Contributor

openshift-ci bot commented Mar 23, 2022

@yati1998: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/red-hat-storage-ocs-ci-e2e-aws
Commit: f0ff38d
Details: link
Required: false
Rerun command: /test red-hat-storage-ocs-ci-e2e-aws

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@agarwal-mudit
Member

/lgtm
/approve
/cherry-pick release-4.10

@openshift-cherrypick-robot

@agarwal-mudit: once the present PR merges, I will cherry-pick it on top of release-4.10 in a new PR and assign it to you.

In response to this:

/lgtm
/approve
/cherry-pick release-4.10

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 23, 2022
@openshift-ci
Contributor

openshift-ci bot commented Mar 23, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: agarwal-mudit, yati1998

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 23, 2022
@openshift-merge-robot openshift-merge-robot merged commit 8627a31 into red-hat-storage:main Mar 23, 2022
@openshift-cherrypick-robot

@agarwal-mudit: new pull request created: #1599

In response to this:

/lgtm
/approve
/cherry-pick release-4.10

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
