
Where do we need to set the PV storage class? #77

Closed
kishore0709 opened this issue Apr 15, 2021 · 13 comments

@kishore0709

Hi Team,

Please let me know in which file we need to put this entry: "persistentVolume.dbStorageClass=your-topology-aware-storage-class-name".

Thanks!
Kishor.

@dwbrown2
Contributor

Hi Kishor, you can set this with these two fields:

persistentVolume.storageClass
prometheus.server.persistentVolume.storageClass

More info here: https://github.com/kubecost/cost-analyzer-helm-chart/blob/971a0079ff75e8397aa9627086a5a8fa49c21c56/README.md
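
In values.yaml terms, a minimal sketch of those two settings (the class name below is just a placeholder):

```yaml
# values.yaml (sketch; replace the class name with your own)
persistentVolume:
  storageClass: your-storage-class-name   # cost-analyzer PV

prometheus:
  server:
    persistentVolume:
      storageClass: your-storage-class-name   # bundled Prometheus server PV
```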

Does that help you?

@kishore0709
Author

Hi @dwbrown2

I have updated those two fields and pre-created the PV and PVC for Kubecost, but when I run the Kubecost Helm install it fails with the following error:

Could you please suggest how we can resolve this issue?

Error:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: PersistentVolumeClaim "kubecost-prometheus-server" in namespace "kubecost" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubecost"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubecost"
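
(The error above is Helm refusing to adopt a pre-existing PVC that lacks its ownership metadata. A minimal sketch of the labels and annotations it names, with the values taken directly from the error message, would be:)

```yaml
# Sketch: ownership metadata Helm expects on the pre-created PVC,
# values copied from the validation error above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubecost-prometheus-server
  namespace: kubecost
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: kubecost
    meta.helm.sh/release-namespace: kubecost
# spec: keep your existing claim spec (accessModes, storageClassName, resources) unchanged
```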

PV & PVC output:

[screenshot]

Persistent volume field:

[screenshots]

Sample PVC manifest:

[screenshot]

@dwbrown2
Contributor

@kishore0709 did you mean to close this issue?

@kishore0709
Author

kishore0709 commented Apr 16, 2021

Ah, sorry, that was by mistake.

Could you please suggest how we can resolve the issue I mentioned in my last comment?

Thanks!
Kishor.

kishore0709 reopened this Apr 16, 2021
@dwbrown2
Contributor

Are you trying to use an existing PV or create a new one? If you are using an existing one, then you shouldn't need to set storageClass. If you are trying to create a new one, then you can drop the existingClaim reference.
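
As a rough sketch of the two alternatives (assuming the chart exposes a persistentVolume.existingClaim key, as in your screenshot; the claim name is hypothetical):

```yaml
# Option 1: reuse a pre-created PVC (no storageClass needed)
persistentVolume:
  existingClaim: my-precreated-pvc   # hypothetical name of your pre-created claim

# Option 2: let the chart create a new PVC instead
# (drop existingClaim and set the class)
# persistentVolume:
#   storageClass: your-storage-class-name
```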

If that's not accurate, can you please provide your full values file?

@kishore0709
Author

@dwbrown2

I tried both ways (an existing PV as well as a new PV).
Note:
My storage class is EFS; do Kubecost and Prometheus support the EFS CSI driver?

1. Existing PV:

As mentioned in my previous comment, I created a PV & PVC for both the cost-analyzer and Prometheus pods,

and referenced that existing PVC in the values file (I commented out the storage class line as per your suggestion).

[screenshot]

But the Kubecost Helm install still fails with the same error:

Error:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: PersistentVolumeClaim "kubecost-prometheus-server" in namespace "kubecost" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubecost"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubecost"

Note: my PVC manifest file already has the required labels.

[screenshot]

2. New PVC:

When I try to create new PVCs along with the Kubecost deployment, the PVCs get bound to PVs and the pods try to start, but they stay in "ContainerCreating" status.

It is showing the following error:

Error:

22s Warning FailedMount pod/kubecost-prometheus-server-574c76dd-ph6x5 MountVolume.SetUp failed for volume "kubecost-cost-analyzer" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "fs-xxxxxx:/" at "/var/lib/kubelet/pods/6fe8b7a8-624e-4f71-8479-e093d96e39b0/volumes/kubernetes.io~csi/kubecost-cost-analyzer/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t efs -o tls fs-xxxxxx:/ /var/lib/kubelet/pods/6fe8b7a8-624e-4f71-8479-e093d96e39b0/volumes/kubernetes.io~csi/kubecost-cost-analyzer/mount
Output: Could not start amazon-efs-mount-watchdog, unrecognized init system "aws-efs-csi-dri"
mount.nfs4: Connection reset by peer

7s Warning FailedMount pod/kubecost-prometheus-server-574c76dd-ph6x5 Unable to attach or mount volumes: unmounted volumes=[storage-volume], unattached volumes=[config-volume kubecost-prometheus-server-token-jhpr2 storage-volume]: timed out waiting for the condition

@dwbrown2
Contributor

Can you please share the full set of values that you are supplying?

The kubecost-prometheus-server error is not related to the kubecost PV which you are trying to set in this example yaml.

@kishore0709
Author

@dwbrown2

I'm not able to send .yaml files here.

Below is the exact error I'm getting for kubecost-prometheus-server:

Error:

Output: Could not start amazon-efs-mount-watchdog, unrecognized init system "aws-efs-csi-dri"
mount.nfs4: Connection reset by peer
7s Warning FailedMount pod/kubecost-prometheus-server-574c76dd-ph6x5 Unable to attach or mount volumes: unmounted volumes=[storage-volume], unattached volumes=[config-volume kubecost-prometheus-server-token-jhpr2 storage-volume]: timed out waiting for the condition

@dwbrown2
Contributor

It's hard for me to say without seeing the values you are actually passing.

I found this issue, which may be related -- kubernetes-sigs/aws-efs-csi-driver#103

Are there already PVs or PVCs created in the namespace? If so, I would delete them.

@Adam-Stack-PM
Contributor

This issue has been marked as stale because it has not had recent activity. It will be closed if no further action occurs.

@pierluigilenoci

@dwbrown2 @kishore0709 any news about this?

@pierluigilenoci

@bstuder99, why was this closed?

@brstuder
Contributor

The original account that submitted the issue has been unresponsive, and after the issue was marked stale for several months with no additional activity, we are closing the thread. If you are having a similar problem, feel free to create a new issue and I will look it over when I can.
