
PVCs are restored in Lost state on NFS storage with CoreOS Tectonic K8s #355

Closed
sands6 opened this issue Mar 6, 2018 · 13 comments


sands6 commented Mar 6, 2018

When I try to restore an entire namespace containing a test busybox pod with a PV, all resources come back except the PVC, which returns in Lost state, and the pod is stuck in ContainerCreating status.


sands6 commented Mar 6, 2018

kubectl get pvc -n test
NAME     STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rename   Lost     pvc-a4b20104-215f-11e8-a815-080027399c33   0                          az1            5m


ncdc commented Mar 6, 2018

@sands6 could you please kubectl describe the PVC and the PV?


sands6 commented Mar 6, 2018

This is the status of the PV before the ark restore:

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS   REASON    AGE
pvc-538856a9-2164-11e8-a815-080027399c33   1Gi        RWO            Retain           Released   test/rename   az1                      5m

Once I do an ark restore for the test namespace:

kubectl describe pvc rename -n test
Name:          rename
Namespace:     test
StorageClass:  az1
Status:        Lost
Volume:        pvc-538856a9-2164-11e8-a815-080027399c33
Labels:        ark-restore=arktestns-20180306123835
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"d9678e89-2159-11e8-ba8e-be14ae36d595","leaseDurationSeconds":15,"acquireTime":"2018-03-06T17:32:21Z","renewTime":"2018-03-06T17:32:23Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-class=az1
               volume.beta.kubernetes.io/storage-provisioner=localdev/nfs
Finalizers:    []
Capacity:      0
Access Modes:  
Events:
  Type     Reason         Age   From                         Message
  ----     ------         ----  ----                         -------
  Warning  ClaimMisbound  23s   persistentvolume-controller  Two claims are bound to the same volume, this one is bound incorrectly

kubectl describe pv pvc-538856a9-2164-11e8-a815-080027399c33
Name:         pvc-538856a9-2164-11e8-a815-080027399c33
Labels:       <none>
Annotations:  EXPORT_block=
EXPORT
{
                 Export_Id = 2;
                 Path = /export/pvc-538856a9-2164-11e8-a815-080027399c33;
                 Pseudo = /export/pvc-538856a9-2164-11e8-a815-080027399c33;
                 Access_Type = RW;
                 Squash = no_root_squash...
                 Export_Id=2
                 Project_Id=0
                 Project_block=
                 Provisioner_Id=d9678e89-2159-11e8-ba8d-be14ae36d595
                 kubernetes.io/createdby=nfs-dynamic-provisioner
                 pv.kubernetes.io/provisioned-by=localdev/nfs
                 volume.beta.kubernetes.io/mount-options=vers=4.1
StorageClass:    az1
Status:          Released
Claim:           test/rename
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        1Gi
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.3.161.242
    Path:      /export/pvc-538856a9-2164-11e8-a815-080027399c33
    ReadOnly:  false
Events:        <none>


ncdc commented Mar 6, 2018

@sands6 are you trying to restore into the same cluster, and nothing (e.g. PVs) has been deleted?


sands6 commented Mar 6, 2018

@ncdc yes, for test purposes I am trying to restore to the same cluster. I have not taken a snapshot of the PV, so I want to make sure the PV does not get deleted if I delete the PVC.
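
For reference, the PV in my case already shows Reclaim Policy: Retain, so if I understand correctly it should survive deletion of the PVC. As a rough check (using my PV name; the patch would only be needed if the policy were Delete):

kubectl get pv pvc-538856a9-2164-11e8-a815-080027399c33 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv pvc-538856a9-2164-11e8-a815-080027399c33 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'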


ncdc commented Mar 6, 2018

OK, two things here:

  1. We don't support backing up NFS volumes right now. We only support AWS EBS, GCE PD, and Azure Managed Disks, plus anything someone writes a plugin for. We are working on support for any filesystem in any pod/container, but that's not available yet.

  2. We don't currently support restoring PVs into the same cluster - that will be supported once we resolve #192 (Support restoring a PV into the same cluster). A rough manual workaround is sketched below.
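
In the meantime, a manual workaround some people use (not something Ark does for you, and not tested in this thread) is to clear the stale claimRef on the Released PV so the restored PVC can bind to it again, roughly:

kubectl patch pv pvc-538856a9-2164-11e8-a815-080027399c33 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'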


sands6 commented Mar 6, 2018

@ncdc do you support the NetApp Trident storage orchestrator? Trident exposes NFS volumes as backends via storage classes. Do you have any plans regarding support for Trident?

If I carry over my NFS backend, attach it to a new cluster, and restore the namespace from Ark, will my test work?


ncdc commented Mar 6, 2018

We currently directly support AWS EBS, GCE PD, and Azure Managed Disks in Ark itself. If something has an API for doing backups/snapshots/restores of data mounted as a Kubernetes PersistentVolume, it would be possible to write a plugin for Ark for it.

We don't have any current plans for Netapp Trident. Do you know if it has snapshot/restore APIs?

I would expect you to be able to restore into a new cluster and have NFS attach ok. Please try it out and let us know!
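
Roughly, that test would look something like this (exact CLI syntax may differ slightly between Ark versions, and the backup name is just an example):

# on the original cluster
ark backup create arktestns --include-namespaces test

# on the new cluster, configured against the same object storage bucket
ark restore create arktestns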


sands6 commented Mar 7, 2018

@ncdc I have tested with NFS on a new cluster and the Ark restore is successful. Thanks for the help.

NetApp Trident does not currently support snapshot/restore via an API, but will in the near future.

sands6 closed this as completed Mar 7, 2018

depatl commented Nov 14, 2018

@sands6 do you have a procedure for how to do an Ark restore with NFS (I am using Trident for my PVs)?

halhelal commented

@ncdc what can one use for on-premise? Is NFS backup supported? @sands6 seems to have used NFS for backup and restore; could you please elaborate? Our case is a disconnected (no internet) environment with on-premise storage from NetApp via Trident, or GlusterFS.


skriss commented May 20, 2019

@halhelal if you need to back up PVs and your storage provider doesn't have a Velero plugin, you can use restic (https://velero.io/docs/v1.0.0/restic/) to back up the data. You'll need an object store (typically for on-prem this is some S3-compatible system) to store all of the backup data, both YAML and PV data.
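
The gist of the restic flow, per the docs linked above (pod, volume, and backup names here are placeholders), is to annotate each pod whose volume data you want backed up and then take a normal backup:

kubectl -n <namespace> annotate pod/<pod-name> backup.velero.io/backup-volumes=<volume-name>
velero backup create <backup-name> --include-namespaces <namespace>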

Nanduyana commented

> We currently directly support AWS EBS, GCE PD, and Azure Managed Disks in Ark itself. If something has an API for doing backups/snapshots/restores of data mounted as a Kubernetes PersistentVolume, it would be possible to write a plugin for Ark for it.
>
> We don't have any current plans for Netapp Trident. Do you know if it has snapshot/restore APIs?
>
> I would expect you to be able to restore into a new cluster and have NFS attach ok. Please try it out and let us know!

When I tried a velero restore with AWS EBS, I see the status of the PVC as Lost.
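
Is there anything I should check here? For example, if I'm reading the docs right, this should show whether the backup actually captured any EBS volume snapshots (backup name is a placeholder):

velero backup describe <backup-name> --details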
