While I like the concept of snapraid-btrfs, it seems deeply flawed in several respects. The first issue I ran into was a filesystem that showed over 1400 unrepairable errors under btrfs scrub. About 40 were in a file, which I easily replaced; the others must have been in the metadata.
I know from experience that when there are unrecoverable metadata errors in btrfs, actions like subvolume delete will eventually render the volume unrecoverable. This seemed like a case for snapraid to the rescue: replace the drive and restore onto a fresh btrfs filesystem. But that left two problems:
1. I didn't want to take the data offline.
2. I didn't trust the parity, since no errors were reported in the snapraid log, not even for the file I had to replace.
I had enough free space on another drive, so instead I reloaded the contents of the failing drive from a primary source onto that other drive. I then deleted the contents of the failing drive and replaced it, mounting a freshly formatted drive in place of the original.
The problem is that every time I ran snapraid-btrfs or snapraid-btrfs-runner, it refused to run, indicating I needed to pass the --force-empty argument to snapraid. I could find no way to do this with the script, and running snapraid directly did not work.
Eventually I deleted all the content files and decided to create brand new parity.
The next problem was that I accidentally started the runner without invoking "screen" first, so eventually my terminal timed out. No problem, I thought: snapraid sync is supposed to automatically resume if interrupted. But no matter what I tried with snapraid-btrfs and the runner, I could not get it to sync or resume. I finally ended up starting all over.
Another flawed concept is file recovery. If one has a snapshot of the data, the best way to recover an erased file is cp -a --reflink ... Yet the snapraid-btrfs fix command just calls snapraid fix.
So let's say I accidentally delete a 4TB folder on a 6TB disk. If I am not savvy enough to know I can copy it from the snapshot using the --reflink option, I will use snapraid-btrfs fix. That will quickly run out of space on the drive, because the restored files are effectively being stored twice. I cannot just run a new sync to free up space, as the README suggests, because as soon as I do, the rest of the files I wish to restore will be gone forever.
Ideally, what one would want snapraid-btrfs to do is first make a reflink copy to restore the missing files, and then optionally repair any bad blocks in place using the snapraid parity. This is of course a far more complicated operation, and if you wish to properly handle all possible error conditions, a shell script is unlikely to prove resilient enough.
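As a minimal sketch of that two-step recovery, here is the idea using throwaway paths under /tmp as stand-ins for a real data disk and its snapper snapshot; the directory layout and names are assumptions, not part of snapraid-btrfs:

```shell
# Stand-ins for the read-only snapshot and the live subvolume
# (assumed snapper-style layout; real paths will differ).
snap=/tmp/reflink-demo/.snapshots/1/snapshot
live=/tmp/reflink-demo/live
mkdir -p "$snap/photos" "$live"
echo 'example data' > "$snap/photos/img.txt"

# Step 1: restore the missing files by reflink from the snapshot.
# On btrfs this clones extents instantly and consumes no extra data
# space; --reflink=auto falls back to a plain copy on filesystems
# without CoW support, so the same command is safe anywhere.
cp -a --reflink=auto "$snap/photos" "$live/photos"

# Step 2 (not runnable in this demo): with the files back in place,
# let snapraid repair only the blocks that are actually bad, e.g.:
#   snapraid -d d1 fix
cat "$live/photos/img.txt"
```

Step 1 is instant and space-free on btrfs, which is exactly why doing it before any snapraid fix avoids the double-storage problem described above.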
> The problem is that every time I ran snapraid-btrfs or snapraid-btrfs-runner, it refused to run, indicating I needed to pass the --force-empty argument to snapraid. I could find no way to do this with the script, and running snapraid directly did not work.
You can specify the options after the command rather than before, and it will work. So
snapraid-btrfs sync --force-empty
works, while snapraid-btrfs --force-empty sync does not. The latter is what I initially tried, since that was the command printed out by snapraid itself.