wait completion of staging blocks in writeback mode #2644

Closed
davies opened this issue Aug 30, 2022 · 5 comments · Fixed by #3224
Assignees: Hexilee
Labels: kind/feature (New feature or request)

Comments

davies (Contributor) commented Aug 30, 2022

In writeback mode, we may want all the staging blocks to be uploaded before umount.

We can use the FUSE DESTROY operation as a signal to flush all staging blocks: torvalds/linux@0ec7ca4
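A minimal sketch of the idea (not juicefs code; stagingTracker and its methods are hypothetical names): whichever hook ends up firing at unmount time, DESTROY, a signal handler, or a subcommand, it only needs one primitive that blocks until every staged-but-not-yet-uploaded block has been uploaded.

```go
// Hypothetical sketch: a counter of staged-but-not-uploaded blocks that
// a shutdown hook can wait on. Not real juicefs code.
package staging

import (
	"context"
	"sync"
)

type stagingTracker struct {
	mu      sync.Mutex
	pending int
	zero    chan struct{} // closed whenever pending is zero
}

func newStagingTracker() *stagingTracker {
	t := &stagingTracker{zero: make(chan struct{})}
	close(t.zero) // nothing is pending initially
	return t
}

// add is called when a block is written to the local staging directory.
func (t *stagingTracker) add() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.pending == 0 {
		t.zero = make(chan struct{}) // re-arm the "all uploaded" signal
	}
	t.pending++
}

// uploaded is called when a block has been uploaded to the object store.
func (t *stagingTracker) uploaded() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.pending--
	if t.pending == 0 {
		close(t.zero)
	}
}

// waitAll blocks until every currently staged block is uploaded or ctx expires.
func (t *stagingTracker) waitAll(ctx context.Context) error {
	t.mu.Lock()
	ch := t.zero
	t.mu.Unlock()
	select {
	case <-ch:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```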

davies added the kind/feature (New feature or request) label on Aug 30, 2022
Hexilee self-assigned this on Nov 11, 2022
Hexilee (Contributor) commented Nov 16, 2022

I'm working on this issue. Since go-fuse does not support the DESTROY operation, here are three possible alternatives:

  1. Wait for uploading in the signal handler.
  2. Wait for uploading before the FUSE server exits.
  3. Don't wait, but provide a subcommand like flush to upload data.

The first solution works well with the Kubernetes CSI driver but does nothing when users run the umount command. The second solution keeps uploading data when users run umount, but they cannot unmount immediately since we cannot pass arguments to juicefs through umount. The last solution works in all cases, but users must run the subcommand explicitly. (A rough sketch of option 1 is shown below.)
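A rough sketch of option 1 (installUnmountHandler and flushAndWait are illustrative names, not real juicefs APIs): a signal handler that delays process exit until staging uploads finish or a timeout expires.

```go
// Sketch of option 1: on SIGTERM/SIGINT, wait for staging uploads
// (with a timeout) before exiting. Not real juicefs code.
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func installUnmountHandler(flushAndWait func(context.Context) error) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		sig := <-ch
		log.Printf("received %s, waiting for staging blocks to be uploaded", sig)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := flushAndWait(ctx); err != nil {
			log.Printf("some staging blocks may not have been uploaded: %v", err)
		}
		os.Exit(0)
	}()
}

func main() {
	installUnmountHandler(func(ctx context.Context) error {
		return nil // placeholder: a real build would wait on the staging tracker
	})
	select {} // stand-in for the running FUSE server loop
}
```

In a Kubernetes teardown the process normally receives SIGTERM, which is why this pairs well with the CSI driver, while a plain umount delivers no signal to the daemon, which is exactly the limitation described above.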

vicaya commented Dec 22, 2022

Hi @Hexilee, any updates? As a variation of option 2, juicefs already has an umount subcommand with a -f option. It appears that you could implement the logic there without introducing a new subcommand. A new subcommand like flush would be nice to have as well.

Hexilee (Contributor) commented Dec 30, 2022

> Hi @Hexilee, any updates? As a variation of option 2, juicefs already has an umount subcommand with a -f option. It appears that you could implement the logic there without introducing a new subcommand. A new subcommand like flush would be nice to have as well.

I prefer option 3 personally. If we implement it in juicefs umount, a plain umount still won't flush the staging blocks.

vicaya commented Jan 2, 2023

> If we implement it in juicefs umount, a plain umount still won't flush the staging blocks.

The normal umount wouldn't work safely in any case until FUSE sync is implemented everywhere. The problem with a separate flush command is potential data loss (of data already acknowledged by a successful write(2)) due to a plausible race between flush and umount. You can make juicefs umount free of such a race by first disabling writes, then flushing, waiting for the flush to complete, and finally unmounting.

In any case, people who care about these details just want a reliable way to prevent data loss (e.g. so a writing pod can be spawned elsewhere and retry the writes correctly).
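A sketch of that sequence (setReadOnly and flushAndWait are assumed helpers, not real juicefs APIs): writes are refused first, so nothing new can slip in between the flush and the final umount.

```go
// Sketch of the race-free umount sequence: refuse new writes, flush and
// wait for staging uploads, then detach the mount point. Not juicefs code.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func safeUmount(mountpoint string,
	setReadOnly func() error,
	flushAndWait func(context.Context) error) error {
	// 1. Stop accepting new writes so nothing can race with the flush.
	if err := setReadOnly(); err != nil {
		return fmt.Errorf("disable writes: %w", err)
	}
	// 2. Flush accepted data and wait for all staging blocks to be uploaded.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
	defer cancel()
	if err := flushAndWait(ctx); err != nil {
		return fmt.Errorf("wait for staging uploads: %w", err)
	}
	// 3. Only now is it safe to detach the mount point.
	return exec.Command("umount", mountpoint).Run()
}

func main() {
	err := safeUmount("/jfs",
		func() error { return nil },                    // placeholder
		func(ctx context.Context) error { return nil }) // placeholder
	fmt.Println("umount result:", err)
}
```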

Hexilee (Contributor) commented Feb 8, 2023

> > If we implement it in juicefs umount, a plain umount still won't flush the staging blocks.
>
> The normal umount wouldn't work safely in any case until FUSE sync is implemented everywhere. The problem with a separate flush command is potential data loss (of data already acknowledged by a successful write(2)) due to a plausible race between flush and umount. You can make juicefs umount free of such a race by first disabling writes, then flushing, waiting for the flush to complete, and finally unmounting.
>
> In any case, people who care about these details just want a reliable way to prevent data loss (e.g. so a writing pod can be spawned elsewhere and retry the writes correctly).

Reasonable. I'm going to implement this feature in the umount subcommand.
