
[FEATURE] Document recommended migration strategy from fs backend #624

Closed
jtackaberry opened this issue Oct 29, 2022 · 13 comments · Fixed by #628

Comments

@jtackaberry

Now that the fs backend has been removed in RELEASE.2022-10-29T06-21-33Z, there should be some documented strategy on how users might migrate off the fs backend to a supported backend.

Based on the discussion at minio/minio#15967, "don't upgrade" is the current documented recommendation, but this isn't a viable strategy unless there will be an automated migration path introduced in the future.

Perhaps the only solution is for users to create a fresh parallel deployment and explicitly copy all the objects over. Whatever the recommendation is, it should be documented to give users a path to continue to receive updates and security fixes.

@jtackaberry jtackaberry added the triage Needs triage and scheduling label Oct 29, 2022
@mbentley

Maybe I am just missing it, but I don't really see anything in the documentation that describes what these backends are (fs, xl, and xl-single). "Don't upgrade" isn't a strategy. I don't want to wait until there is a vulnerability discovered in that older version that can't be patched and then be left scrambling for a solution at that point.

@markovendelin

I presume description of these backends is given in https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-single-node-single-drive.html:

  • fs - "Standalone Mode or ‘filesystem’ mode."
  • xl / xl-single - not very clear, but maybe described in the 2nd paragraph

As pointed out above, "don't upgrade" doesn't really work. The migration documentation should also cover procedures for ensuring that users and policies are preserved from the current install.

@mbentley

Thanks - I didn't see anything that directly mapped the config file values (fs, xl, xl-single), but that makes sense now. I understand not having much documentation when it's more of an implementation detail that users don't need to know about, but with the removal of the filesystem backend code in newer versions, this is now something that's good to know.

@martadinata666

martadinata666 commented Oct 31, 2022

Let me clear this up a bit, based on the docs.

Before June 2022

  • single-drive deployment -> fs mode (called standalone mode)

After June 2022

  • single-drive deployment -> xl-single (single-node single-drive, which uses the erasure-coded backend)

And if your deployment was multi-drive from the beginning, it runs in distributed mode (xl by default), so the removed code won't affect it.

RELEASE.2022-10-29T06-21-33Z ripped out fs mode, leaving MinIO deployments created on a single drive before June 2022 broken. And the one suggested strategy is "don't upgrade" - at least, that is what the developers suggest. So we can only wait and see whether fs mode gets reimplemented ("likely won't happen") or a migration guide appears some day.

Replicate:

version: "3.8"
services:
  s3:
    image: quay.io/minio/minio:RELEASE.2022-05-26T05-48-41Z
    #image: quay.io/minio/minio:RELEASE.2022-10-29T06-21-33Z
    command: server /data --console-address ":9001"
    user: 1000:1000
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
    #ports:
    #  - 9000:9000
    #  - 9001:9001
    volumes:
      - ./data:/data

Deploy MinIO with a release older than June 2022 (I'm using RELEASE.2022-05-26T05-48-41Z); it will default to fs mode. Look at .minio.sys/format.json:

{"version":"1","format":"fs","id":"61cccbcb-16ce-4b4d-8eac-2f154a795c6e","fs":{"version":"2"}}

And redeploy with RELEASE.2022-10-29T06-21-33Z

ERROR Unable to use the drive /data: Drive /data: found backend type fs, expected xl or xl-single: Invalid arguments specified

For a fresh deployment using RELEASE.2022-10-29T06-21-33Z, .minio.sys/format.json will show:

{"version":"1","format":"xl-single","id":"11d2b225-782c-411d-b3b6-212536ccbe79","xl":{"version":"3","this":"f2e61ee3-b5f8-470a-9e84-b98384b832d0","sets":[["f2e61ee3-b5f8-470a-9e84-b98384b832d0"]],"distributionAlgo":"SIPMOD+PARITY"}}
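The difference between the two payloads can be checked programmatically. A minimal sketch (not from the thread) that reads the "format" field of a deployment's .minio.sys/format.json to tell whether it is affected by the fs removal:

```python
import json

def backend_format(format_json: str) -> str:
    """Return the backend type ("fs", "xl", or "xl-single") recorded
    in the contents of a deployment's .minio.sys/format.json."""
    return json.loads(format_json)["format"]

# The fs payload quoted above, and the xl-single payload (trimmed):
fs_meta = '{"version":"1","format":"fs","id":"61cccbcb-16ce-4b4d-8eac-2f154a795c6e","fs":{"version":"2"}}'
xl_meta = '{"version":"1","format":"xl-single","id":"11d2b225-782c-411d-b3b6-212536ccbe79"}'

print(backend_format(fs_meta))  # "fs" -> broken by RELEASE.2022-10-29T06-21-33Z
print(backend_format(xl_meta))  # "xl-single" -> unaffected
```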

@ravindk89
Collaborator

ravindk89 commented Oct 31, 2022

Hey folks! Thanks for opening this issue. We're looking into an in-depth guide along with updating our current docs to note that FS is now fully deprecated.

The short-term solution is to use mc to copy data over to a newer SNSD setup. mc - well, really any tool that can talk the S3 API - can handle copying the files over. As long as the SNSD setup receives a GET op, it does the rest. So anything like mc cp --recursive FS/BUCKET SNSD/BUCKET or mc mirror --watch FS/BUCKET SNSD/BUCKET would work.
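The commands above could be combined into a minimal migration runbook, roughly like the following. This is a sketch, not from the thread: the aliases oldfs and newsnsd, the endpoints, and the credentials are all hypothetical placeholders, and the commands need two live MinIO endpoints to actually run.

```shell
# Hypothetical aliases; substitute your own endpoints and credentials.
mc alias set oldfs   http://old-server:9000 minioadmin minioadmin
mc alias set newsnsd http://new-server:9000 minioadmin minioadmin

# One-shot copy of a single bucket:
mc cp --recursive oldfs/BUCKET newsnsd/BUCKET

# Or continuously mirror until cutover:
mc mirror --watch oldfs/BUCKET newsnsd/BUCKET

# Sanity check that nothing was left behind:
mc diff oldfs/BUCKET newsnsd/BUCKET
```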

Perhaps the only solution is for users to create a fresh parallel deployment and explicitly copy all the objects over.

@jtackaberry basically has it right. There are some safety bars we'd document along the way to try and help prevent stuff from getting left behind, and we can talk about migrating to S3 clients to help ease migrating workloads that previously relied on the POSIX access.

We're discussing some scoping internally - there are a lot of ways a person could deploy in FS mode. Do we try to cover fixing a setup where someone deployed to the root drive and now needs to migrate to subfolders? Should we try to address NAS deploys when we don't have access to the necessary hardware to test what might go wrong? Do we have to worry about setups on top of RAID, LVM, ZFS, etc.? Or should it all just work, given a few assumptions about the storage itself?

It takes a little while to chew through and test all of those things. And we'd be more than happy to take community feedback from folks willing to try things out and see what works. We can't promise that we will take everything in, but it helps to know that, given a baseline of guidance, users can work through the rest.

As an aside:

it's more of an implementation detail that users don't need to know about

While true of the XL backend, we'd love to document this (and other "internal") things in more detail. It's really cool stuff! Erasure Coding is cool! Replication internals are cool! The scanner is cool!

We just have limited resources, and a lot to write. So some cool internal things get put on the backlog so we can focus on features like Batch Replication, or keeping our Kubernetes docs updated as we continue to iterate and improve the MinIO Operator.

@ravindk89 ravindk89 added priority: high community and removed triage Needs triage and scheduling labels Oct 31, 2022
@markovendelin

@ravindk89: Thank you very much for looking into it!

On our setup, I would expect there to be some way to migrate to the new storage backend without any major assumptions about the filesystem behind it. We use ZFS, but I would expect any regular Linux filesystem to be generally fine. NFS may have issues, but NFS users should probably be used to that anyway. It doesn't look realistic for you to test all the options in house, so we, as a community, could mainly contribute by reporting back on how it went.

As for migration, we also need a mechanism to copy all the users. From reading the docs, it seems we could use site replication; as far as I can see, there is no other mechanism for transferring user login data and policies. However, replication introduces versioning on all buckets, as far as I understand. So perhaps there is a clever way:

  • list all versioned/non-versioned buckets on the original site
  • replicate to the new install
  • stop the old MinIO and break the replication link between the sites
  • disable versioning on the buckets that did not use it before, and wipe the older versions that remain on those buckets

Or am I missing something - is there a catch in this approach, or a better way? It would be great if someone who knows the API could compose a script to do this, or find some other solution.

@ravindk89
Collaborator

Site Replication would not work, as the FS-mode does not support it (or any other versioning-dependent feature). This is a big reason why we want to fully deprecate FS/Gateway mode, as it means we can make a number of assumptions moving forward on what our user base works with.

Migration at this point would require a series of steps - exporting IAM, users, groups, etc. using mc commands. There's no one-stop migration command to run here (unfortunately).
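Those steps might look roughly like the following with 2022-era mc subcommand names (later mc releases renamed some of these, e.g. policy add/set became policy create/attach). This is a sketch: the aliases oldfs and newsnsd, the policy and user names, and the secret are hypothetical, and the commands need live deployments to run.

```shell
# Inventory IAM on the old deployment:
mc admin user list oldfs
mc admin group list oldfs
mc admin policy list oldfs

# Export one custom policy and re-create it on the new deployment:
mc admin policy info oldfs mypolicy > mypolicy.json
mc admin policy add newsnsd mypolicy mypolicy.json

# Re-create a user and re-attach the policy (secret keys cannot be
# read back, so new credentials must be issued):
mc admin user add newsnsd myuser mynewsecret
mc admin policy set newsnsd mypolicy user=myuser
```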

@markovendelin

markovendelin commented Oct 31, 2022

Makes sense. With exporting IAM, users, and such - is it possible to export secret keys as well? I presume not, really. Although, as far as I can see, the secret key is stored in cleartext under the .minio.sys subfolders, so those could also be scraped somehow.

Edit: hmm, that cleartext is only there for a few users - probably some of the first ones. Later entries have a kmskey instead.

@ravindk89
Collaborator

So for that, it depends on whether you enabled SSE encryption! I thought we'd go through and encrypt all user data, but maybe it's only for users created after setting up SSE - will have to look into that.

SSE-enabled FS backends is a whooole bugbear to wrestle :D

@markovendelin

Yes, we have MINIO_KMS_SECRET_KEY set.

@ravindk89
Collaborator

OK - I think you would have to export and re-create the users either way, rather than doing a direct copy - but you can just use the same encryption key.

We're going to stick to basics using mc with as few POSIX-level ops as possible.

@markovendelin

Thanks! Looking forward to getting the instructions, and hoping that the secret keys (in encrypted form) can be transferred over as well, without going around and asking all users to re-enter their keys. If it requires the same KMS key - no problem :)

djwfyi added a commit that referenced this issue Oct 31, 2022
Closes #624

Creates a new page under install-deploy-manage Operator docs.
This page summarizes the changes that led to the deprecation of the Gateway/Filesystem.
It overviews the steps required to create a new Single-Node, Single-Drive deployment
with the contents, settings, and IAM policies from the old deployment.
djwfyi added a commit that referenced this issue Nov 1, 2022
Closes #624

Creates a new page under install-deploy-manage Operator docs. This page
summarizes the changes that led to the deprecation of the
Gateway/Filesystem. It overviews the steps required to create a new
Single-Node, Single-Drive deployment with the contents, settings, and
IAM policies from the old deployment.
@mirekphd

mirekphd commented Dec 3, 2022

NFS may have issues, but NFS users should probably be used to it anyway. It doesn't look to be realistic to test all options for you in house. So, we, as a community, could mainly contribute via reporting back on how it went.

Yes, for NFS it fails completely, as reported here: minio/minio#16163 (the issue is not resolved, and has been treated arguably rather superficially before closing with a tag "working as intended").

@minio minio locked as resolved and limited conversation to collaborators Dec 3, 2022

7 participants