[FEATURE] Document recommended migration strategy from fs backend #624
Comments
Maybe I am just missing it, but I don't really see anything in the documentation that describes what these backends are.
I presume a description of these backends is given at https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-single-node-single-drive.html:
As pointed out above, "don't upgrade" doesn't really work. The migration description should also cover procedures that ensure users and policies end up as they were in the current install.
Thanks - I didn't see anything that directly mapped the config file values (fs, xl, xl-single), but that makes sense now. I understand not having much documentation when it's more of an implementation detail that users don't need to know about, but with the removal of the filesystem backend code in newer versions, this is now something that's good to know.
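For anyone unsure which backend an existing deployment is on, the on-disk format marker is a quick check. A sketch, assuming a single-node deployment with /data as its data directory (format.json is internal, so the field name here is an observation rather than a documented interface):

```sh
# The backend type is recorded in format.json inside the data
# directory. A legacy deployment reports "format": "fs"; an
# erasure-coded deployment reports "format": "xl".
cat /data/.minio.sys/format.json
```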
Let me clear this up a bit, based on the docs. Before June 2022, a fresh single-drive deployment initialized the legacy fs backend. After June 2022, a fresh single-drive deployment initializes the xl-single backend instead. And if your deployment is multi-drive from the start, it runs in distributed (xl) mode by default.

To replicate: deploy MinIO with a release older than June, then redeploy with a newer release; a brand-new deployment on the newer release uses the new backend (see the container sketch below).
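A rough way to observe this with containers; the image tags are placeholders (not the specific releases elided above) and the host paths are hypothetical:

```sh
# Start a pre-June-2022 release on an empty drive; it initializes
# the legacy fs backend. OLD_TAG / NEW_TAG are placeholders, not
# the specific releases mentioned above.
docker run -d --name minio-old -v /mnt/data1:/data \
  minio/minio:OLD_TAG server /data

# Start a post-June-2022 release on a second empty drive; a fresh
# single-drive deployment initializes the xl-single backend.
docker run -d --name minio-new -v /mnt/data2:/data \
  minio/minio:NEW_TAG server /data

# Compare the format markers each deployment wrote.
cat /mnt/data1/.minio.sys/format.json
cat /mnt/data2/.minio.sys/format.json
```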
Hey folks! Thanks for opening this issue. We're looking into an in-depth guide along with updating our current docs to note that FS is now fully deprecated. The short-term solution is to use
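The sentence above is truncated, but assuming the tool in question is mc mirror (my assumption), the parallel-deployment copy might look like this, with both servers running side by side and the aliases, endpoints, and credentials all being placeholder values:

```sh
# Point mc at both deployments (endpoints and credentials are
# placeholders for your actual values).
mc alias set old http://127.0.0.1:9000 OLD_ACCESS_KEY OLD_SECRET_KEY
mc alias set new http://127.0.0.1:9001 NEW_ACCESS_KEY NEW_SECRET_KEY

# Recreate each bucket on the new deployment and mirror its
# contents across; --preserve keeps object attributes where the
# backend supports it.
for bucket in $(mc ls old --json | jq -r .key | sed 's#/$##'); do
  mc mb "new/$bucket"
  mc mirror --preserve "old/$bucket" "new/$bucket"
done
```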
@jtackaberry basically has it right. There are some safety bars we'd document along the way to try and help prevent stuff from getting left behind, and we can talk about moving to S3 clients to help ease migrating workloads that previously relied on POSIX access. We're discussing some scoping internally - there are a lot of ways a person could deploy in FS mode. Do we try to cover fixing a setup where someone deployed to the root drive and now needs to migrate to subfolders? Should we try to address the NAS deploys when we don't have access to the necessary hardware to test what might go wrong? Do we have to worry about something set up on top of RAID, LVM, ZFS, etc.? Or should it just work now, given a few assumptions around the storage itself? It takes a little while to chew and test through all of those things. And we'd be more than happy to take community feedback from folks willing to try and see what works. We can't promise that we will intake everything, but it helps to know that, given a baseline of guidance, users can try to work through the rest. As an aside:
While true of the XL backend, we'd love to document this (and other "internal" things) in more detail. It's really cool stuff! Erasure coding is cool! Replication internals are cool! The scanner is cool! We just have limited resources and a lot to write, so some cool internal things get put on the backlog so we can focus on features like Batch Replication, or on keeping our Kubernetes docs updated as we continue to iterate on and improve the MinIO Operator.
@ravindk89: Thank you very much for looking into it! For our setup, I would expect there to be some way to migrate to the new storage backend without any major assumptions about the filesystem behind it. We use ZFS, but I would expect any regular Linux FS to be generally fine. NFS may have issues, but NFS users are probably used to that anyway. It doesn't look realistic for you to test all options in house, so we, as a community, could mainly contribute by reporting back on how it went. As for migration, we also need a mechanism to copy all the users. From reading the docs, it seems that we could use site replication; as far as I can see, there is no other mechanism for transferring user login data and the policies. However, replication does introduce versioning on all buckets, as far as I understand. So, if there is a clever way
Or am I missing something? Is there a catch in such an approach, or a better way? It would be great if someone who knows the API could compose a script that would do it, or find some other solution.
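Until an official path lands, one rough script along the lines being asked for, sketched with stock mc admin subcommands; it assumes new secrets must be issued (the admin API does not return existing ones) and that any custom policies have already been re-created on the new deployment:

```sh
# Re-create every user from the old deployment on the new one with
# a freshly generated secret, then reattach the policy mapping.
for user in $(mc admin user list old --json | jq -r .accessKey); do
  secret=$(openssl rand -hex 20)
  echo "$user $secret"   # record the newly issued credentials
  mc admin user add new "$user" "$secret"

  # Carry over the user's policy assignment, if any.
  policy=$(mc admin user info old "$user" --json | jq -r .policyName)
  [ "$policy" != "null" ] && mc admin policy set new "$policy" user="$user"
done
```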
Site Replication would not work, as FS mode does not support it (or any other versioning-dependent feature). This is a big reason why we want to fully deprecate FS/Gateway mode: it means we can make a number of assumptions moving forward about what our user base works with. Migration at this point would require a series of steps - exporting IAM, users, groups, etc. using
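For what it's worth, recent mc releases ship a bulk IAM export/import that may be what is being referred to here; whether those subcommands exist in your mc version, and whether they work against an FS-mode server, is an assumption to verify with `mc admin cluster iam --help`:

```sh
# Export IAM data (users, groups, policies, mappings) from the old
# deployment into a zip archive in the current directory, then
# import it into the new deployment. The archive name shown is the
# pattern current mc versions use; confirm against your version.
mc admin cluster iam export old
mc admin cluster iam import new old-iam-info.zip
```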
Makes sense. With exporting IAM, users, and such - is it possible to export secret keys as well? I presume not. Although, as far as I can see, the secret key is in cleartext in MinIO's .minio.sys subfolders, so those could be scraped as well. Edit: hmm, that cleartext is only there for a few users - probably some of the first ones. Later it has
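For reference, the on-disk location being discussed looks like the following on an FS deployment I poked at; these are undocumented internals, so treat the paths as observations rather than an interface, and "someuser" as a placeholder:

```sh
# IAM records for an FS deployment live under .minio.sys/config/iam/
# inside the data directory. Without SSE, identity.json holds the
# credentials in cleartext; with SSE configured, the payload is
# encrypted.
ls /data/.minio.sys/config/iam/users/
cat /data/.minio.sys/config/iam/users/someuser/identity.json
```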
So for that, it depends on whether you enabled SSE encryption! I thought we'd go through and encrypt all user data, but maybe it's only for users created after setting up SSE - I will have to look into that. SSE-enabled FS backends are a whooole bugbear to wrestle :D
Yes, we have
OK - I think you would have to export and re-create the users either way, versus a direct copy - you can just use the same encryption key. We're going to stick to basics using
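If the old deployment was encrypting with a static key, carrying it over is a matter of starting the new server with the same MINIO_KMS_SECRET_KEY value; the key material below is the placeholder sample from MinIO's own docs, not a real key:

```sh
# Reuse the old deployment's static KMS key on the new server so
# SSE-encrypted IAM payloads remain decryptable after migration.
# Format is <key-name>:<base64-encoded 32-byte key>.
export MINIO_KMS_SECRET_KEY="my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
minio server /data
```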
Thanks! Looking forward to getting the instructions, and hoping that the secret keys (in encrypted form) can be transferred over as well, without going around and asking all users to re-enter their keys. If it requires the same KMS key - no problem :)
Closes #624. Creates a new page under the install-deploy-manage Operator docs. This page summarizes the changes that led to the deprecation of the Gateway/Filesystem backend and outlines the steps required to create a new Single-Node Single-Drive deployment with the contents, settings, and IAM policies from the old deployment.
Yes, for NFS it fails completely, as reported in minio/minio#16163 (the issue is not resolved, and was arguably treated rather superficially before being closed as "working as intended").
Now that the fs backend has been removed in RELEASE.2022-10-29T06-21-33Z, there should be some documented strategy for how users might migrate off the fs backend to a supported backend. Based on the discussion at minio/minio#15967, "don't upgrade" is the current documented recommendation, but this isn't a viable strategy unless an automated migration path is introduced in the future.
Perhaps the only solution is for users to create a fresh parallel deployment and explicitly copy all the objects over. Whatever the recommendation is, it should be documented to give users a path to continue to receive updates and security fixes.
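Whichever copy mechanism the docs settle on, a verification pass afterwards is cheap. A sketch using stock mc subcommands, with the aliases and bucket name as hypothetical placeholders:

```sh
# Compare the copied data between deployments, bucket by bucket.
# `mc diff` lists objects that differ in size or exist on only one
# side; `mc du` gives a quick total-size sanity check.
mc diff old/mybucket new/mybucket
mc du old/mybucket
mc du new/mybucket
```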