DOCS-625: Removing FS Mode guidance, cleanups to deploy docs #626

Merged 2 commits on Oct 31, 2022

source/_static/scss/includes/_reset.scss (1 addition, 1 deletion)

@@ -287,7 +287,7 @@ dl {
   }
 
   dd {
-    margin: 0;
+    margin: 0 0 0 1rem;
   }
 }

source/includes/common/installation.rst (16 additions, 56 deletions)

@@ -14,69 +14,29 @@ MinIO is a software-defined high performance distributed object storage server.
 You can run MinIO on consumer or enterprise-grade hardware and a variety
 of operating systems and architectures.
 
-MinIO supports three deployment topologies:
+All MinIO deployments implement :ref:`Erasure Coding <minio-erasure-coding>` backends.
+You can deploy MinIO using one of the following topologies:
 
-Single-Node Single-Drive (SNSD or "Standalone")
-   A single MinIO server with a single storage volume or folder.
-   |SNSD| deployment provides no added failover protections. Drive-level reliability and failover depend on the underlying storage volume.
-
-   |SNSD| deployments are best suited for evaluation and initial development of applications using MinIO for object storage.
-
-   |SNSD| deployments implement a zero-parity erasure coding backend and include support for the following erasure-coding dependent features:
-
-   - :ref:`Versioning <minio-bucket-versioning>`
-   - :ref:`Object Locking / Retention <minio-object-retention>`
+.. _minio-installation-comparison:
 
-Single-Node Multi-Drive (SNMD or "Standalone Multi-Drive")
-   A single MinIO server with four or more storage volumes.
-   |SNMD| deployments provide drive-level reliability and failover only.
+:ref:`Single-Node Single-Drive <minio-snsd>` (SNSD or "Standalone")
+   Local development and evaluation with no/limited reliability
 
-Multi-Node Multi-Drive (MNMD or "Distributed")
-   Multiple MinIO servers with at least four drives across all servers.
-   The distributed |MNMD| topology supports production-grade object storage with drive and node-level availability and resiliency.
+:ref:`Single-Node Multi-Drive <minio-snmd>` (SNMD or "Standalone Multi-Drive")
+   Workloads with lower performance, scale, and capacity requirements
+
+   Drive-level reliability with configurable tolerance for loss of up to 1/2 of all drives
+
+   Evaluation of multi-drive topologies and failover behavior.
 
-For tutorials on deploying or expanding a distributed MinIO deployment, see:
+:ref:`Multi-Node Multi-Drive <minio-mnmd>` (MNMD or "Distributed")
+   Enterprise-grade high-performance object storage
 
-- :ref:`deploy-minio-distributed`
-- :ref:`expand-minio-distributed`
+   Multi Node/Drive level reliability with configurable tolerance for loss of up to 1/2 of all nodes/drives
 
-.. _minio-installation-comparison:
+   Primary storage for AI/ML, Distributed Query, Analytics, and other Data Lake components
 
-The following table compares the key functional differences between MinIO deployments:
+   Scalable for Petabyte+ workloads - both storage capacity and performance
 
-.. list-table::
-   :header-rows: 1
-   :width: 100%
-
-   * -
-     - :guilabel:`Single-Node Single-Drive`
-     - :guilabel:`Single-Node Multi-Drive`
-     - :guilabel:`Multi-Node Multi-Drive`
-
-   * - Site-to-Site Replication
-     - Client-Side via :mc:`mc mirror`
-     - :ref:`Server-Side Replication <minio-bucket-replication>`
-     - :ref:`Server-Side Replication <minio-bucket-replication>`
-
-   * - Versioning
-     - No
-     - :ref:`Object Versioning <minio-bucket-versioning>`
-     - :ref:`Object Versioning <minio-bucket-versioning>`
-
-   * - Retention
-     - No
-     - :ref:`Write-Once Read-Many Locking <minio-bucket-locking>`
-     - :ref:`Write-Once Read-Many Locking <minio-bucket-locking>`
-
-   * - High Availability / Redundancy
-     - Drive Level Only (RAID and similar)
-     - Drive Level only with :ref:`Erasure Coding <minio-erasure-coding>`
-     - Drive and Server-Level with :ref:`Erasure Coding <minio-erasure-coding>`
-
-   * - Scaling
-     - No
-     - :ref:`Server Pool Expansion <expand-minio-distributed>`
-     - :ref:`Server Pool Expansion <expand-minio-distributed>`
 
 Site Replication
 ----------------
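
For a concrete sense of how the three topologies in the revised overview differ at startup, the following sketch shows representative `minio server` invocations; the hostnames and mount paths are placeholders, not values from this changeset:

```sh
# SNSD: one server, one drive or folder (zero-parity erasure coded backend)
minio server /mnt/data

# SNMD: one server, four or more drives on the same host
minio server /mnt/drive{1...4}

# MNMD: four servers with four drives each, pooled into one cluster
minio server http://minio{1...4}.example.net/mnt/drive{1...4}
```

MinIO expands the `{1...4}` range notation into the full set of hosts and drive paths.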

@@ -11,33 +11,11 @@ Deploy MinIO: Multi-Node Multi-Drive
    :local:
    :depth: 1
 
-Overview
---------
-
-A distributed MinIO deployment consists of 4 or more drives/volumes managed by
-one or more :mc:`minio server` processes, where the processes manage pooling the
-compute and storage resources into a single aggregated object storage resource.
-Each MinIO server has a complete picture of the distributed topology, such that
-an application can connect to any node in the deployment and perform S3
-operations.
-
-Distributed deployments implicitly enable :ref:`erasure coding
-<minio-erasure-coding>`, MinIO's data redundancy and availability feature that
-allows deployments to automatically reconstruct objects on-the-fly despite the
-loss of multiple drives or nodes in the cluster. Erasure coding provides
-object-level healing with less overhead than adjacent technologies such as RAID
-or replication.
-
-Depending on the configured :ref:`erasure code parity <minio-ec-parity>`, a
-distributed deployment with ``m`` servers and ``n`` disks per server can
-continue serving read and write operations with only ``m/2`` servers or
-``m*n/2`` drives online and accessible.
-
-Distributed deployments also support the following features:
-
-- :ref:`Server-Side Object Replication <minio-bucket-replication-serverside>`
-- :ref:`Write-Once Read-Many Locking <minio-bucket-locking>`
-- :ref:`Object Versioning <minio-bucket-versioning>`
+The procedures on this page cover deploying MinIO in a Multi-Node Multi-Drive (MNMD) or "Distributed" configuration.
+|MNMD| deployments provide enterprise-grade performance, availability, and scalability and are the recommended topology for all production workloads.
+
+|MNMD| deployments support :ref:`erasure coding <minio-ec-parity>` configurations which tolerate the loss of up to half the nodes or drives in the deployment while continuing to serve read operations.
+Use the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator?ref=docs>`__ when planning and designing your MinIO deployment to explore the effect of erasure code settings on your intended topology.
 
 .. _deploy-minio-distributed-prereqs:
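
The parity tolerance summarized in the new text can be made concrete with a small assumed topology. In this sketch the deployment is 4 nodes with 4 drives each (16 drives total); `MINIO_STORAGE_CLASS_STANDARD` is a real MinIO setting, but the parity values and hostnames here are illustrative only:

```sh
# 16 drives total; with EC:4 parity each object survives the loss of any 4 drives.
# At the maximum parity of EC:8 (half the stripe), reads continue with up to
# 8 of the 16 drives offline, matching the "up to half" tolerance described above.
export MINIO_STORAGE_CLASS_STANDARD="EC:4"
minio server http://minio{1...4}.example.net/mnt/drive{1...4}
```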

@@ -11,11 +11,11 @@ Deploy MinIO: Single-Node Multi-Drive
    :depth: 2
 
 The procedures on this page cover deploying MinIO in a Single-Node Multi-Drive (SNMD) configuration.
-This topology provides increased drive-level reliability and failover protection as compared to :ref:`Single-Node Single-Drive (SNSD) deployments <minio-snsd>`.
+|SNMD| deployments provide drive-level reliability and failover/recovery with performance and scaling limitations imposed by the single node.
 
 .. cond:: linux or macos or windows
 
-   For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology.
+   For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology for enterprise-grade performance, availability, and scalability.
 
 .. cond:: container

@@ -11,7 +11,7 @@ Deploy MinIO: Single-Node Single-Drive
    :depth: 2
 
 The procedures on this page cover deploying MinIO in a Single-Node Single-Drive (SNSD) configuration for early development and evaluation.
-This mode was previously called :guilabel:`Standalone Mode` or 'filesystem' mode.
+|SNSD| deployments provide no added reliability or availability beyond what the underlying storage volume implements (RAID, LVM, ZFS, etc.).
 
 Starting with :minio-release:`RELEASE.2022-06-02T02-11-04Z`, MinIO implements a zero-parity erasure coded backend for single-node single-drive deployments.
 This feature allows access to :ref:`erasure coding dependent features <minio-erasure-coding>` without the requirement of multiple drives.

@@ -27,6 +27,13 @@ See the documentation on :ref:`SNSD behavior with pre-existing data <minio-snsd-
 
 For extended development or production environments, deploy MinIO in a :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology
 
+.. important::
+
+   :minio-release:`RELEASE.2022-10-29T06-21-33Z` fully removes the `deprecated Gateway/Filesystem <https://blog.min.io/deprecation-of-the-minio-gateway/>`__ backends.
+   MinIO returns an error if it starts up and detects existing Filesystem backend files.
+
+   To migrate from an FS-backend deployment, use :mc:`mc mirror` or :mc:`mc cp` to copy your data over to a new MinIO |SNSD| deployment.
+   You should also recreate any necessary users, groups, policies, and bucket configurations on the |SNSD| deployment.
 
 .. _minio-snsd-pre-existing-data:
 
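The migration path named in the new admonition might look like the following sketch; the aliases, endpoints, credentials, and bucket name are hypothetical placeholders, not part of this PR:

```sh
# Register the legacy FS-backend server and the new SNSD deployment
mc alias set legacy http://old-minio.example.net:9000 ACCESSKEY SECRETKEY
mc alias set snsd http://new-minio.example.net:9000 ACCESSKEY SECRETKEY

# Copy objects and their metadata, one bucket at a time
mc mb snsd/mybucket
mc mirror --preserve legacy/mybucket snsd/mybucket

# Recreate IAM resources on the new deployment, for example:
mc admin user add snsd myuser mysecretpassword
mc admin policy set snsd readwrite user=myuser
```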

@@ -51,12 +58,14 @@ The following table lists the possible storage volume states and MinIO behavior:
 
    * - Existing |SNSD| zero-parity objects and MinIO backend data
      - MinIO resumes in |SNSD| mode
 
-   * - Existing filesystem folders, files, and MinIO backend data
-     - MinIO resumes in the legacy filesystem ("Standalone") mode with no erasure-coding features
-
    * - Existing filesystem folders, files, but **no** MinIO backend data
      - MinIO returns an error and does not start
 
+   * - Existing filesystem folders, files, and legacy "FS-mode" backend data
+     - MinIO returns an error and does not start
+
+.. versionchanged:: RELEASE.2022-10-29T06-21-33Z
 
 .. _deploy-minio-standalone:
 
 Deploy Single-Node Single-Drive MinIO
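
One way to observe the behavior captured in the last two table rows: point a MinIO release at or after RELEASE.2022-10-29T06-21-33Z at a folder that already holds plain files. Per the table, startup should fail rather than silently fall back to the removed FS mode; the path below is hypothetical:

```sh
mkdir -p /tmp/legacy-data
echo "hello" > /tmp/legacy-data/file.txt

# Expected per the table above: MinIO returns an error and does not start
minio server /tmp/legacy-data
```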