diff --git a/source/_static/scss/includes/_reset.scss b/source/_static/scss/includes/_reset.scss
index 85021407..253cc348 100644
--- a/source/_static/scss/includes/_reset.scss
+++ b/source/_static/scss/includes/_reset.scss
@@ -287,7 +287,7 @@ dl {
   }
 
   dd {
-    margin: 0;
+    margin: 0 0 0 1rem;
   }
 }
diff --git a/source/includes/common/installation.rst b/source/includes/common/installation.rst
index f1d1e0ce..d1e184a8 100644
--- a/source/includes/common/installation.rst
+++ b/source/includes/common/installation.rst
@@ -14,69 +14,29 @@
 MinIO is a software-defined high performance distributed object storage server.
 You can run MinIO on consumer or enterprise-grade hardware and a variety of operating systems and architectures.
 
-MinIO supports three deployment topologies:
+All MinIO deployments implement :ref:`Erasure Coding <minio-erasure-coding>` backends.
+You can deploy MinIO using one of the following topologies:
 
-Single-Node Single-Drive (SNSD or "Standalone")
-   A single MinIO server with a single storage volume or folder.
-   |SNSD| deployments provide no failover protections.
-   Drive-level reliability and failover depend on the underlying storage volume.
-
-   |SNSD| deployments are best suited for evaluation and initial development of applications using MinIO for object storage.
-
-   |SNSD| deployments implement a zero-parity erasure coding backend and include support for the following erasure-coding dependent features:
-
-   - :ref:`Versioning <minio-bucket-versioning>`
-   - :ref:`Object Locking / Retention <minio-object-locking>`
+.. _minio-installation-comparison:
 
-Single-Node Multi-Drive (SNMD or "Standalone Multi-Drive")
-   A single MinIO server with four or more storage volumes.
-   |SNMD| deployments provide drive-level reliability and failover only.
+:ref:`Single-Node Single-Drive <minio-snsd>` (SNSD or "Standalone")
+   Local development and evaluation with no/limited reliability
 
-Multi-Node Multi-Drive (MNMD or "Distributed")
-   Multiple MinIO servers with at least four drives across all servers.
-   The distributed |MNMD| topology supports production-grade object storage with drive and node-level availability and resiliency.
+:ref:`Single-Node Multi-Drive <minio-snmd>` (SNMD or "Standalone Multi-Drive")
+   Workloads with lower performance, scale, and capacity requirements
 
-   For tutorials on deploying or expanding a distributed MinIO deployment, see:
+   Drive-level reliability with configurable tolerance for loss of up to 1/2 of all drives
 
-   - :ref:`deploy-minio-distributed`
-   - :ref:`expand-minio-distributed`
+   Evaluation of multi-drive topologies and failover behavior.
 
-.. _minio-installation-comparison:
+:ref:`Multi-Node Multi-Drive <minio-mnmd>` (MNMD or "Distributed")
+   Enterprise-grade high-performance object storage
 
-The following table compares the key functional differences between MinIO deployments:
-
-.. list-table::
-   :header-rows: 1
-   :width: 100%
-
-   * -
-     - :guilabel:`Single-Node Single-Drive`
-     - :guilabel:`Single-Node Multi-Drive`
-     - :guilabel:`Multi-Node Multi-Drive`
-
-   * - Site-to-Site Replication
-     - Client-Side via :mc:`mc mirror`
-     - :ref:`Server-Side Replication <minio-bucket-replication>`
-     - :ref:`Server-Side Replication <minio-bucket-replication>`
-
-   * - Versioning
-     - No
-     - :ref:`Object Versioning <minio-bucket-versioning>`
-     - :ref:`Object Versioning <minio-bucket-versioning>`
-
-   * - Retention
-     - No
-     - :ref:`Write-Once Read-Many Locking <minio-object-locking>`
-     - :ref:`Write-Once Read-Many Locking <minio-object-locking>`
-
-   * - High Availability / Redundancy
-     - Drive Level Only (RAID and similar)
-     - Drive Level only with :ref:`Erasure Coding <minio-erasure-coding>`
-     - Drive and Server-Level with :ref:`Erasure Coding <minio-erasure-coding>`
-
-   * - Scaling
-     - No
-     - :ref:`Server Pool Expansion <expand-minio-distributed>`
-     - :ref:`Server Pool Expansion <expand-minio-distributed>`
+   Multi-node/drive-level reliability with configurable tolerance for loss of up to 1/2 of all nodes/drives
+
+   Primary storage for AI/ML, Distributed Query, Analytics, and other Data Lake components
+
+   Scalable for Petabyte+ workloads - both storage capacity and performance
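+
+As a quick illustration of how the topologies differ at startup, the following commands show one minimal invocation of each.
+The hostnames, drive paths, and counts are placeholders, not recommendations:
+
+.. code-block:: shell
+
+   # SNSD: one node serving a single drive or folder
+   minio server /mnt/data
+
+   # SNMD: one node serving four drives via expansion notation
+   minio server /mnt/drive{1...4}
+
+   # MNMD: four nodes with four drives each; run the same
+   # command on every node in the deployment
+   minio server https://minio{1...4}.example.net/mnt/drive{1...4}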
 
 Site Replication
 ----------------
diff --git a/source/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.rst b/source/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.rst
index 1df8668e..ef27c2c1 100644
--- a/source/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.rst
+++ b/source/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.rst
@@ -11,33 +11,11 @@ Deploy MinIO: Multi-Node Multi-Drive
    :local:
    :depth: 1
 
-Overview
---------
-
-A distributed MinIO deployment consists of 4 or more drives/volumes managed by
-one or more :mc:`minio server` processes, where the processes manage pooling the
-compute and storage resources into a single aggregated object storage resource.
-Each MinIO server has a complete picture of the distributed topology, such that
-an application can connect to any node in the deployment and perform S3
-operations.
-
-Distributed deployments implicitly enable :ref:`erasure coding
-<minio-erasure-coding>`, MinIO's data redundancy and availability feature that
-allows deployments to automatically reconstruct objects on-the-fly despite the
-loss of multiple drives or nodes in the cluster. Erasure coding provides
-object-level healing with less overhead than adjacent technologies such as RAID
-or replication.
-
-Depending on the configured :ref:`erasure code parity <minio-ec-parity>`, a
-distributed deployment with ``m`` servers and ``n`` disks per server can
-continue serving read and write operations with only ``m/2`` servers or
-``m*n/2`` drives online and accessible.
-
-Distributed deployments also support the following features:
-
-- :ref:`Server-Side Object Replication <minio-bucket-replication>`
-- :ref:`Write-Once Read-Many Locking <minio-object-locking>`
-- :ref:`Object Versioning <minio-bucket-versioning>`
+The procedures on this page cover deploying MinIO in a Multi-Node Multi-Drive (MNMD) or "Distributed" configuration.
+|MNMD| deployments provide enterprise-grade performance, availability, and scalability and are the recommended topology for all production workloads.
+
+|MNMD| deployments support :ref:`erasure coding <minio-erasure-coding>` configurations which tolerate the loss of up to half the nodes or drives in the deployment while continuing to serve read operations.
+Use the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator?ref=docs>`__ when planning and designing your MinIO deployment to explore the effect of erasure code settings on your intended topology.
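+
+As a rough sketch of the arithmetic involved, consider 4 nodes with 4 drives each: the 16 drives form one erasure set, and a parity setting of ``EC:4`` stripes each object as 12 data and 4 parity shards, so objects remain readable with up to 4 drives in the set offline.
+The hostnames, drive counts, and the ``EC:4`` value below are illustrative, not recommendations:
+
+.. code-block:: shell
+
+   # Set the parity for the default storage class before startup;
+   # apply the same value on every node in the deployment.
+   export MINIO_STORAGE_CLASS_STANDARD="EC:4"
+
+   # Start the same topology on all four nodes.
+   minio server https://minio{1...4}.example.net/mnt/drive{1...4}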
 
 .. _deploy-minio-distributed-prereqs:
diff --git a/source/operations/install-deploy-manage/deploy-minio-single-node-multi-drive.rst b/source/operations/install-deploy-manage/deploy-minio-single-node-multi-drive.rst
index 861000aa..0293d957 100644
--- a/source/operations/install-deploy-manage/deploy-minio-single-node-multi-drive.rst
+++ b/source/operations/install-deploy-manage/deploy-minio-single-node-multi-drive.rst
@@ -11,11 +11,11 @@ Deploy MinIO: Single-Node Multi-Drive
    :depth: 2
 
 The procedures on this page cover deploying MinIO in a Single-Node Multi-Drive (SNMD) configuration.
-This topology provides increased drive-level reliability and failover protection as compared to :ref:`Single-Node Single-Drive (SNSD) deployments <minio-snsd>`.
+|SNMD| deployments provide drive-level reliability and failover/recovery with performance and scaling limitations imposed by the single node.
 
 .. cond:: linux or macos or windows
 
-   For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <deploy-minio-distributed>` topology.
+   For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <deploy-minio-distributed>` topology for enterprise-grade performance, availability, and scalability.
 
 .. cond:: container
diff --git a/source/operations/install-deploy-manage/deploy-minio-single-node-single-drive.rst b/source/operations/install-deploy-manage/deploy-minio-single-node-single-drive.rst
index 3ec302bc..542f33b7 100644
--- a/source/operations/install-deploy-manage/deploy-minio-single-node-single-drive.rst
+++ b/source/operations/install-deploy-manage/deploy-minio-single-node-single-drive.rst
@@ -11,7 +11,7 @@ Deploy MinIO: Single-Node Single-Drive
    :depth: 2
 
 The procedures on this page cover deploying MinIO in a Single-Node Single-Drive (SNSD) configuration for early development and evaluation.
-This mode was previously called :guilabel:`Standalone Mode` or 'filesystem' mode.
+|SNSD| deployments provide no added reliability or availability beyond what the underlying storage volume implements (RAID, LVM, ZFS, etc.).
 
 Starting with :minio-release:`RELEASE.2022-06-02T02-11-04Z`, MinIO implements a zero-parity erasure coded backend for single-node single-drive deployments.
 This feature allows access to :ref:`erasure coding dependent features <minio-erasure-coding>` without the requirement of multiple drives.
@@ -27,6 +27,13 @@ See the documentation on :ref:`SNSD behavior with pre-existing data
 
 For extended development or production environments, deploy MinIO in a :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology
 
+.. important::
+
+   :minio-release:`RELEASE.2022-10-29T06-21-33Z` fully removes the `deprecated Gateway/Filesystem <https://blog.min.io/deprecation-of-the-minio-gateway/>`__ backends.
+   MinIO returns an error if it starts up and detects existing Filesystem backend files.
+
+   To migrate from an FS-backend deployment, use :mc:`mc mirror` or :mc:`mc cp` to copy your data over to a new MinIO |SNSD| deployment.
+   You should also recreate any necessary users, groups, policies, and bucket configurations on the |SNSD| deployment.
 
 .. _minio-snsd-pre-existing-data:
@@ -51,12 +58,14 @@ The following table lists the possible storage volume states and MinIO behavior:
 
    * - Existing |SNSD| zero-parity objects and MinIO backend data
      - MinIO resumes in |SNSD| mode
 
-   * - Existing filesystem folders, files, and MinIO backend data
-     - MinIO resumes in the legacy filesystem ("Standalone") mode with no erasure-coding features
-
    * - Existing filesystem folders, files, but **no** MinIO backend data
      - MinIO returns an error and does not start
 
+   * - Existing filesystem folders, files, and legacy "FS-mode" backend data
+     - MinIO returns an error and does not start
+
+       .. versionchanged:: RELEASE.2022-10-29T06-21-33Z
+
 .. _deploy-minio-standalone:
 
 Deploy Single-Node Single-Drive MinIO
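 -------------------------------------
 
+The following is an illustrative quickstart only; the path, port, and credentials are placeholders and must be replaced with values appropriate for your environment:
+
+.. code-block:: shell
+
+   # Placeholder root credentials; never use defaults in production
+   export MINIO_ROOT_USER=minioadmin
+   export MINIO_ROOT_PASSWORD=change-me-long-random-secret
+
+   # Serve a single folder and expose the web console on port 9001
+   minio server /mnt/data --console-address ":9001"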