Merged
@@ -2,15 +2,13 @@
//
// * monitoring/cluster-monitoring/configuring-the-monitoring-stack.adoc

[id="configuring-a-persistent-volume-claim_{context}"]
= Configuring a persistent volume claim
[id="configuring-a-local-persistent-volume-claim_{context}"]
= Configuring a local persistent volume claim

For Prometheus or Alertmanager to use a persistent volume (PV), you must first configure a persistent volume claim (PVC).

.Prerequisites

* Make sure you have the necessary storage class configured.
// FIXME add link, potentially https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/block_storage
* Make sure you have the `cluster-monitoring-config` ConfigMap object with the `data/config.yaml` section.
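
+
If you need to create the `cluster-monitoring-config` object from scratch, a minimal sketch might look like the following; the `openshift-monitoring` namespace is the standard location for this ConfigMap, and the comment placeholder stands in for the per-component configuration shown later in this procedure:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # per-component configuration (for example, volumeClaimTemplate)
    # goes here
----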

.Procedure
@@ -40,12 +38,12 @@ data:
storageClassName: *_storage class_*
resources:
requests:
storage: *_40Gi_*
storage: *_amount of storage_*
Contributor Author
Thank you, implemented this in #16978.

----
+
See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`.
+
For example, to configure a PVC that claims any configured {product-title} block PV as a persistent storage for Prometheus, use:
For example, to configure a PVC that claims local persistent storage for Prometheus, use:
+
[source,yaml,subs=quotes]
----
@@ -59,15 +57,17 @@ data:
*prometheusK8s*:
volumeClaimTemplate:
metadata:
name: *my-prometheus-claim*
name: *localpvc*
spec:
storageClassName: *gluster-block*
storageClassName: *local-storage*
resources:
requests:
storage: *40Gi*
----
+
And to configure a PVC that claims any configured {product-title} block PV as a persistent storage for Alertmanager, you can use:
In the above example, the storage class created by the Local Storage Operator is called `local-storage`.
+
To configure a PVC that claims local persistent storage for Alertmanager, use:
+
[source,yaml,subs=quotes]
----
@@ -81,9 +81,9 @@ data:
*alertmanagerMain*:
volumeClaimTemplate:
metadata:
name: *my-alertmanager-claim*
name: *localpvc*
spec:
storageClassName: *gluster-block*
storageClassName: *local-storage*
resources:
requests:
storage: *40Gi*
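----
+
For reference, the `local-storage` storage class used in these examples is typically created by the Local Storage Operator from a `LocalVolume` resource. The following is a minimal sketch only; the resource name `local-disks`, the operator namespace, and the device path `/dev/vdb` are illustrative assumptions that you must adjust for your cluster:
+
[source,yaml]
----
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks                     # illustrative name
  namespace: openshift-local-storage    # assumption: namespace the operator watches
spec:
  storageClassDevices:
  - storageClassName: local-storage     # matches storageClassName in the PVCs
    volumeMode: Block                   # monitoring requires block-type storage
    devicePaths:
    - /dev/vdb                          # assumption: local disk present on the node
----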
@@ -25,7 +25,7 @@ include::modules/monitoring-moving-monitoring-components-to-different-nodes.adoc
[id="configuring-persistent-storage"]
== Configuring persistent storage

Running cluster monitoring with persistent storage means that your metrics are stored to a Persistent Volume and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded from data loss. For production environments, it is highly recommended to configure persistent storage.
Running cluster monitoring with persistent storage means that your metrics are stored in a persistent volume and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded from data loss. For production environments, it is highly recommended to configure persistent storage. Because of the high I/O demands, it is advantageous to use local storage.

[IMPORTANT]
====
@@ -34,11 +34,13 @@ In {product-title} 4.1 deployed on bare metal, Prometheus and Alertmanager canno

.Prerequisites

* Dedicate sufficient persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see xref:../../scalability_and_performance/scaling-cluster-monitoring-operator.adoc#prometheus-database-storage-requirements[Prometheus database storage requirements].
* Unless you enable dynamically-provisioned storage, make sure you have a Persistent Volume (PV) ready to be claimed by the Persistent Volume Claim (PVC), one PV for each replica. Since Prometheus has two replicas and Alertmanager has three replicas, you need five PVs to support the entire monitoring stack.
* Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see xref:../../scalability_and_performance/scaling-cluster-monitoring-operator.adoc#prometheus-database-storage-requirements[Prometheus database storage requirements].
* Make sure you have a Persistent Volume (PV) ready to be claimed by the Persistent Volume Claim (PVC), one PV for each replica. Since Prometheus has two replicas and Alertmanager has three replicas, you need five PVs to support the entire monitoring stack. The Persistent Volumes should be available from the Local Storage Operator. This does not apply if you enable dynamically provisioned storage.
* Use the block type of storage.
// FIXME link
* link:https://osdocs-486\--ocpdocs.netlify.com/openshift-enterprise/latest/storage/persistent-storage/persistent-storage-local.html[Configure local persistent storage.]
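
+
If you provision the PVs manually instead of through the Local Storage Operator, each local block PV might look like the following sketch. The PV name, device path, and node hostname are illustrative assumptions; the storage class and capacity mirror the PVC examples in this guide:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-prometheus-0     # illustrative name
spec:
  capacity:
    storage: 40Gi
  volumeMode: Block               # monitoring requires block-type storage
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /dev/vdb                # assumption: local block device on the node
  nodeAffinity:                   # pins the PV to the node that owns the device
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0              # assumption: hostname of that node
----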

include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2]
include::modules/monitoring-configuring-a-local-persistent-volume-claim.adoc[leveloffset=+2]
include::modules/monitoring-modifying-retention-time-for-prometheus-metrics-data.adoc[leveloffset=+2]

// .Additional resources