From d54a33d905aac1c2ef169d03dc9cc5207857db64 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 09:48:38 +0800 Subject: [PATCH 1/7] update ssl & adds-on --- .../1.introduction-to-nebula-operator.md | 2 +- .../3.1create-cluster-with-kubectl.md | 218 ++++++------- .../3.2create-cluster-with-helm.md | 6 + .../8.5.enable-ssl.md | 289 +++++++++++++----- mkdocs.yml | 7 +- 5 files changed, 316 insertions(+), 206 deletions(-) diff --git a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md index 3d6223193b1..75249e2d530 100644 --- a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md @@ -41,7 +41,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra | NebulaGraph | NebulaGraph Operator | | ------------- | -------------------- | -| 3.5.x | 1.5.0, 1.6.0 | +| 3.5.x | 1.5.0, 1.6.1 | | 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 | | 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 | | 2.5.x ~ 2.6.x | 0.9.0 | diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 41e8199ef7b..6bf46523261 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -108,72 +108,6 @@ The following example shows how to create a NebulaGraph cluster by creating a cl === "Cluster with Zones" NebulaGraph Operator supports creating a cluster with [Zones](../../4.deployment-and-installation/5.zone.md). - - You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. 
For more information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | - |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| - |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| - |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| - |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. For example: zone1,zone2,zone3.
**Zone names CANNOT be modified once be set.**| - |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage nodes in the same zone.
When set to `true`, the query is sent to the storage nodes in the same zone. If reading fails in that Zone, it will decide based on `stick_to_intra_zone_on_failure` whether to read the leader partition replica data from other Zones. | - |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if unable to find the requested partitions in the same zone. When set to `true`, if unable to find the partition replica in that Zone, it does not read data from other Zones.| - - ???+ note "Learn more about Zones in NebulaGraph Operator" - - **Understanding NebulaGraph's Zone Feature** - - NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-zone-reading allows for increased availability, because replicas of a partition spread out among different zones. - - **Configuring NebulaGraph Zones** - - To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. 
For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this: - - ```yaml - spec: - metad: - config: - zone_list: az1,az2,az3 - ``` - - **Operator's Use of Zone Information** - - NebulaGraph Operator leverages Kubernetes' [TopoloySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods. Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label. - - For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information. - - **Prioritizing Intra-Zone Data Access** - - By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones. - - ```yaml - spec: - alpineImage: reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest - graphd: - config: - prioritize_intra_zone_reading: "true" - stick_to_intra_zone_on_failure: "false" - ``` - - **Zone Mapping for Resilience** - - Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `-graphd|storaged-zone`. 
This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed. - - !!! caution - - DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior. - - - Other optional parameters for the enterprise edition are as follows: - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| - |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| - |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). | - ??? info "Expand to view sample cluster configurations" @@ -184,7 +118,11 @@ The following example shows how to create a NebulaGraph cluster by creating a cl name: nebula namespace: default spec: + # Used to obtain the Zone information where nodes are located. alpineImage: "reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest" + # Used for backup and recovery as well as log cleanup functions. + # If you do not customize this configuration, + # the default configuration will be used. agent: image: reg.vesoft-inc.com/cloud-dev/nebula-agent version: v3.6.0-sc @@ -192,82 +130,22 @@ The following example shows how to create a NebulaGraph cluster by creating a cl image: vesoft/nebula-stats-exporter replicas: 1 maxRequests: 20 + # Used to create a console container, + # which is used to connect to the NebulaGraph cluster. console: version: "nightly" graphd: config: + # The following parameters are required for creating a cluster with Zones. 
accept_partial_success: "true" - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - key_path: certs/server.key - enable_graph_ssl: "true" prioritize_intra_zone_reading: "true" - stick_to_intra_zone_on_failure: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" + # The following parameters are required for collecting logs. logtostderr: "1" redirect_stdout: "false" stderrthreshold: "0" - initContainers: - - name: init-auth-sidecar - imagePullPolicy: IfNotPresent - image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 - env: - - name: AUTH_SIDECAR_CONFIG_FILENAME - value: sidecar-init - volumeMounts: - - name: credentials - mountPath: /credentials - - name: auth-sidecar-config - mountPath: /etc/config - sidecarContainers: - - name: auth-sidecar - image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 - imagePullPolicy: IfNotPresent - resources: - requests: - cpu: 100m - memory: 500Mi - env: - - name: LOCAL_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - - name: LOCAL_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: LOCAL_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - readinessProbe: - httpGet: - path: /ready - port: 8086 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - failureThreshold: 3 - livenessProbe: - httpGet: - path: /live - port: 8086 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - failureThreshold: 3 - volumeMounts: - - name: credentials - mountPath: /credentials - - name: auth-sidecar-config - mountPath: /etc/config - volumes: - - name: credentials - emptyDir: - medium: Memory - volumeMounts: - - name: credentials - mountPath: /usr/local/nebula/certs resources: requests: cpu: "2" @@ -286,6 +164,8 @@ The following example shows how to create a NebulaGraph cluster by creating a cl # Zone names CANNOT be modified once 
set. # It's suggested to set an odd number of Zones. zone_list: az1,az2,az3 + validate_session_timestamp: "false" + # LM access address and port number. licenseManagerURL: "192.168.8.xxx:9119" resources: requests: @@ -332,11 +212,83 @@ The following example shows how to create a NebulaGraph cluster by creating a cl imagePullPolicy: Always imagePullSecrets: - name: nebula-image + # Evenly distribute storage Pods across Zones. + # Must be set when using Zones. topologySpreadConstraints: - topologyKey: "topology.kubernetes.io/zone" whenUnsatisfiable: "DoNotSchedule" + ``` + + !!! caution + + Make sure storage Pods are evenly distributed across zones before ingesting data by running `SHOW ZONES` in nebula-console. For zone-related commands, see [Zones](../../4.deployment-and-installation/5.zone.md). + + You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| + |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| + |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| + |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. For example: zone1,zone2,zone3.
**Zone names CANNOT be modified once set.**| + |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage pods in the same zone.
When set to `true`, queries are sent to the storage pods in the same Zone. If reading fails in that Zone, `stick_to_intra_zone_on_failure` determines whether to read leader partition replica data from other Zones. | + |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if the requested partitions cannot be found in the same Zone. When set to `true`, data is not read from other Zones if the partition replica cannot be found in that Zone.| + |`spec.topologySpreadConstraints[0].topologyKey`|``| A Kubernetes field that controls the distribution of storage Pods, ensuring that they are evenly spread across Zones.
To use the Zone feature, you must set the value to `topology.kubernetes.io/zone`. Run `kubectl get node --show-labels` to check the key. For more information, see [TopologySpread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#example-multiple-topologyspreadconstraints).| + + ???+ note "Learn more about Zones in NebulaGraph Operator" + + **Understanding NebulaGraph's Zone Feature** + + NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-zone reading allows for increased availability, because replicas of a partition are spread out among different Zones. + + **Configuring NebulaGraph Zones** + + To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this: + + ```yaml + spec: + metad: + config: + zone_list: az1,az2,az3 + ``` + + **Operator's Use of Zone Information** + + NebulaGraph Operator leverages Kubernetes' [TopologySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods.
Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label. + + For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information. + + **Prioritizing Intra-Zone Data Access** + + By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones. + + ```yaml + spec: + alpineImage: reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest + graphd: + config: + prioritize_intra_zone_reading: "true" + stick_to_intra_zone_on_failure: "false" ``` + **Zone Mapping for Resilience** + + Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `-graphd|storaged-zone`. This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed. + + !!! caution + + DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior. 
+ + Other optional parameters for the enterprise edition are as follows: + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| + |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| + |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). | + {{ ent.ent_end }} 1. Create a NebulaGraph cluster. diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index 31273375d2a..f47056ce657 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -127,9 +127,15 @@ --set nebula.metad.config.zone_list= \ --set nebula.graphd.config.prioritize_intra_zone_reading=true \ --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \ + # Evenly distribute the Pods of the Storage service across Zones. + --set nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone \ --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ + ``` + + !!! caution + + Before ingesting data, run `SHOW ZONES` in nebula-console to make sure storage Pods are evenly distributed across Zones. For zone-related commands, see [Zones](../../4.deployment-and-installation/5.zone.md).
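Shell quoting is easy to get wrong in `--set` flags like the ones above: Helm treats unescaped commas inside a value as list separators, and unquoted `[0]` brackets can be expanded as glob patterns by some shells. A sketch of how zone-related flags compose (the cluster name, Zone names, and namespace are placeholder values, and it is assumed the chart exposes the constraint under the `nebula.` prefix like the sibling parameters):

```shell
#!/bin/sh
# Placeholder values -- substitute your own cluster name, Zones, and namespace.
NEBULA_CLUSTER_NAME=nebula
NEBULA_CLUSTER_NAMESPACE=nebula

# Commas inside one --set value are escaped so Helm does not split the
# Zones into separate values; the [0] index is kept inside quotes so the
# shell does not treat the brackets as a glob pattern.
cmd="helm install ${NEBULA_CLUSTER_NAME} nebula-operator/nebula-cluster \
  --set nebula.metad.config.zone_list=az1\\,az2\\,az3 \
  --set nebula.graphd.config.prioritize_intra_zone_reading=true \
  --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \
  --set 'nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone' \
  --namespace ${NEBULA_CLUSTER_NAMESPACE}"

echo "$cmd"
```

The sketch only assembles and prints the command so it can be inspected before running; paste the printed command into a shell that has access to your cluster.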
{{ent.ent_end}} To view all configuration parameters of the NebulaGraph cluster, run the `helm show values nebula-operator/nebula-cluster` command or click [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml). diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md index b5f265c9845..6dc11311957 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md @@ -14,23 +14,20 @@ In the NebulaGraph environment running in Kubernetes, mutual TLS (mTLS) is used In the cluster created using Operator, the client and server use the same CA root certificate by default. -## Encryption policies +## Encryption scenarios -NebulaGraph provides three encryption policies for mTLS: +The following two scenarios are commonly used for encryption: -- Encryption of data transmission between the client and the Graph service. - - This policy only involves encryption between the client and the Graph service and does not encrypt data transmission between other services in the cluster. +- Encrypting communication between the client and the Graph service. -- Encrypt the data transmission between clients, the Graph service, the Meta service, and the Storage service. - - This policy encrypts data transmission between the client, Graph service, Meta service, and Storage service in the cluster. +- Encrypting communication between services, such as between the Graph and Meta services, between the Graph and Storage services, and between the Meta and Storage services. -- Encryption of data transmission related to the Meta service within the cluster.
- - This policy only involves encrypting data transmission related to the Meta service within the cluster and does not encrypt data transmission between other services or the client. + !!! note + + - The Graph service is the entry point for all client requests in NebulaGraph, and it must be able to communicate with the Meta service and the Storage service to complete those requests. + - The Storage and Meta services exchange heartbeat messages to ensure their availability and health, so they must be able to communicate with each other. -For different encryption policies, you need to configure different fields in the cluster configuration file. For more information, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). +For all encryption scenarios, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). ## mTLS with certificate hot-reloading @@ -38,7 +35,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The ### Sample configurations -??? info "Expand to view the sample configurations of mTLS" +??? info "View sample configurations of mTLS between the client and the Graph service" ```yaml apiVersion: apps.nebula-graph.io/v1alpha1 kind: NebulaCluster metadata: name: nebula spec: exporter: image: vesoft/nebula-stats-exporter replicas: 1 @@ -52,18 +49,11 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading.
The maxRequests: 20 graphd: config: - accept_partial_success: "true" ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_graph_ssl: "true" - enable_intra_zone_routing: "true" key_path: certs/server.key - logtostderr: "1" - redirect_stdout: "false" - stderrthreshold: "0" - stick_to_intra_zone_on_failure: "true" - timestamp_in_logfile_name: "false" + enable_graph_ssl: "true" initContainers: - name: init-auth-sidecar command: @@ -72,14 +62,14 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The args: - cp /certs/* /credentials/ imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials sidecarContainers: - name: auth-sidecar imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials @@ -158,13 +148,201 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The - name: nebula-image enablePVReclaim: true topologySpreadConstraints: + - topologyKey: "kubernetes.io/zone" + whenUnsatisfiable: "ScheduleAnyway" + ``` + +??? info "View sample configurations of mTLS between services" + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + spec: + exporter: + image: vesoft/nebula-stats-exporter + replicas: 1 + maxRequests: 20 + sslCerts: + clientSecret: "client-cert" + caSecret: "ca-cert" # The Secret name of the CA certificate. 
+ caCert: "root.crt" + graphd: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + resources: + requests: + cpu: "200m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-graphd-ent + version: v3.5.0-sc + metad: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + licenseManagerURL: "192.168.8.xx:9119" + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-metad-ent + 
version: v3.5.0-sc + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: local-path + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + storaged: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/xxx/xxx:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-storaged-ent + version: v3.5.0-sc + dataVolumeClaims: + - resources: + requests: + storage: 2Gi + storageClassName: local-path + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + enableAutoBalance: true + reference: + name: statefulsets.apps + version: v1 + schedulerName: default-scheduler + imagePullPolicy: Always + imagePullSecrets: + - name: nebula-image + enablePVReclaim: true + topologySpreadConstraints: - topologyKey: "kubernetes.io/hostname" whenUnsatisfiable: "ScheduleAnyway" ``` ### Configure `spec..config` -To enable mTLS between the client and the Graph service, configure the `spec.graphd.config` field in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/user/local/nebula`. 
**It's important to avoid using absolute paths to prevent path recognition errors.** +To enable mTLS between the client and the Graph service, add the following fields under `spec.graphd.config` in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/usr/local/nebula`. **It's important to avoid using absolute paths to prevent path recognition errors.** ```yaml spec: graphd: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_graph_ssl: "true" key_path: certs/server.key + key_path: certs/server.key + enable_graph_ssl: "true" ``` -For the configurations of the other two authentication policies: - -- To enable mTLS between the client, the Graph service, the Meta service, and the Storage service: - - Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file. +To enable mTLS between services (Graph, Meta, and Storage), add the following fields under `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` respectively in the cluster configuration file. ```yaml spec: graph: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_ssl: "true" - key_path: certs/server.key - metad: - config: - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - enable_ssl: "true" key_path: certs/server.key - storaged: - config: - ca_client_path: certs/root.crt - ca_path: certs/root.crt - cert_path: certs/server.crt - enable_ssl: "true" - key_path: certs/server.key - ``` - -- To enable mTLS related to the Meta service: - - Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file.
```yaml spec: graph: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt enable_meta_ssl: "true" - key_path: certs/server.key + key_path: certs/server.key metad: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_meta_ssl: "true" key_path: certs/server.key + enable_meta_ssl: "true" + enable_storage_ssl: "true" storaged: config: ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt - enable_meta_ssl: "true" key_path: certs/server.key - ``` + enable_meta_ssl: "true" + enable_storage_ssl: "true" + ``` ### Configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` -`initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading. For the encryption scenario where only the Graph service needs to be encrypted, you need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`. +`initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading. + +- For the encryption scenario where only the Graph service needs to be encrypted, configure these fields under `spec.graphd.config`. +- For the encryption scenario where the Graph, Meta, and Storage services all need to be encrypted, configure these fields under `spec.graphd.config`, `spec.storaged.config`, and `spec.metad.config` respectively. #### `initContainers` The `initContainers` field is utilized to configure an init-container responsible for generating certificate files.
Note that the `volumeMounts` field specifies how a volume defined in `volumes` and shared with the NebulaGraph container is mounted with read and write access.

In the following example, `init-auth-sidecar` performs the task of copying files from the `certs` directory within the image to `/credentials`. After this task is completed, the init-container exits.

@@ -258,7 +409,7 @@ initContainers:
     args:
       - cp /certs/* /credentials/
   imagePullPolicy: Always
-  image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest
+  image: reg.vesoft-inc.com/xxx/xxx:latest
   volumeMounts:
     - name: credentials
       mountPath: /credentials
@@ -266,7 +417,7 @@ initContainers:
 
 #### `sidecarContainers`
 
-The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how the `credentials` volume is mounted, and this volume is shared with the NebulaGraph container.
+The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how a volume is mounted, and this volume is shared with the NebulaGraph container.
 
 In the example provided, the `auth-sidecar` container employs the `crond` process, which runs a crontab script every minute. This script checks the certificate's expiration status using the `openssl x509 -noout -enddate` command. 
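As a rough illustration of the check such a crontab script performs, the following standalone sketch generates a throwaway self-signed certificate and tests how close it is to expiry. The file paths, the 30-day renewal threshold, and the demo certificate are assumptions for demonstration only, not the actual contents of the sidecar image; GNU `date` is also assumed.

```shell
# Illustrative only: paths, threshold, and the demo certificate are assumptions.
# Create a throwaway self-signed certificate to act as the monitored certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 365 2>/dev/null

# Read the expiration date, as the crontab script does.
not_after=$(openssl x509 -noout -enddate -in /tmp/demo.crt | cut -d= -f2)

# Convert to epoch seconds (GNU date) and compare against a 30-day threshold.
end_epoch=$(date -d "$not_after" +%s)
now_epoch=$(date +%s)
if [ $((end_epoch - now_epoch)) -lt $((30 * 24 * 3600)) ]; then
  echo "certificate near expiration: regenerate and replace"
else
  echo "certificate still valid"
fi
```

In the real sidecar, the regenerate branch would write the new certificate files into the shared volume so that the services pick them up without a restart.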
@@ -276,7 +427,7 @@ Example: sidecarContainers: - name: auth-sidecar imagePullPolicy: Always - image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + image: reg.vesoft-inc.com/xxx/xxx:latest volumeMounts: - name: credentials mountPath: /credentials @@ -309,9 +460,9 @@ volumeMounts: ### Configure `sslCerts` -The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator). +When you enable mTLS between services, you still needs to set `spec.sslCerts`, because NebulaGraph Operator communicates with the Graph service, Meta service, and Storage service, -For the other two scenarios where the Graph service, Meta service, and Storage service need to be encrypted, and where only the Meta service needs to be encrypted, you not only need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`, `spec.storage.config`, and `spec.meta.config`, but also configure `spec.sslCerts`. +The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator). 
```yaml spec: diff --git a/mkdocs.yml b/mkdocs.yml index 8cd66dcffd9..f2d21c5e86b 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -252,8 +252,8 @@ extra: branch: release-1.2 tag: v1.2.0 operator: - release: 1.6.0 - tag: v1.6.0 + release: 1.6.1 + tag: v1.6.1 branch: release-1.6 upgrade_from: 3.5.0 upgrade_to: 3.5.x @@ -736,7 +736,8 @@ nav: - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md - Manage cluster logs: nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md #ent -#ent - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +#ent + - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md - Upgrade NebulaGraph clusters: nebula-operator/9.upgrade-nebula-cluster.md - Specify a rolling update strategy: nebula-operator/11.rolling-update-strategy.md #ent From a31ba6b624c75fcee3c34d1154775d4c4cfc8d88 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 09:54:42 +0800 Subject: [PATCH 2/7] Update mkdocs.yml --- mkdocs.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mkdocs.yml b/mkdocs.yml index f2d21c5e86b..f81869c66c8 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -65,6 +65,8 @@ markdown_extensions: # Plugins plugins: + - glightbox: + zoomable: true - search # This is the original mkdocs search plugin. To use algolia search, comment out this plugin. 
- macros: include_dir: docs-2.0/reuse/ From 9318394e78d6514e49be66258417a99805b5b4cf Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 09:56:33 +0800 Subject: [PATCH 3/7] Update requirements.txt --- requirements.txt | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 5ccd3ff5db4..4d27f6667a5 100644 --- a/requirements.txt +++ b/requirements.txt @@ -14,4 +14,5 @@ mkdocs-exclude mkdocs-redirects mkdocs-minify-plugin Markdown==3.3.7 -pyyaml \ No newline at end of file +pyyaml +mkdocs-glightbox \ No newline at end of file From 0072d466dec2222e830593d4e78a6ca594f2cbc6 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 12:31:21 +0800 Subject: [PATCH 4/7] comment fix --- .../3.1create-cluster-with-kubectl.md | 92 +++++++++---------- .../3.2create-cluster-with-helm.md | 2 +- .../8.5.enable-ssl.md | 62 +++++++++---- 3 files changed, 89 insertions(+), 67 deletions(-) diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 6bf46523261..9a99c1055ef 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -49,50 +49,13 @@ The following example shows how to create a NebulaGraph cluster by creating a cl 3. Create a file named `apps_v1alpha1_nebulacluster.yaml`. - - For a NebulaGraph Community cluster - - For the file content, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - ??? 
Info "Expand to show sample parameter descriptions" - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `metadata.name` | - | The name of the created NebulaGraph cluster. | - |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | - | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | - | `spec.graphd.service` | - | The Service configurations for the Graphd service. | - | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | - | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | - | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | - | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | - | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. 
When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | - | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| - | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | - |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| - | `spec.reference.name` | - | The name of the dependent controller. | - | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| - |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| {{ ent.ent_begin }} - - For a NebulaGraph Enterprise cluster + - To create a NebulaGraph Enterprise cluster Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. - !!! enterpriseonly - - Make sure that you have access to NebulaGraph Enterprise Edition images before pulling the image. - === "Cluster without Zones" You must set the following parameters in the configuration file for the enterprise edition. Other parameters can be changed as needed. 
For information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). @@ -100,7 +63,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl | Parameter | Default value | Description | | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.xxx:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| @@ -109,7 +72,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl NebulaGraph Operator supports creating a cluster with [Zones](../../4.deployment-and-installation/5.zone.md). - ??? info "Expand to view sample cluster configurations" + ??? info "Expand to view sample configurations of a cluster with Zones" ```yaml apiVersion: apps.nebula-graph.io/v1alpha1 @@ -119,15 +82,15 @@ The following example shows how to create a NebulaGraph cluster by creating a cl namespace: default spec: # Used to obtain the Zone information where nodes are located. 
- alpineImage: "reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest" + alpineImage: "reg.vesoft-inc.com/xxx/xxx:latest" # Used for backup and recovery as well as log cleanup functions. # If you do not customize this configuration, # the default configuration will be used. agent: - image: reg.vesoft-inc.com/cloud-dev/nebula-agent + image: reg.vesoft-inc.com/xxx/xxx version: v3.6.0-sc exporter: - image: vesoft/nebula-stats-exporter + image: vesoft/xxx replicas: 1 maxRequests: 20 # Used to create a console container, @@ -154,7 +117,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "2" memory: "2Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: config: @@ -175,7 +138,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "1" memory: "1Gi" replicas: 3 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -195,7 +158,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl cpu: "2" memory: "2Gi" replicas: 3 - image: reg.vesoft-inc.com/rc/nebula-storaged-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaims: - resources: @@ -227,7 +190,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl | Parameter | Default value | Description | | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. 
For example, `192.168.8.xxx:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| @@ -291,6 +254,41 @@ The following example shows how to create a NebulaGraph cluster by creating a cl {{ ent.ent_end }} + - To create a NebulaGraph Community cluster + + See [community cluster configurations](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). + + ??? Info "Expand to show parameter descriptions of community clusters" + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `metadata.name` | - | The name of the created NebulaGraph cluster. | + |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| + | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | + | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | + | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | + | `spec.graphd.service` | - | The Service configurations for the Graphd service. | + | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | + | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | + | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. 
| + | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | + | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | + | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| + | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | + | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | + | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | + | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| + | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | + | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| + | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | + |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| + | `spec.reference.name` | - | The name of the dependent controller. | + | `spec.schedulerName` | - | The scheduler name. | + | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | + |`spec.logRotate`| - |Log rotation configuration. 
For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| + |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| + + 1. Create a NebulaGraph cluster. ```bash diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index f47056ce657..3962ed3845d 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -128,7 +128,7 @@ --set nebula.graphd.config.prioritize_intra_zone_reading=true \ --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \ # Evenly distribute the Pods of the Storage service across Zones. - --set topologySpreadConstraints[0].topologyKey=kubernetes.io/zone + --set topologySpreadConstraints[0].topologyKey.kubernetes.io/zone \ --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ ``` diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md index 6dc11311957..848cc1b55f5 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md @@ -24,16 +24,16 @@ The following two scenarios are commonly used for encryption: !!! note - - The Graph service in NebulaGraph is the entry point for all client requests. The Graph service communicates with the Meta service and the Storage service to complete the client requests. Therefore, the Graph service needs to be able to communicate with the Meta service and the Storage service. 
- - The Storage and Meta services in NebulaGraph communicate with each other through heartbeat messages to ensure their availability and health. Therefore, the Storage service needs to be able to communicate with the Meta service and vice versa. + - The Graph service in NebulaGraph is the entry point for all client requests. The Graph service communicates with the Meta service and the Storage service to complete the client requests. Therefore, the Graph service needs to be able to communicate with the Meta service and the Storage service. + - The Storage and Meta services in NebulaGraph communicate with each other through heartbeat messages to ensure their availability and health. Therefore, the Storage service needs to be able to communicate with the Meta service and vice versa. For all encryption scenarios, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). ## mTLS with certificate hot-reloading -NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The following provides an example of the configuration file to enable mTLS between the client and the Graph service. +NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. -### Sample configurations +The following provides examples of the configuration file to enable mTLS between the client and the Graph service, and between services. ??? info "View sample configurations of mTLS between the client and the Graph service" @@ -44,16 +44,23 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The name: nebula spec: exporter: - image: vesoft/nebula-stats-exporter + image: vesoft/xxx replicas: 1 maxRequests: 20 graphd: config: + # The following parameters are used to enable mTLS between the client and the Graph service. 
ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt key_path: certs/server.key enable_graph_ssl: "true" + # The following parameters are required for creating a cluster with Zones. + accept_partial_success: "true" + prioritize_intra_zone_reading: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" initContainers: - name: init-auth-sidecar command: @@ -93,10 +100,14 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: - licenseManagerURL: "192.168.8.53:9119" + # Zone names CANNOT be modified once set. + # It's suggested to set an odd number of Zones. + zone_list: az1,az2,az3 + validate_session_timestamp: "false" + licenseManagerURL: "192.168.8.xxx:9119" resources: requests: cpu: "300m" @@ -105,7 +116,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -126,7 +137,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-storaged-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaims: - resources: @@ -142,14 +153,14 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The reference: name: statefulsets.apps version: v1 - schedulerName: default-scheduler + schedulerName: nebula-scheduler imagePullPolicy: Always imagePullSecrets: - name: nebula-image enablePVReclaim: true topologySpreadConstraints: - topologyKey: "kubernetes.io/zone" - whenUnsatisfiable: "ScheduleAnyway" + whenUnsatisfiable: "DoNotSchedule" ``` ??? 
info "View sample configurations of mTLS between services" @@ -161,21 +172,28 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The name: nebula spec: exporter: - image: vesoft/nebula-stats-exporter + image: vesoft/xxx replicas: 1 maxRequests: 20 sslCerts: clientSecret: "client-cert" - caSecret: "ca-cert" # The Secret name of the CA certificate. + caSecret: "ca-cert" caCert: "root.crt" graphd: config: + # The following parameters are used to enable mTLS between services. ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt key_path: certs/server.key enable_meta_ssl: "true" enable_storage_ssl: "true" + # The following parameters are required for creating a cluster with Zones. + accept_partial_success: "true" + prioritize_intra_zone_reading: "true" + sync_meta_when_use_space: "true" + stick_to_intra_zone_on_failure: "false" + session_reclaim_interval_secs: "300" initContainers: - name: init-auth-sidecar command: @@ -215,10 +233,15 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-graphd-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc metad: config: + # Zone names CANNOT be modified once set. + # It's suggested to set an odd number of Zones. + zone_list: az1,az2,az3 + validate_session_timestamp: "false" + # The following parameters are used to enable mTLS between services. ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt @@ -260,7 +283,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-metad-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaim: resources: @@ -274,6 +297,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. 
The storageClassName: local-path storaged: config: + # The following parameters are used to enable mTLS between services. ca_client_path: certs/root.crt ca_path: certs/root.crt cert_path: certs/server.crt @@ -314,7 +338,7 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The cpu: "1" memory: "1Gi" replicas: 1 - image: reg.vesoft-inc.com/rc/nebula-storaged-ent + image: reg.vesoft-inc.com/xxx/xxx version: v3.5.0-sc dataVolumeClaims: - resources: @@ -330,14 +354,14 @@ NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The reference: name: statefulsets.apps version: v1 - schedulerName: default-scheduler + schedulerName: nebula-scheduler imagePullPolicy: Always imagePullSecrets: - name: nebula-image enablePVReclaim: true topologySpreadConstraints: - - topologyKey: "kubernetes.io/hostname" - whenUnsatisfiable: "ScheduleAnyway" + - topologyKey: "kubernetes.io/zone" + whenUnsatisfiable: "DoNotSchedule" ``` ### Configure `spec..config` From 8355f3659a02faf291652d9640925f506fde6ab2 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 15:49:46 +0800 Subject: [PATCH 5/7] comment fix Update mkdocs.yml --- .../1.introduction-to-nebula-operator.md | 2 +- .../3.1create-cluster-with-kubectl.md | 5 ++--- .../3.2create-cluster-with-helm.md | 5 +++-- .../8.custom-cluster-configurations/8.5.enable-ssl.md | 10 +++++++--- mkdocs.yml | 6 ++---- 5 files changed, 15 insertions(+), 13 deletions(-) diff --git a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md index 75249e2d530..60e4a08998d 100644 --- a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md @@ -41,7 +41,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. 
NebulaGra | NebulaGraph | NebulaGraph Operator | | ------------- | -------------------- | -| 3.5.x | 1.5.0, 1.6.1 | +| 3.5.x | 1.5.0, 1.6.x | | 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 | | 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 | | 2.5.x ~ 2.6.x | 0.9.0 | diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 9a99c1055ef..c8c5b4b1c57 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -54,8 +54,6 @@ The following example shows how to create a NebulaGraph cluster by creating a cl - To create a NebulaGraph Enterprise cluster - Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. - === "Cluster without Zones" You must set the following parameters in the configuration file for the enterprise edition. Other parameters can be changed as needed. For information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). @@ -90,7 +88,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl image: reg.vesoft-inc.com/xxx/xxx version: v3.6.0-sc exporter: - image: vesoft/xxx + image: vesoft/nebula-stats-exporter replicas: 1 maxRequests: 20 # Used to create a console container, @@ -165,6 +163,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl requests: storage: 2Gi storageClassName: local-path + # Automatically balance storage data after scaling out. 
enableAutoBalance: true reference: name: statefulsets.apps diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index 3962ed3845d..bba7198cc10 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -128,7 +128,8 @@ --set nebula.graphd.config.prioritize_intra_zone_reading=true \ --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \ # Evenly distribute the Pods of the Storage service across Zones. - --set topologySpreadConstraints[0].topologyKey.kubernetes.io/zone \ + --set nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone \ + --set nebula.topologySpreadConstraints[0].whenUnsatisfiable=DoNotSchedule \ --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ ``` @@ -145,7 +146,7 @@ Use the `--set` argument to set configuration parameters for the cluster. For example, `--set nebula.storaged.replicas=3` will set the number of replicas for the Storage service in the cluster to 3. -7. Check the status of the NebulaGraph cluster you created. +1. Check the status of the NebulaGraph cluster you created. 
```bash kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md index 848cc1b55f5..db247654b62 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md @@ -44,7 +44,7 @@ The following provides examples of the configuration file to enable mTLS between name: nebula spec: exporter: - image: vesoft/xxx + image: vesoft/nebula-stats-exporter replicas: 1 maxRequests: 20 graphd: @@ -172,9 +172,10 @@ The following provides examples of the configuration file to enable mTLS between name: nebula spec: exporter: - image: vesoft/xxx + image: vesoft/nebula-stats-exporter replicas: 1 maxRequests: 20 + # The certificate files for NebulaGraph Operator to access Storage and Meta services. sslCerts: clientSecret: "client-cert" caSecret: "ca-cert" @@ -350,6 +351,7 @@ The following provides examples of the configuration file to enable mTLS between requests: storage: 1Gi storageClassName: local-path + # Automatically balance storage data after scaling out. enableAutoBalance: true reference: name: statefulsets.apps @@ -358,7 +360,9 @@ The following provides examples of the configuration file to enable mTLS between imagePullPolicy: Always imagePullSecrets: - name: nebula-image + # Whether to automatically delete PVCs when deleting a cluster. enablePVReclaim: true + # Used to evenly distribute Pods across Zones. 
topologySpreadConstraints: - topologyKey: "kubernetes.io/zone" whenUnsatisfiable: "DoNotSchedule" @@ -484,7 +488,7 @@ volumeMounts: ### Configure `sslCerts` -When you enable mTLS between services, you still needs to set `spec.sslCerts`, because NebulaGraph Operator communicates with the Graph service, Meta service, and Storage service, +When you enable mTLS between services, you still need to set `spec.sslCerts`, because NebulaGraph Operator communicates with the Meta service and Storage service. The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator). diff --git a/mkdocs.yml b/mkdocs.yml index f81869c66c8..73464c2262e 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -65,8 +65,6 @@ markdown_extensions: # Plugins plugins: - - glightbox: - zoomable: true - search # This is the original mkdocs search plugin. To use algolia search, comment out this plugin. 
- macros: include_dir: docs-2.0/reuse/ @@ -254,8 +252,8 @@ extra: branch: release-1.2 tag: v1.2.0 operator: - release: 1.6.1 - tag: v1.6.1 + release: 1.6.2 + tag: v1.6.2 branch: release-1.6 upgrade_from: 3.5.0 upgrade_to: 3.5.x From 977d90562879627950e5c8ce3051e76515a0e9ae Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 18:37:12 +0800 Subject: [PATCH 6/7] Update 3.1create-cluster-with-kubectl.md --- .../3.1create-cluster-with-kubectl.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index c8c5b4b1c57..8b70d2d24b4 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -192,11 +192,11 @@ The following example shows how to create a NebulaGraph cluster by creating a cl | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.xxx:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| - |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| + |`spec.alpineImage`|-|The Alpine Linux image, used to obtain the Zone information where nodes are located.| |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. 
For example: `zone1,zone2,zone3`.
**Zone names CANNOT be modified once set.**| |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage pods in the same zone.
When set to `true`, the query is sent to the storage pods in the same zone. If reading fails in that zone, `stick_to_intra_zone_on_failure` determines whether to read the leader partition replica data from other zones.| |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if the requested partitions cannot be found in the same zone. When set to `true`, if the partition replica cannot be found in that zone, data is not read from other zones.| - |`spec.topologySpreadConstraints[0].topologyKey`|``| It is a field in Kubernetes used to control the distribution of storage Pods. Its purpose is to ensure that your storage Pods are evenly spread across Zones.
To use the Zone feature, you must set the value to `topology.kubernetes.io/zone`. Run `kubectl get node --show-labels` to check the key. For more information, see [TopologySpread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#example-multiple-topologyspreadconstraints).| + |`spec.topologySpreadConstraints`|-| A Kubernetes field that controls the distribution of Storage Pods to ensure they are evenly spread across Zones.
**To use the Zone feature, you must set the value of `topologySpreadConstraints[0].topologyKey` to `topology.kubernetes.io/zone` and the value of `topologySpreadConstraints[0].whenUnsatisfiable` to `DoNotSchedule`**. Run `kubectl get node --show-labels` to check the key. For more information, see [TopologySpread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#example-multiple-topologyspreadconstraints).| ???+ note "Learn more about Zones in NebulaGraph Operator" From be380ef14cf891a371e8ab18efb7966d2c730a70 Mon Sep 17 00:00:00 2001 From: Abby <78209557+abby-cyber@users.noreply.github.com> Date: Thu, 21 Sep 2023 18:47:13 +0800 Subject: [PATCH 7/7] Update mkdocs.yml --- mkdocs.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mkdocs.yml b/mkdocs.yml index 73464c2262e..1ee1f8447e4 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -9,7 +9,7 @@ repo_url: 'https://github.com/vesoft-inc/nebula' copyright: Copyright © 2023 NebulaGraph # modify -edit_uri: 'https://github.com/vesoft-inc/nebula-docs/edit/v3.5.0-sc/docs-2.0/' +# edit_uri: 'https://github.com/vesoft-inc/nebula-docs/edit/v3.5.0-sc/docs-2.0/' theme: name: material
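Taken together, the Zone-related settings these patches document can be sketched as a minimal `NebulaCluster` spec fragment. This is an illustrative sketch, not a complete manifest: the zone names (`az1`–`az3`) are assumptions, and fields such as `image`, `licenseManagerURL`, replica counts, and storage settings from the full sample configuration are omitted.

```yaml
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
spec:
  # Required for the Zone feature: spread Storage Pods evenly across Zones.
  topologySpreadConstraints:
  - topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: "DoNotSchedule"
  metad:
    config:
      # Comma-separated list of zone names. Cannot be modified once set.
      zone_list: "az1,az2,az3"
  graphd:
    config:
      # Prefer reading from storage pods in the same zone.
      prioritize_intra_zone_reading: "true"
      # Do not fall back to other zones when the local replica is unavailable.
      stick_to_intra_zone_on_failure: "false"
```

The equivalent Helm flags (as updated in the `3.2create-cluster-with-helm.md` hunk above) set the same two constraint fields via `--set nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone` and `--set nebula.topologySpreadConstraints[0].whenUnsatisfiable=DoNotSchedule`.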