Commit ebad078

committed
No public description
PiperOrigin-RevId: 659608688
1 parent 0c284ff commit ebad078

File tree: 10 files changed, +280 −78 lines changed

modules/sap_hana/README.md

Lines changed: 6 additions & 0 deletions
@@ -49,8 +49,11 @@ https://cloud.google.com/docs/terraform
 | data\_disk\_size\_override | Warning, do not use unless instructed or aware of the implications of using this setting. Overrides the default size for the data disk(s), that is based off of the machine\_type. | `number` | `null` | no |
 | data\_disk\_throughput\_override | Warning, do not use unless instructed or aware of the implications of using this setting. Directly sets the throughput in MB/s that the data disk(s) will use. Has no effect if not using a disk type that supports it. | `number` | `null` | no |
 | data\_disk\_type\_override | Warning, do not use unless instructed or aware of the implications of using this setting. Override the 'default\_disk\_type' for the data disk. | `string` | `""` | no |
+| data\_stripe\_size | Optional - default is 256k. Stripe size for data volume striping (if enable\_data\_striping = true). | `string` | `"256k"` | no |
 | disk\_type | Optional - The default disk type to use for disk(s) containing log and data volumes. The default is pd-ssd, except for machines that do not support PD, in which case the default is hyperdisk-extreme. Not all disk are supported on all machine types - see https://cloud.google.com/compute/docs/disks/ for details. | `string` | `""` | no |
+| enable\_data\_striping | Optional - default is false. Enable LVM striping of data volume across multiple disks. | `bool` | `false` | no |
 | enable\_fast\_restart | Optional - The default is true. If set enables HANA Fast Restart. | `bool` | `true` | no |
+| enable\_log\_striping | Optional - default is false. Enable LVM striping of log volume across multiple disks. | `bool` | `false` | no |
 | hyperdisk\_balanced\_iops\_default | Optional - default is 3000. Number of IOPS that is set for each disk of type Hyperdisk-balanced (except for boot/usrsap/shared disks). | `number` | `3000` | no |
 | hyperdisk\_balanced\_throughput\_default | Optional - default is 750. Throughput in MB/s that is set for each disk of type Hyperdisk-balanced (except for boot/usrsap/shared disks). | `number` | `750` | no |
 | include\_backup\_disk | Optional - The default is true. If set creates a disk for backups. | `bool` | `true` | no |
@@ -61,9 +64,12 @@ https://cloud.google.com/docs/terraform
 | log\_disk\_size\_override | Warning, do not use unless instructed or aware of the implications of using this setting. Overrides the default size for the log disk(s), that is based off of the machine\_type. | `number` | `null` | no |
 | log\_disk\_throughput\_override | Warning, do not use unless instructed or aware of the implications of using this setting. Directly sets the throughput in MB/s that the log disk(s) will use. Has no effect if not using a disk type that supports it. | `number` | `null` | no |
 | log\_disk\_type\_override | Warning, do not use unless instructed or aware of the implications of using this setting. Override the 'default\_disk\_type' for the log disk. | `string` | `""` | no |
+| log\_stripe\_size | Optional - default is 64k. Stripe size for log volume striping (if enable\_log\_striping = true). | `string` | `"64k"` | no |
 | machine\_type | Machine type for the instances. | `string` | n/a | yes |
 | network\_tags | OPTIONAL - Network tags can be associated to your instance on deployment. This can be used for firewalling or routing purposes. | `list(string)` | `[]` | no |
 | nic\_type | Optional - This value determines the type of NIC to use, valid options are GVNIC and VIRTIO\_NET. If choosing GVNIC make sure that it is supported by your OS choice here https://cloud.google.com/compute/docs/images/os-details#networking. | `string` | `""` | no |
+| number\_data\_disks | Optional - default is 2. Number of disks to use for data volume striping (if enable\_data\_striping = true). | `number` | `2` | no |
+| number\_log\_disks | Optional - default is 2. Number of disks to use for log volume striping (if enable\_log\_striping = true). | `number` | `2` | no |
 | post\_deployment\_script | OPTIONAL - gs:// or https:// location of a script to execute on the created VM's post deployment. | `string` | `""` | no |
 | primary\_startup\_url | Startup script to be executed when the VM boots, should not be overridden. | `string` | `"curl -s https://storage.googleapis.com/cloudsapdeploy/terraform/latest/terraform/sap_hana/hana_startup.sh | bash -s https://storage.googleapis.com/cloudsapdeploy/terraform/latest/terraform"` | no |
 | project\_id | Project id where the instances will be created. | `string` | n/a | yes |

modules/sap_hana/main.tf

Lines changed: 74 additions & 31 deletions
@@ -80,6 +80,8 @@ locals {
   default_hyperdisk_extreme  = (length(regexall("^x4-", var.machine_type)) > 0)
   default_hyperdisk_balanced = (length(regexall("^c4-|^c3-.*-metal", var.machine_type)) > 0)
   only_hyperdisks_supported  = local.default_hyperdisk_extreme || local.default_hyperdisk_balanced
+  num_data_disks             = var.enable_data_striping ? var.number_data_disks : 1
+  num_log_disks              = var.enable_log_striping ? var.number_log_disks : 1

   # Minimum disk sizes are used to ensure throughput. Extreme disks don't need this.
   # All 'over provisioned' capacity is to go onto the data disk.
@@ -113,10 +115,11 @@ locals {

   unified_pd_size        = var.unified_disk_size_override == null ? local.pd_size : var.unified_disk_size_override
   unified_worker_pd_size = var.unified_worker_disk_size_override == null ? local.pd_size_worker : var.unified_worker_disk_size_override
-  data_pd_size           = var.data_disk_size_override == null ? local.hana_data_size : var.data_disk_size_override
-  log_pd_size            = var.log_disk_size_override == null ? local.hana_log_size : var.log_disk_size_override
-  shared_pd_size         = var.shared_disk_size_override == null ? local.hana_shared_size : var.shared_disk_size_override
-  usrsap_pd_size         = var.usrsap_disk_size_override == null ? local.hana_usrsap_size : var.usrsap_disk_size_override
+  # for striping: divide data/log size by number of disks
+  data_pd_size   = var.data_disk_size_override == null ? ceil(local.hana_data_size / local.num_data_disks) : ceil(var.data_disk_size_override / local.num_data_disks)
+  log_pd_size    = var.log_disk_size_override == null ? ceil(local.hana_log_size / local.num_log_disks) : ceil(var.log_disk_size_override / local.num_log_disks)
+  shared_pd_size = var.shared_disk_size_override == null ? local.hana_shared_size : var.shared_disk_size_override
+  usrsap_pd_size = var.usrsap_disk_size_override == null ? local.hana_usrsap_size : var.usrsap_disk_size_override

   # Disk types
   final_data_disk_type = var.data_disk_type_override == "" ? local.final_disk_type : var.data_disk_type_override
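The size locals above split the target volume size evenly across the stripe-set disks, rounding up so the stripe set never comes out smaller than requested. A minimal Python sketch of that arithmetic (the 595 GB figure is a hypothetical HANA data volume size, not a value from the module):

```python
import math

def per_disk_size(total_gb, override_gb, num_disks):
    """Mirror of the data_pd_size/log_pd_size locals: use the override
    when given, then divide by the number of striped disks, rounded up."""
    base = total_gb if override_gb is None else override_gb
    return math.ceil(base / num_disks)

# Striping disabled: num_disks is 1 and the size passes through unchanged.
print(per_disk_size(595, None, 1))  # -> 595
# 2-way data striping: a 595 GB volume becomes two 298 GB disks.
print(per_disk_size(595, None, 2))  # -> 298
```

Note that the rounding means the aggregate capacity can slightly exceed the requested size (2 × 298 = 596 GB here), which errs on the safe side.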
@@ -129,12 +132,12 @@ locals {

   # Disk IOPS
   hdx_iops_map = {
-    "data"    = max(10000, local.data_pd_size * 2)
-    "log"     = max(10000, local.log_pd_size * 2)
+    "data"    = max(10000, local.data_pd_size * 2 * local.num_data_disks)
+    "log"     = max(10000, local.log_pd_size * 2 * local.num_log_disks)
     "shared"  = null
     "usrsap"  = null
-    "unified" = max(10000, local.data_pd_size * 2) + max(10000, local.log_pd_size * 2)
-    "worker"  = max(10000, local.data_pd_size * 2) + max(10000, local.log_pd_size * 2)
+    "unified" = max(10000, local.data_pd_size * 2 * local.num_data_disks) + max(10000, local.log_pd_size * 2 * local.num_log_disks)
+    "worker"  = max(10000, local.data_pd_size * 2 * local.num_data_disks) + max(10000, local.log_pd_size * 2 * local.num_log_disks)
     "backup"  = max(10000, 2 * local.backup_size)
   }
   hdb_iops_map = {
@@ -163,8 +166,13 @@ locals {
    "hyperdisk-extreme" = local.hdx_iops_map
  }

-  final_data_iops = var.data_disk_iops_override == null ? local.iops_map[local.final_data_disk_type]["data"] : var.data_disk_iops_override
-  final_log_iops  = var.log_disk_iops_override == null ? local.iops_map[local.final_log_disk_type]["log"] : var.log_disk_iops_override
+  # for striping: divide data/log IOPS by number of disks
+  final_data_iops = (var.data_disk_iops_override == null ?
+    (local.iops_map[local.final_data_disk_type]["data"] == null ? null : ceil(local.iops_map[local.final_data_disk_type]["data"] / local.num_data_disks)
+  ) : ceil(var.data_disk_iops_override / local.num_data_disks))
+  final_log_iops = (var.log_disk_iops_override == null ?
+    (local.iops_map[local.final_log_disk_type]["log"] == null ? null : ceil(local.iops_map[local.final_log_disk_type]["log"] / local.num_log_disks)
+  ) : ceil(var.log_disk_iops_override / local.num_log_disks))
   final_shared_iops  = var.shared_disk_iops_override == null ? local.iops_map[local.final_shared_disk_type]["shared"] : var.shared_disk_iops_override
   final_usrsap_iops  = var.usrsap_disk_iops_override == null ? local.iops_map[local.final_usrsap_disk_type]["usrsap"] : var.usrsap_disk_iops_override
   final_unified_iops = var.unified_disk_iops_override == null ? local.iops_map[local.final_disk_type]["unified"] : var.unified_disk_iops_override
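The per-disk IOPS expressions add a null guard because disk types without provisioned IOPS (for example pd-ssd) carry `null` in the map and must stay `null` rather than be divided. A Python sketch of the same logic (function name and sample numbers are mine):

```python
import math

def final_iops(map_value, override, num_disks):
    """Sketch of final_data_iops/final_log_iops: an explicit override is
    split across the striped disks; otherwise the looked-up map value is
    split, with None (no provisioned IOPS) passed through untouched."""
    if override is not None:
        return math.ceil(override / num_disks)
    if map_value is None:
        return None
    return math.ceil(map_value / num_disks)

print(final_iops(10000, None, 2))  # -> 5000 per striped disk
print(final_iops(None, None, 2))   # -> None (type without provisioned IOPS)
```

The throughput locals a few hunks below follow the identical pattern, only reading from `throughput_map` instead of `iops_map`.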
@@ -198,8 +206,13 @@ locals {
    "hyperdisk-extreme" = local.null_throughput_map
  }

-  final_data_throughput = var.data_disk_throughput_override == null ? local.throughput_map[local.final_data_disk_type]["data"] : var.data_disk_throughput_override
-  final_log_throughput  = var.log_disk_throughput_override == null ? local.throughput_map[local.final_log_disk_type]["log"] : var.log_disk_throughput_override
+  # for striping: divide throughput by number of disks
+  final_data_throughput = (var.data_disk_throughput_override == null ?
+    (local.throughput_map[local.final_data_disk_type]["data"] == null ? null : ceil(local.throughput_map[local.final_data_disk_type]["data"] / local.num_data_disks)
+  ) : ceil(var.data_disk_throughput_override / local.num_data_disks))
+  final_log_throughput = (var.log_disk_throughput_override == null ?
+    (local.throughput_map[local.final_log_disk_type]["log"] == null ? null : ceil(local.throughput_map[local.final_log_disk_type]["log"] / local.num_log_disks)
+  ) : ceil(var.log_disk_throughput_override / local.num_log_disks))
   final_shared_throughput  = var.shared_disk_throughput_override == null ? local.throughput_map[local.final_shared_disk_type]["shared"] : var.shared_disk_throughput_override
   final_usrsap_throughput  = var.usrsap_disk_throughput_override == null ? local.throughput_map[local.final_usrsap_disk_type]["usrsap"] : var.usrsap_disk_throughput_override
   final_unified_throughput = var.unified_disk_throughput_override == null ? local.throughput_map[local.final_disk_type]["unified"] : var.unified_disk_throughput_override
@@ -254,6 +267,21 @@ data "assert_test" "verify_hyperdisk_usage_for_backup_disk" {
   test  = local.only_hyperdisks_supported && local.use_backup_disk ? (length(regexall("hyperdisk", local.final_backup_disk_type)) > 0) : true
   throw = "The selected machine type only works with hyperdisks. Set 'backup_disk_type' accordingly, e.g. 'backup_disk_type = hyperdisk-balanced'"
 }
+# tflint-ignore: terraform_unused_declarations
+data "assert_test" "striping_with_split_disk" {
+  test  = ((var.enable_data_striping || var.enable_log_striping) && !var.use_single_shared_data_log_disk) || !(var.enable_data_striping || var.enable_log_striping)
+  throw = "Striping is not supported if log and data are on the same disk(s). To use striping set 'use_single_shared_data_log_disk = false'"
+}
+# tflint-ignore: terraform_unused_declarations
+data "validation_warning" "warn_data_striping" {
+  condition = var.enable_data_striping
+  summary   = "Data striping is only intended for cases where the machine level limits are higher than the hyperdisk disk level limits. Refer to https://cloud.google.com/compute/docs/disks/hyperdisks#hd-performance-limits"
+}
+# tflint-ignore: terraform_unused_declarations
+data "validation_warning" "warn_log_striping" {
+  condition = var.enable_log_striping
+  summary   = "Log striping is not a recommended deployment option."
+}

 ################################################################################
 # disks
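The `striping_with_split_disk` assertion encodes "striping implies split data/log disks". A small Python check of that boolean (the function name is mine; the expression is the `test` condition verbatim, and the loop confirms it simplifies to "no striping, or disks are split"):

```python
from itertools import product

def striping_with_split_disk_ok(data_striping, log_striping, single_disk):
    """The assert_test condition: allow striping only when data and log
    are not sharing a single disk."""
    striping = data_striping or log_striping
    return (striping and not single_disk) or not striping

# Exhaustive check over all 8 input combinations.
for d, l, s in product([False, True], repeat=3):
    assert striping_with_split_disk_ok(d, l, s) == (not (d or l) or not s)
```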
@@ -298,20 +326,25 @@ resource "google_compute_disk" "sap_hana_unified_worker_disks" {
 }

 # Split data/log/sap disks
+# name without striping: 00001, 00002, ...
+# name with striping: 00001-01, 00001-02, 00001-03, 00002-01, 00002-02, 00002-03, ...
 resource "google_compute_disk" "sap_hana_data_disks" {
-  count = var.use_single_shared_data_log_disk ? 0 : var.sap_hana_scaleout_nodes + 1
-  name  = format("${var.instance_name}-data%05d", count.index + 1)
+  count = var.use_single_shared_data_log_disk ? 0 : (var.sap_hana_scaleout_nodes + 1) * local.num_data_disks
+  name = (var.enable_data_striping ?
+    format("${var.instance_name}-data%05d-%02d", floor(count.index / local.num_data_disks) + 1, (count.index % local.num_data_disks) + 1) :
+  format("${var.instance_name}-data%05d", count.index + 1))
   type                   = local.final_data_disk_type
   zone                   = var.zone
   size                   = local.data_pd_size
   project                = var.project_id
   provisioned_iops       = local.final_data_iops
   provisioned_throughput = local.final_data_throughput
 }
-
 resource "google_compute_disk" "sap_hana_log_disks" {
-  count = var.use_single_shared_data_log_disk ? 0 : var.sap_hana_scaleout_nodes + 1
-  name  = format("${var.instance_name}-log%05d", count.index + 1)
+  count = var.use_single_shared_data_log_disk ? 0 : (var.sap_hana_scaleout_nodes + 1) * local.num_log_disks
+  name = (var.enable_log_striping ?
+    format("${var.instance_name}-log%05d-%02d", floor(count.index / local.num_log_disks) + 1, (count.index % local.num_log_disks) + 1) :
+  format("${var.instance_name}-log%05d", count.index + 1))
   type = local.final_log_disk_type
   zone = var.zone
   size = local.log_pd_size
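The naming expression maps the flat `count.index` onto (node, stripe) pairs with integer division and modulo. A Python sketch of that mapping (instance name and disk counts are illustrative):

```python
def disk_name(instance, prefix, index, striping, num_disks):
    """Sketch of the name expression: with striping the flat index is
    split into a 1-based node number and stripe number; without striping
    the index is used directly."""
    if striping:
        node = index // num_disks + 1
        stripe = index % num_disks + 1
        return f"{instance}-{prefix}{node:05d}-{stripe:02d}"
    return f"{instance}-{prefix}{index + 1:05d}"

# 1 primary + 1 scale-out worker with 2-way striping -> 4 data disks:
names = [disk_name("hana-vm", "data", i, True, 2) for i in range(4)]
print(names)
# -> ['hana-vm-data00001-01', 'hana-vm-data00001-02',
#     'hana-vm-data00002-01', 'hana-vm-data00002-02']
```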
@@ -405,19 +438,22 @@ resource "google_compute_instance" "sap_hana_primary_instance" {
       source      = google_compute_disk.sap_hana_unified_disks[0].self_link
     }
   }
-
   dynamic "attached_disk" {
-    for_each = var.use_single_shared_data_log_disk ? [] : [1]
+    for_each = var.use_single_shared_data_log_disk ? [] : [for i in range(local.num_data_disks) : {
+      final_disk_index = i
+    }]
     content {
-      device_name = google_compute_disk.sap_hana_data_disks[0].name
-      source      = google_compute_disk.sap_hana_data_disks[0].self_link
+      device_name = google_compute_disk.sap_hana_data_disks[attached_disk.value.final_disk_index].name
+      source      = google_compute_disk.sap_hana_data_disks[attached_disk.value.final_disk_index].self_link
     }
   }
   dynamic "attached_disk" {
-    for_each = var.use_single_shared_data_log_disk ? [] : [1]
+    for_each = var.use_single_shared_data_log_disk ? [] : [for i in range(local.num_log_disks) : {
+      final_disk_index = i
+    }]
     content {
-      device_name = google_compute_disk.sap_hana_log_disks[0].name
-      source      = google_compute_disk.sap_hana_log_disks[0].self_link
+      device_name = google_compute_disk.sap_hana_log_disks[attached_disk.value.final_disk_index].name
+      source      = google_compute_disk.sap_hana_log_disks[attached_disk.value.final_disk_index].self_link
     }
   }
   dynamic "attached_disk" {
@@ -501,6 +537,8 @@ resource "google_compute_instance" "sap_hana_primary_instance" {
     sap_hana_data_disk_type = local.final_data_disk_type
     enable_fast_restart     = var.enable_fast_restart
     native_bm               = local.native_bm
+    data_stripe_size        = var.data_stripe_size
+    log_stripe_size         = var.log_stripe_size
     template-type           = "TERRAFORM"
   }

@@ -539,19 +577,22 @@ resource "google_compute_instance" "sap_hana_worker_instances" {
       source      = google_compute_disk.sap_hana_unified_worker_disks[count.index].self_link
     }
   }
-
   dynamic "attached_disk" {
-    for_each = var.use_single_shared_data_log_disk ? [] : [1]
+    for_each = var.use_single_shared_data_log_disk ? [] : [for i in range(local.num_data_disks) : {
+      final_disk_index = i + (count.index + 1) * local.num_data_disks
+    }]
     content {
-      device_name = google_compute_disk.sap_hana_data_disks[count.index + 1].name
-      source      = google_compute_disk.sap_hana_data_disks[count.index + 1].self_link
+      device_name = google_compute_disk.sap_hana_data_disks[attached_disk.value.final_disk_index].name
+      source      = google_compute_disk.sap_hana_data_disks[attached_disk.value.final_disk_index].self_link
     }
   }
   dynamic "attached_disk" {
-    for_each = var.use_single_shared_data_log_disk ? [] : [1]
+    for_each = var.use_single_shared_data_log_disk ? [] : [for i in range(local.num_log_disks) : {
+      final_disk_index = i + (count.index + 1) * local.num_log_disks
+    }]
     content {
-      device_name = google_compute_disk.sap_hana_log_disks[count.index + 1].name
-      source      = google_compute_disk.sap_hana_log_disks[count.index + 1].self_link
+      device_name = google_compute_disk.sap_hana_log_disks[attached_disk.value.final_disk_index].name
+      source      = google_compute_disk.sap_hana_log_disks[attached_disk.value.final_disk_index].self_link
     }
   }
   dynamic "attached_disk" {
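The worker `for_each` offsets into the flat disk list: the primary owns the first `num_disks` entries, and worker N (0-based `count.index`) takes the next slice. A Python sketch of the index arithmetic:

```python
def worker_disk_indices(worker_index, num_disks):
    """Sketch of the worker for_each expression: worker N attaches disks
    i + (N + 1) * num_disks for i in 0..num_disks-1, skipping past the
    primary's slice of the flat disk list."""
    return [i + (worker_index + 1) * num_disks for i in range(num_disks)]

# With 2-way striping: primary owns disks 0-1, workers take the next slices.
print(worker_disk_indices(0, 2))  # -> [2, 3]
print(worker_disk_indices(1, 2))  # -> [4, 5]
```

With striping disabled (`num_disks = 1`) this degenerates to the pre-change behaviour of `count.index + 1`.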
@@ -619,6 +660,8 @@ resource "google_compute_instance" "sap_hana_worker_instances" {
     sap_hana_shared_disk = false
     enable_fast_restart  = var.enable_fast_restart
     native_bm            = local.native_bm
+    data_stripe_size     = var.data_stripe_size
+    log_stripe_size      = var.log_stripe_size
     template-type        = "TERRAFORM"
   }

modules/sap_hana/sap_hana.tf

Lines changed: 1 addition & 0 deletions
@@ -76,4 +76,5 @@ module "sap_hana" {
   # include_backup_disk = true_or_false # default is true
   # backup_disk_type = "DISK_TYPE" # default is pd-ssd, except for machines that do not support PD, in which case the default is hyperdisk-extreme. Valid types are "pd-ssd", "pd-balanced", "pd-standard", "pd-extreme", "hyperdisk-balanced", "hyperdisk-extreme".
   # enable_fast_restart = true_or_false # default is true, whether to enable HANA Fast Restart
+  # enable_data_striping = true_or_false # default is false. Enable LVM striping of data volume across multiple disks. Data striping is only intended for cases where the machine level limits are higher than the hyperdisk disk level limits. Refer to https://cloud.google.com/compute/docs/disks/hyperdisks#hd-performance-limits
 }

modules/sap_hana/variables.tf

Lines changed: 36 additions & 0 deletions
@@ -338,6 +338,12 @@ variable "hyperdisk_balanced_throughput_default" {
   default     = 750
 }

+variable "enable_data_striping" {
+  type        = bool
+  description = "Optional - default is false. Enable LVM striping of data volume across multiple disks."
+  default     = false
+}
+
 #
 # DO NOT MODIFY unless instructed or aware of the implications of using those settings
 #
@@ -497,3 +503,33 @@ variable "can_ip_forward" {
   description = "Whether sending and receiving of packets with non-matching source or destination IPs is allowed."
   default     = true
 }
+
+variable "enable_log_striping" {
+  type        = bool
+  description = "Optional - default is false. Enable LVM striping of log volume across multiple disks."
+  default     = false
+}
+
+variable "number_data_disks" {
+  type        = number
+  description = "Optional - default is 2. Number of disks to use for data volume striping (if enable_data_striping = true)."
+  default     = 2
+}
+
+variable "number_log_disks" {
+  type        = number
+  description = "Optional - default is 2. Number of disks to use for log volume striping (if enable_log_striping = true)."
+  default     = 2
+}
+
+variable "data_stripe_size" {
+  type        = string
+  description = "Optional - default is 256k. Stripe size for data volume striping (if enable_data_striping = true)."
+  default     = "256k"
+}
+
+variable "log_stripe_size" {
+  type        = string
+  description = "Optional - default is 64k. Stripe size for log volume striping (if enable_log_striping = true)."
+  default     = "64k"
+}
