
Issue with aws_s3_bucket_replication_configuration producing inconsistent final plan #23487

Closed
bfox1793 opened this issue Mar 3, 2022 · 16 comments · Fixed by #23586 or #23703
Labels
bug (Addresses a defect in current functionality.) · service/iam (Issues and PRs that pertain to the iam service.) · service/s3 (Issues and PRs that pertain to the s3 service.)

Comments

@bfox1793

bfox1793 commented Mar 3, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v1.1.6
on darwin_arm64

Affected Resource(s)

  • aws_s3_bucket_replication_configuration

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp
locals {
  source_bucket_name      = "unique-source-bucket-name-here"
  destination_bucket_name = "unique-destination-bucket-name-here"

}

resource "aws_iam_role" "s3_bucket_replication" {
  name               = "source-role"
  path               = "/service-role/"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_policy" "s3_bucket_replication" {
  name = "source-policy"
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = [
          "s3:ListBucket",
          "s3:GetReplicationConfiguration"
        ]
        Effect = "Allow"
        Resource = [
          "${aws_s3_bucket.source_bucket.arn}"
        ]
      },
      {
        Action = [
          "s3:GetObjectVersionForReplication",
          "s3:GetObjectVersionAcl",
          "s3:GetObjectVersionTagging",
        ]
        Effect = "Allow"
        Resource = [
          "${aws_s3_bucket.source_bucket.arn}/*"
        ]
      },
      {
        Action = [
          "s3:ReplicateObject",
          "s3:ReplicateDelete",
          "s3:ReplicateTags",
          "s3:ObjectOwnerOverrideToBucketOwner"
        ]
        Effect = "Allow"
        Resource = [
          "${aws_s3_bucket.destination.arn}/*"
        ]
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "s3_bucket_replication" {
  role       = aws_iam_role.s3_bucket_replication.name
  policy_arn = aws_iam_policy.s3_bucket_replication.arn
}

resource "aws_s3_bucket" "source_bucket" {
  bucket        = local.source_bucket_name
  force_destroy = false
}

resource "aws_s3_bucket_server_side_encryption_configuration" "source_bucket" {
  bucket = aws_s3_bucket.source_bucket.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_versioning" "source_bucket" {
  bucket = aws_s3_bucket.source_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_acl" "example_bucket_acl" {
  bucket = aws_s3_bucket.source_bucket.id
  acl    = "private"
}

resource "aws_s3_bucket_public_access_block" "public_block" {
  bucket                  = aws_s3_bucket.source_bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

## Destination S3 bucket
resource "aws_s3_bucket" "destination" {
  bucket = local.destination_bucket_name
}

resource "aws_s3_bucket_versioning" "destination" {
  bucket = aws_s3_bucket.destination.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "destination" {
  bucket = aws_s3_bucket.destination.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

## End Destination Bucket

resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3_bucket_replication.arn
  bucket     = aws_s3_bucket.source_bucket.id

  rule {
    id     = "replication-rule"
    status = "Enabled"

    filter {
      prefix = ""
    }

    delete_marker_replication {
      status = "Disabled"
    }

    destination {
      bucket = aws_s3_bucket.destination.arn
    }
  }
}

Debug Output

Provided below; I can add additional details if necessary.

Panic Output

N/A

Expected Behavior

Subsequent runs of terraform plan should be a no-op, since none of the replication configuration has changed.

Actual Behavior

Terraform detects a diff in the replication configuration. If you apply, it spits out the following error:

╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for aws_s3_bucket_replication_configuration.replication to include new values learned
│ so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an invalid new value for .rule:
│ planned set element
│ cty.ObjectVal(map[string]cty.Value{"delete_marker_replication":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"status":cty.StringVal("Disabled")})}),
│ "destination":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"access_control_translation":cty.ListValEmpty(cty.Object(map[string]cty.Type{"owner":cty.String})),
│ "account":cty.StringVal(""), "bucket":cty.StringVal("arn:aws:s3:::tb-test-destination-bucket"),
│ "encryption_configuration":cty.ListValEmpty(cty.Object(map[string]cty.Type{"replica_kms_key_id":cty.String})),
│ "metrics":cty.ListValEmpty(cty.Object(map[string]cty.Type{"event_threshold":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number})),
│ "status":cty.String})),
│ "replication_time":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String,
│ "time":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number}))})),
│ "storage_class":cty.StringVal("")})}),
│ "existing_object_replication":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "filter":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"and":cty.ListValEmpty(cty.Object(map[string]cty.Type{"prefix":cty.String,
│ "tags":cty.Map(cty.String)})), "prefix":cty.StringVal(""),
│ "tag":cty.ListValEmpty(cty.Object(map[string]cty.Type{"key":cty.String, "value":cty.String}))})}),
│ "id":cty.StringVal("replication-rule"), "prefix":cty.StringVal(""), "priority":cty.NullVal(cty.Number),
│ "source_selection_criteria":cty.ListValEmpty(cty.Object(map[string]cty.Type{"replica_modifications":cty.List(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "sse_kms_encrypted_objects":cty.List(cty.Object(map[string]cty.Type{"status":cty.String}))})),
│ "status":cty.StringVal("Enabled")}) does not correlate with any element in actual.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for aws_s3_bucket_replication_configuration.replication to include new values learned
│ so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an invalid new value for .rule:
│ planned set element
│ cty.ObjectVal(map[string]cty.Value{"delete_marker_replication":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "destination":cty.ListValEmpty(cty.Object(map[string]cty.Type{"access_control_translation":cty.List(cty.Object(map[string]cty.Type{"owner":cty.String})),
│ "account":cty.String, "bucket":cty.String,
│ "encryption_configuration":cty.List(cty.Object(map[string]cty.Type{"replica_kms_key_id":cty.String})),
│ "metrics":cty.List(cty.Object(map[string]cty.Type{"event_threshold":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number})),
│ "status":cty.String})), "replication_time":cty.List(cty.Object(map[string]cty.Type{"status":cty.String,
│ "time":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number}))})), "storage_class":cty.String})),
│ "existing_object_replication":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "filter":cty.ListValEmpty(cty.Object(map[string]cty.Type{"and":cty.List(cty.Object(map[string]cty.Type{"prefix":cty.String,
│ "tags":cty.Map(cty.String)})), "prefix":cty.String,
│ "tag":cty.List(cty.Object(map[string]cty.Type{"key":cty.String, "value":cty.String}))})),
│ "id":cty.NullVal(cty.String), "prefix":cty.NullVal(cty.String), "priority":cty.NullVal(cty.Number),
│ "source_selection_criteria":cty.ListValEmpty(cty.Object(map[string]cty.Type{"replica_modifications":cty.List(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "sse_kms_encrypted_objects":cty.List(cty.Object(map[string]cty.Type{"status":cty.String}))})),
│ "status":cty.NullVal(cty.String)}) does not correlate with any element in actual.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Steps to Reproduce

  1. Create new project with basic replication configuration as given above
  2. terraform apply
  3. Run terraform apply again. Note the detected diff although no configurations have changed
  4. Apply the changes, note the provider output

Important Factoids

The workaround is to delete the replication rule before every terraform apply, though this is cumbersome and defeats the purpose of using Terraform for state management.

References

N/A

@github-actions github-actions bot added the needs-triage, bug, service/iam, and service/s3 labels Mar 3, 2022
@Linutux42

I am facing the same issue, and I think I have a few more details to add.
This issue occurs when there is only one rule block in the aws_s3_bucket_replication_configuration resource.
I do not have an idempotency issue when I have two rule blocks in a single aws_s3_bucket_replication_configuration.

@justinretzolk justinretzolk removed the needs-triage label Mar 4, 2022
@bfox1793
Author

bfox1793 commented Mar 7, 2022

> (quoting @Linutux42's comment above)

That matches what we observed as well. We have multiple other S3 replication rules set on S3 buckets, though the others all have at least two separate rules. This is the first time we've set up a bucket with a single rule.

@anGie44
Contributor

anGie44 commented Mar 7, 2022

Hi @bfox1793 , thank you for raising this issue. So the diff I'm seeing when using the provided configuration above is the following. Is this the case on your end as well?


Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_s3_bucket_replication_configuration.replication will be updated in-place
  ~ resource "aws_s3_bucket_replication_configuration" "replication" {
        id     = "unique-source-bucket-name-here"
        # (2 unchanged attributes hidden)

      + rule {
          + id     = "replication-rule"
          + status = "Enabled"

          + delete_marker_replication {
              + status = "Disabled"
            }

          + destination {
              + bucket = "arn:aws:s3:::unique-destination-bucket-name-here"
            }

          + filter {
            }
        }
      - rule {
          - id       = "replication-rule" -> null
          - priority = 0 -> null
          - status   = "Enabled" -> null

          - delete_marker_replication {
              - status = "Disabled" -> null
            }

          - destination {
              - bucket = "arn:aws:s3:::unique-destination-bucket-name-here" -> null
            }

          - filter {
            }
        }
      + rule {
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

My initial guess is that the empty rule block Terraform seems to be incorrectly detecting is related to the inconsistent plan. I'll look into this in more detail.

@bfox1793
Author

bfox1793 commented Mar 7, 2022

> (quoting @anGie44's comment above, including the plan diff)

Yup, that's the case for me as well. For what it's worth, the issue appears similarly on both 3.x and 4.x of the provider.

@anGie44
Contributor

anGie44 commented Mar 8, 2022

Hmm, this is actually somewhat reminiscent of hashicorp/terraform-plugin-sdk#588, but in this case we have various TypeLists nested within the root TypeSet (i.e., the rule configuration block(s)), and no block is actually being removed. I've tested it out, and making rule a TypeList instead of a TypeSet fixes this odd behavior, but it could incur a breaking change.

@anGie44
Contributor

anGie44 commented Mar 8, 2022

@bfox1793 I didn't notice this before, but it looks like if you use filter {} instead of filter { prefix = "" }, the plan will no longer show changes that require a subsequent apply. I would update the configuration in the meantime to avoid the apply-time error, e.g.

  1. Terraform apply (same config as provided in the issue description)
Terraform will perform the following actions:

  # aws_s3_bucket_replication_configuration.replication will be created
  + resource "aws_s3_bucket_replication_configuration" "replication" {
      + bucket = "unique-source-bucket-name-here"
      + id     = (known after apply)
      + role   = "arn:aws:iam::xxxxxxxxxxx:role/service-role/source-role"

      + rule {
          + id     = "replication-rule"
          + status = "Enabled"

          + delete_marker_replication {
              + status = "Disabled"
            }

          + destination {
              + bucket = "arn:aws:s3:::unique-destination-bucket-name-here"
            }

          + filter {
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
aws_s3_bucket_replication_configuration.replication: Creating...
aws_s3_bucket_replication_configuration.replication: Creation complete after 1s [id=unique-source-bucket-name-here]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
  2. Terraform plan
...

  # aws_s3_bucket_replication_configuration.replication has changed
  ~ resource "aws_s3_bucket_replication_configuration" "replication" {
        id     = "unique-source-bucket-name-here"
        # (2 unchanged attributes hidden)

      - rule {
          - id     = "replication-rule" -> null
          - status = "Enabled" -> null

          - delete_marker_replication {
              - status = "Disabled" -> null
            }

          - destination {
              - bucket = "arn:aws:s3:::unique-destination-bucket-name-here" -> null
            }

          - filter {
            }
        }
      + rule {
          + id       = "replication-rule"
          + priority = 0
          + status   = "Enabled"

          + delete_marker_replication {
              + status = "Disabled"
            }

          + destination {
              + bucket = "arn:aws:s3:::unique-destination-bucket-name-here"
            }

          + filter {
            }
        }
    }


Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or
respond to these changes.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

No changes. Your infrastructure matches the configuration.

Your configuration already matches the changes detected above. If you'd like to update the Terraform state to match, create and apply a refresh-only plan:
  terraform apply -refresh-only
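
In other words, the interim workaround is to drop the explicit empty prefix from the rule's filter block. A sketch of the adjusted resource, based on the reproduction config in the issue description:

```hcl
resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3_bucket_replication.arn
  bucket     = aws_s3_bucket.source_bucket.id

  rule {
    id     = "replication-rule"
    status = "Enabled"

    # Workaround: an empty filter block still applies the rule to the whole
    # bucket, but avoids the perpetual diff triggered by `prefix = ""`.
    filter {}

    delete_marker_replication {
      status = "Disabled"
    }

    destination {
      bucket = aws_s3_bucket.destination.arn
    }
  }
}
```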

@bfox1793
Author

bfox1793 commented Mar 8, 2022

> (quoting @anGie44's workaround comment above)

Looks like that worked for me as well, @anGie44! Still a strange bug, but at least this prevents us from needing to delete/recreate the replication rule whenever we modify something in our project (we've isolated it for now to avoid this :) )

@anGie44
Contributor

anGie44 commented Mar 8, 2022

Awesome, @bfox1793! Though, yep, not an ideal long-term solution 😅.

@anGie44
Contributor

anGie44 commented Mar 9, 2022

Hi @bfox1793, thanks again for your input. The linked PR should address this odd behavior: because rule.filter.prefix is declared as an empty string in Terraform, the provider does not send it in the PutBucketReplication API request, and the diff gets miscalculated.

@chrobotm

chrobotm commented Mar 14, 2022

I had to use the workaround even with provider 4.5

@anGie44
Contributor

anGie44 commented Mar 14, 2022

Hi @mbotmcc 👋, apologies that you are still running into this bug. Do you mind providing configuration details and any debug logs available to you so we can investigate further?

@chrobotm

TF:

resource "aws_s3_bucket_replication_configuration" "..." {
  depends_on = [aws_s3_bucket_versioning....]

  role   = aws_iam_role.s3_replication.arn
  bucket = aws_s3_bucket.....id

  rule {
    id     = "main"
    status = "Enabled"

    delete_marker_replication {
      status = "Disabled"
    }

    filter {
       prefix = ""
    }

    destination {
      bucket        = aws_s3_bucket.....arn
      storage_class = "STANDARD"

      encryption_configuration {
        replica_kms_key_id = data.aws_kms_alias.....target_key_arn
      }
    }

    source_selection_criteria {
      sse_kms_encrypted_objects {
        status = "Enabled"
      }
    }
  }
}

Error:


Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for
│ module.....aws_s3_bucket_replication_configuration.logs to include
│ new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
│ .rule: planned set element
│ cty.ObjectVal(map[string]cty.Value{"delete_marker_replication":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"status":cty.StringVal("Disabled")})}),
│ "destination":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"access_control_translation":cty.ListValEmpty(cty.Object(map[string]cty.Type{"owner":cty.String})),
│ "account":cty.StringVal(""),
│ "bucket":cty.StringVal("arn:aws:s3:::..."),
│ "encryption_configuration":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"replica_kms_key_id":cty.StringVal("...")})}),
│ "metrics":cty.ListValEmpty(cty.Object(map[string]cty.Type{"event_threshold":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number})),
│ "status":cty.String})),
│ "replication_time":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String,
│ "time":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number}))})),
│ "storage_class":cty.StringVal("STANDARD")})}),
│ "existing_object_replication":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "filter":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"and":cty.ListValEmpty(cty.Object(map[string]cty.Type{"prefix":cty.String,
│ "tags":cty.Map(cty.String)})), "prefix":cty.StringVal(""),
│ "tag":cty.ListValEmpty(cty.Object(map[string]cty.Type{"key":cty.String,
│ "value":cty.String}))})}), "id":cty.StringVal("main"),
│ "prefix":cty.StringVal(""), "priority":cty.NullVal(cty.Number),
│ "source_selection_criteria":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"replica_modifications":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "sse_kms_encrypted_objects":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"status":cty.StringVal("Enabled")})})})}),
│ "status":cty.StringVal("Enabled")}) does not correlate with any element in
│ actual.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for
│ module.....aws_s3_bucket_replication_configuration.logs to include
│ new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
│ .rule: planned set element
│ cty.ObjectVal(map[string]cty.Value{"delete_marker_replication":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "destination":cty.ListValEmpty(cty.Object(map[string]cty.Type{"access_control_translation":cty.List(cty.Object(map[string]cty.Type{"owner":cty.String})),
│ "account":cty.String, "bucket":cty.String,
│ "encryption_configuration":cty.List(cty.Object(map[string]cty.Type{"replica_kms_key_id":cty.String})),
│ "metrics":cty.List(cty.Object(map[string]cty.Type{"event_threshold":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number})),
│ "status":cty.String})),
│ "replication_time":cty.List(cty.Object(map[string]cty.Type{"status":cty.String,
│ "time":cty.List(cty.Object(map[string]cty.Type{"minutes":cty.Number}))})),
│ "storage_class":cty.String})),
│ "existing_object_replication":cty.ListValEmpty(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "filter":cty.ListValEmpty(cty.Object(map[string]cty.Type{"and":cty.List(cty.Object(map[string]cty.Type{"prefix":cty.String,
│ "tags":cty.Map(cty.String)})), "prefix":cty.String,
│ "tag":cty.List(cty.Object(map[string]cty.Type{"key":cty.String,
│ "value":cty.String}))})), "id":cty.NullVal(cty.String),
│ "prefix":cty.NullVal(cty.String), "priority":cty.NullVal(cty.Number),
│ "source_selection_criteria":cty.ListValEmpty(cty.Object(map[string]cty.Type{"replica_modifications":cty.List(cty.Object(map[string]cty.Type{"status":cty.String})),
│ "sse_kms_encrypted_objects":cty.List(cty.Object(map[string]cty.Type{"status":cty.String}))})),
│ "status":cty.NullVal(cty.String)}) does not correlate with any element in
│ actual.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
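
Worth noting: this configuration still sets filter { prefix = "" }. A sketch of the same rule using the empty-filter workaround suggested earlier in the thread (elided identifiers kept exactly as in the comment above):

```hcl
  rule {
    id     = "main"
    status = "Enabled"

    delete_marker_replication {
      status = "Disabled"
    }

    # Workaround: empty filter block instead of `prefix = ""`
    filter {}

    destination {
      bucket        = aws_s3_bucket.....arn
      storage_class = "STANDARD"

      encryption_configuration {
        replica_kms_key_id = data.aws_kms_alias.....target_key_arn
      }
    }

    source_selection_criteria {
      sse_kms_encrypted_objects {
        status = "Enabled"
      }
    }
  }
```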

@anGie44
Contributor

anGie44 commented Mar 14, 2022

Hi @mbotmcc, thank you for providing your config. Can you provide the plan-time diff you are seeing before running the apply that throws the error above? I haven't been able to reproduce locally with the same configuration in us-west-2.

@chrobotm

Not sure if this makes a difference, but we're applying this to an existing S3 bucket that was previously created with provider v3, and this is running in the eu-west-2 region:

~ resource "aws_s3_bucket_replication_configuration" "..." {
        id     = "..."
        # (2 unchanged attributes hidden)
      + rule {
          + id     = "main"
          + status = "Enabled"
          + delete_marker_replication {
              + status = "Disabled"
            }
          + destination {
              + bucket        = "arn:aws:s3:::..."
              + storage_class = "STANDARD"
              + encryption_configuration {
                  + replica_kms_key_id = "..."
                }
            }
          + filter {
            }
          + source_selection_criteria {
              + sse_kms_encrypted_objects {
                  + status = "Enabled"
                }
            }
        }
      - rule {
          - id       = "main" -> null
          - priority = 0 -> null
          - status   = "Enabled" -> null
          - delete_marker_replication {
              - status = "Disabled" -> null
            }
          - destination {
              - bucket        = "arn:aws:s3:::..." -> null
              - storage_class = "STANDARD" -> null
              - encryption_configuration {
                  - replica_kms_key_id = "..." -> null
                }
            }
          - filter {
            }
          - source_selection_criteria {
              - sse_kms_encrypted_objects {
                  - status = "Enabled" -> null
                }
            }
        }
      + rule {
        }
    }

@anGie44
Contributor

anGie44 commented Mar 15, 2022

Ahh, I see. Yep, I can reproduce the diff you are seeing after upgrading from 4.4 -> 4.5; I didn't try that right away 😅. It wasn't readily apparent when creating the new resources on v4.5 alone.

@github-actions

github-actions bot commented May 9, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 9, 2022