
"update" timeout override not respected for adding replication to dynamodb table #24473

Closed
cobbr2 opened this issue Apr 29, 2022 · 3 comments · Fixed by #25659
Labels
bug Addresses a defect in current functionality. service/dynamodb Issues and PRs that pertain to the dynamodb service. timeouts Pertains to timeout increases.
Milestone
v4.22.0

Comments


cobbr2 commented Apr 29, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v1.1.9
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.11.0

Change to be applied

  # module.eligible_persons.aws_dynamodb_table.this[0] will be updated in-place
  ~ resource "aws_dynamodb_table" "this" {
        id               = "eligible.persons"
        name             = "eligible.persons"
      ~ stream_enabled   = false -> true
      + stream_view_type = "NEW_AND_OLD_IMAGES"
      ~ tags             = {
          + "Name" = "eligible.persons"
        }
      ~ tags_all         = {
          + "Management"    = "terraform"
          + "Name"          = "eligible.persons"
          + "SecurityLevel" = "RED"
          + "Service"       = "stone"
        }
        # (6 unchanged attributes hidden)

      + replica {
          + kms_key_arn = (known after apply)
          + region_name = "us-west-2"
        }

      ~ timeouts {
          + create = "10m"
          + delete = "10m"
          + update = "60m"
        }

        # (6 unchanged blocks hidden)
    }

Affected Resource(s)

  • aws_dynamodb_table

Expected Behavior

The operation should have waited up to 60m for the update, per the configured timeouts block.

Actual Behavior

The operation timed out after 30m:

│ Error: error updating DynamoDB Table (eligible.persons) replica: error updating DynamoDB replicas for table (eligible.persons), while creating: error waiting for DynamoDB Table (eligible.persons) replica (us-west-2) creation: timeout while waiting for state to become 'ACTIVE' (last state: 'CREATING', timeout: 30m0s)
│
│   with module.eligible_persons.aws_dynamodb_table.this[0],
│   on .terraform/modules/eligible_persons/main.tf line 1, in resource "aws_dynamodb_table" "this":
│    1: resource "aws_dynamodb_table" "this" {
│
╵

Steps to Reproduce

  1. terraform apply

Important Factoids

Notice that the overrides are 10m, 10m, and 60m (all set by the terraform-aws-dynamodb-table module). None of them is 30m, which is the timeout the provider apparently used.
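
For context, the failure mode matches the replica waiter using a hardcoded 30m timeout instead of the resource's configured update timeout. The Go sketch below is illustrative only, not the provider's actual code: `waitReplicaActive`, `statusReplica`, and `replicaTimeout` are hypothetical names, while `resource.StateChangeConf` and the aws-sdk-go status constants are real APIs.

    // Illustrative sketch of the buggy shape (not the provider's actual code):
    // the replica waiter's timeout is a fixed constant, so a
    // `timeouts { update = "60m" }` override never reaches it.
    package dynamodb

    import (
        "time"

        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
    )

    const replicaTimeout = 30 * time.Minute // hardcoded; matches the "timeout: 30m0s" in the error

    // waitReplicaActive blocks until the replica reports ACTIVE or the
    // hardcoded timeout elapses.
    func waitReplicaActive(conn *dynamodb.DynamoDB, tableName, region string) error {
        stateConf := &resource.StateChangeConf{
            Pending: []string{dynamodb.ReplicaStatusCreating},
            Target:  []string{dynamodb.ReplicaStatusActive},
            Refresh: statusReplica(conn, tableName, region),
            Timeout: replicaTimeout, // should instead come from d.Timeout(schema.TimeoutUpdate)
        }
        _, err := stateConf.WaitForState()
        return err
    }

    // statusReplica is a hypothetical refresh function polling the replica's status.
    func statusReplica(conn *dynamodb.DynamoDB, tableName, region string) resource.StateRefreshFunc {
        return func() (interface{}, string, error) {
            out, err := conn.DescribeTable(&dynamodb.DescribeTableInput{TableName: &tableName})
            if err != nil {
                return nil, "", err
            }
            for _, r := range out.Table.Replicas {
                if r.RegionName != nil && *r.RegionName == region {
                    return r, *r.ReplicaStatus, nil
                }
            }
            return nil, "", nil // replica not visible yet; keep waiting
        }
    }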

@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/dynamodb Issues and PRs that pertain to the dynamodb service. labels Apr 29, 2022
@justinretzolk justinretzolk added bug Addresses a defect in current functionality. timeouts Pertains to timeout increases. and removed needs-triage Waiting for first response or review from a maintainer. labels May 2, 2022
@github-actions github-actions bot added this to the v4.22.0 milestone Jul 6, 2022
@YakDriver
Member

NOTE to future self and travelers: #25659 could potentially introduce a regression relative to this issue.

If practitioners were relying on the erroneously short timeouts, we may need to adjust so that custom timeouts are not always respected:

Previously, if an update had 5 parts and each used a hardcoded, unchangeable, short timeout (30 s to 2 min), you could never reach the overall custom update timeout (default 1 hr).

Now, #25659 uses the custom (or default) update timeout for each part within the overall update, with the overall update limited to the same timeout; this is a relatively common pattern in the provider. In other words, for the 5 parts, each can take up to 1 hr, but all 5 together must complete in 1 hr. The benefits of this approach are that you don't need to figure out a timeout for each part individually (e.g., part 1 can take 20 min, part 2 5 min, etc.) and that the overall update timeout is respected. The downside is that a troublesome part you would rather have fail quickly can now run for up to the full timeout.
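
A minimal sketch of that pattern, with illustrative names (`updateReplicas` and `replicaRegions` are hypothetical; `statusReplica` is the hypothetical refresh function from the sketch above, so the two sketches belong to the same package; `d.Timeout` and `schema.TimeoutUpdate` are the real plugin-SDK API):

    // Sketch of the shared-timeout pattern described above (illustrative names,
    // not the actual #25659 diff): every part of the update waits with the
    // user's configured update timeout, and the overall update is bounded by
    // the same deadline.
    package dynamodb

    import (
        "context"
        "fmt"

        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
        "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    )

    func updateReplicas(ctx context.Context, d *schema.ResourceData, conn *dynamodb.DynamoDB) error {
        timeout := d.Timeout(schema.TimeoutUpdate) // custom value if set, otherwise the default

        // Bound the whole loop by the same update timeout.
        ctx, cancel := context.WithTimeout(ctx, timeout)
        defer cancel()

        for _, region := range replicaRegions(d) {
            stateConf := &resource.StateChangeConf{
                Pending: []string{dynamodb.ReplicaStatusCreating},
                Target:  []string{dynamodb.ReplicaStatusActive},
                Refresh: statusReplica(conn, d.Id(), region), // hypothetical refresh func from the sketch above
                Timeout: timeout,                             // each part may use up to the full update timeout
            }
            if _, err := stateConf.WaitForStateContext(ctx); err != nil {
                return fmt.Errorf("waiting for replica (%s) to become ACTIVE: %w", region, err)
            }
        }
        return nil
    }

    // replicaRegions is a hypothetical helper reading the configured replica blocks.
    func replicaRegions(d *schema.ResourceData) []string {
        var regions []string
        for _, v := range d.Get("replica").(*schema.Set).List() {
            regions = append(regions, v.(map[string]interface{})["region_name"].(string))
        }
        return regions
    }

With this shape, the `update = "60m"` override from the plan above bounds each replica wait as well as the overall update.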


github-actions bot commented Jul 8, 2022

This functionality has been released in v4.22.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!


github-actions bot commented Aug 7, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 7, 2022