
aws_s3_bucket_website_configuration is temporarily removed during subsequent apply commands #24883

Closed
ckeyes88 opened this issue May 19, 2022 · 10 comments
Labels
service/s3 Issues and PRs that pertain to the s3 service.

Comments

@ckeyes88

Hi there, I'm not sure if this is a bug or my own lack of understanding around the s3 bucket and s3 bucket website configuration resources. I recently set up a terraform deployment to create a static site with an s3 bucket, s3 bucket website configuration, cloudfront distribution, and some dns records.

Everything deploys and appears to work fine initially but I noticed that on subsequent plan and apply runs terraform gives the following notice even though I haven't changed anything outside of terraform:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply" which may have affected this plan:

Once the apply completes, the website configuration is removed.

Terraform Version

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

terraform {
  backend "s3" {}
}

provider "aws" {
  region = var.region

  // Set up tags that all resources will be tagged with
  default_tags {
    tags = {
      Domain      = var.domain
      Environment = terraform.workspace
      ManagedBy   = "Terraform"
    }
  }
}

Terraform Configuration Files

resource "aws_s3_bucket" "www_bucket" {
  bucket = "www.mybucket.com"
}

resource "aws_s3_bucket_acl" "www_bucket_acl" {
  bucket = aws_s3_bucket.www_bucket.id
  acl    = "public-read"
}

resource "aws_s3_bucket_website_configuration" "www_bucket" {
  bucket = aws_s3_bucket.www_bucket.bucket

  index_document {
    suffix = "index.html"
  }
}

Debug Output

Expected Behavior

I would expect Terraform to detect that a website configuration already exists and not replace it, since replacing it causes a temporary 404 error while the endpoint is being reconfigured.

Actual Behavior

Terraform appears not to detect that the resource already exists and thinks it needs to create a new one, causing a temporary outage on the website during the apply.

Steps to Reproduce

terraform init
terraform apply

... Once it's done deploying, don't change anything

terraform apply
@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/s3 Issues and PRs that pertain to the s3 service. labels May 19, 2022
@ckeyes88
Author

Cross Linked: hashicorp/terraform#31084

@justinretzolk
Member

Hey @ckeyes88 👋 Thank you for taking the time to raise this! So that we have all of the necessary information in order to look into this, can you supply debug logs (redacted as needed) as well?

@justinretzolk justinretzolk added waiting-response Maintainers are waiting on response from community or contributor. and removed needs-triage Waiting for first response or review from a maintainer. labels May 19, 2022
@ckeyes88
Author

Hey @justinretzolk Thanks for the response. After a little more testing, I've narrowed down what I'm seeing to an issue where every other terraform apply command removes the aws_s3_bucket_website_configuration resource.

I've provided both the debug logs and the std output for four runs (redacted). What happened was I ran an automated deployment from my GitHub Action (output not included), then noticed that both s3 buckets showed that static website hosting was NOT enabled. I then re-triggered the build and afterwards everything was back to normal.

To test out my hypothesis I then ran terraform apply without changing anything in the code from my local machine 4 times and captured the output and std out from each run. Here's what I found

  • Run 1: Static website hosting disabled
  • Run 2: Static website hosting was enabled with the correct settings
  • Run 3: Static website hosting disabled
  • Run 4: Static website hosting was enabled with the correct settings

It seems that one run removes the website configuration, and then the next run detects that it's not configured and re-creates it.

Let me know if you need anything else from my end.

terraform_output.zip

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label May 20, 2022
@ckeyes88
Author

@justinretzolk my guess after looking at this more without knowing too much about the internals is that the provider is still using the deprecated website attribute on the aws_s3_bucket resource when it runs which marks it as being removed and unsets that setting. Then the next time around it recognizes that the aws_s3_bucket_website_configuration is no longer present and re-creates it.

@justinretzolk
Member

Hey @ckeyes88 -- Thank you for that additional information. With that in mind, and after I noticed you're using AWS Provider version 3.x, I believe I know what's happening here.

For the aws_s3_bucket_* resources in the 3.x series, a lifecycle.ignore_changes block is needed on the S3 bucket resource in order to stop the perpetual diff that's causing the website configuration to be removed. This is called out in the provider documentation for 3.75.1:

This resource implements the same features that are provided by the website object of the aws_s3_bucket resource. To avoid conflicts or unexpected apply results, a lifecycle configuration is needed on the aws_s3_bucket to ignore changes to the internal website object. Failure to add the lifecycle configuration to the aws_s3_bucket will result in conflicting state results.

# within the aws_s3_bucket resource
lifecycle {
  ignore_changes = [
    website
  ]
}
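Applied to the configuration shown earlier in this issue, the bucket resource with the lifecycle block would look roughly like this (a sketch reusing the same resource names; the `website` attribute only exists on the 3.x provider):

```hcl
resource "aws_s3_bucket" "www_bucket" {
  bucket = "www.mybucket.com"

  # Stop the 3.x provider's deprecated `website` attribute from
  # fighting with aws_s3_bucket_website_configuration on each apply.
  lifecycle {
    ignore_changes = [
      website
    ]
  }
}
```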

@justinretzolk justinretzolk added the waiting-response Maintainers are waiting on response from community or contributor. label May 20, 2022
@ckeyes88
Author

Ah, thanks @justinretzolk, I must have missed that. I was hoping there was something like that to correct this. Just to be clear, is 3.x the correct stable version to be using? Is this a bug that will be fixed in the future, or is this the expected behavior? Just trying to make sure I understand properly.

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label May 20, 2022
@justinretzolk
Member

Hey @ckeyes88 Glad that information helped! Once you've upgraded to the 4.x series (the current release is 4.15.0), you'll no longer need to specify the lifecycle configuration block. Note that there are some caveats to migrating from 3.x to 4.x, but given that you're already using the distinct aws_s3_bucket_* resources, which were the biggest change, it should be less of a chore to migrate.
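For reference, moving to the 4.x series is just a version-constraint change in the required_providers block shown earlier (a sketch; pin the exact version as appropriate for your upgrade process):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # was "~> 3.27"; 4.x removes the need for the lifecycle workaround
    }
  }
}
```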

@justinretzolk justinretzolk added the waiting-response Maintainers are waiting on response from community or contributor. label May 20, 2022
@ckeyes88
Author

Good to know @justinretzolk. Thanks for the heads up. For some reason I thought I was already on the latest version. Glad this was so easy to sort out and really appreciate your quick response and all the work you all do over there to make my dev/ops life easier haha.

Have a great rest of your day!

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label May 20, 2022
@justinretzolk
Member

@ckeyes88 I'm glad that got you sorted out, and very much appreciate the kind words! I hope you have a great rest of your day as well!

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 25, 2022