
Discussion around changes made to the aws_s3_bucket resource in AWS Provider version 4.0 #23106

Closed
joe-a-t opened this issue Feb 10, 2022 · 93 comments · Fixed by #23985
Labels
enhancement Requests to existing resources that expand the functionality or scope. service/s3 Issues and PRs that pertain to the s3 service.

Comments

@joe-a-t

joe-a-t commented Feb 10, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Update from HashiCorp, 2022-02-25

Hi everyone,

In light of how much traction this issue has gained, and the amount of information contained within, @joe-a-t has graciously allowed us to make edits to the description of this GitHub Issue to better reflect its current state, and so that we may pin it to make it more visible. We would like to sincerely thank him for the opportunity to do so. For the sake of transparency, and to not lose any context, the original issue description and title will be retained beneath this update.

Breakdown

On 2022-02-10, the AWS Provider team at HashiCorp released a new major version of the AWS Provider for Terraform (v4.0.0), which included breaking changes to the aws_s3_bucket resource. The update introduced several independent aws_s3_bucket_* resources to manipulate the configuration of S3 buckets, where these configuration options previously were arguments to the aws_s3_bucket resource itself. This issue spawned out of that change, has received quite a bit of valuable feedback, and is being used as a centralized meeting point for ongoing discussions around these changes.
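
As a minimal sketch of the shape of this change (versioning used as one example; the same pattern applies to the other split-out arguments):

# AWS provider v3.x: configuration set inline on the bucket
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"

  versioning {
    enabled = true
  }
}

# AWS provider v4.x: the same setting managed by a standalone resource
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}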

Information around these changes may be found:

Action Items

Given the amount of feedback that we received after the release, the AWS Provider team has been following this issue (as well as any other avenues of feedback we can find) and taking steps to try to alleviate some of the burden placed on operators with this release. The team continues to investigate additional avenues, and will continue to update this issue with additional information where possible. These efforts include:

Completed

Currently Investigating

  1. Tooling to help with configuration migration

    We are currently investigating potential tooling options to help operators migrate from legacy aws_s3_bucket resources to the new aws_s3_bucket_* resources. We have identified potential existing tooling and have reached out to the owner of the tool to try to help coordinate efforts to make the tool robust enough to share publicly. This is still in the early phases, so the information we can share on it is relatively limited. Despite this, we feel it's appropriate to share that we are looking into it, given the large amount of community interest.

  2. Marking deprecated arguments as optional in AWS Provider v4.x

    This change aims at splitting the difference between the behavior in v3.x and v4.x by marking deprecated arguments as optional. The goal here is to allow operators more time to transition before the deprecated arguments are fully removed in v5.x. Optional arguments would display deprecation warnings, but would not prevent Terraform from running, as was the behavior with the initial v4.x releases. Documentation will be written in order to provide as much clarity as possible around how to cope with these changes within configurations.

  3. Backporting aws_s3_bucket_* resources to AWS Provider v3.x

    This change aims to bring the new aws_s3_bucket_* resources into the v3.x series of releases in order to give operators more time to plan and execute migration to the new resources without needing to make the jump to v4.x. This would allow operators to use aws_s3_bucket resources alongside the new aws_s3_bucket_* resources, configuring buckets and updating configurations to the new standards as they see fit, without worry of deprecation notices or failed Terraform operations.

    Given that backporting resources in this way falls outside of our normal processes, we would like to be explicit around expectations. Bugs in distinct aws_s3_bucket_* resources will be backported to ensure functionality, but additional features will not be backported. Documentation will be updated as well to ensure that this expectation is clearly communicated.

Original issue information

Title: Change the S3 bucket refactor to be a new resource instead of modifying the existing one

Description

Do not change the existing aws_s3_bucket resource in v4 and instead provide a new aws_s3_minimal_bucket or similar resource that people can use if they want your changes to split out various settings into stand alone resources.

My company has literally thousands of AWS buckets configured through Terraform from hundreds of different root modules. Having to run separate terraform import commands for all of the settings you split out into separate resources per your upgrade instructions in https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-4-upgrade#s3-bucket-refactor is quite frankly an insane demand to make for users.
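
For a sense of scale, the per-bucket work that guide describes looks roughly like this (a paraphrased sketch; the exact import IDs vary by resource and configuration, see the upgrade guide for the precise syntax):

# After adding the new aws_s3_bucket_* resources to the configuration, for every bucket:
terraform import aws_s3_bucket_acl.example example-bucket
terraform import aws_s3_bucket_versioning.example example-bucket
terraform import aws_s3_bucket_server_side_encryption_configuration.example example-bucket
# ...and so on for logging, lifecycle, CORS, policy, etc., in every root module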

Right now, our only options appear to be:

  1. Pin to v3 permanently and miss out on all future enhancements/bugfixes/features that the AWS provider publishes.
  2. Spend countless hours running (or making our own tool to run) thousands of terraform import commands in hundreds of Terraform directories.

Please reconsider this change immediately or at least provide tooling to ease adoption.

New or Affected Resource(s)

  • aws_s3_bucket

Potential Terraform Configuration

References

  • #0000

Update 2/22/2022

FYI, I (@joe-a-t) chatted with HashiCorp separately and gave them permission to update the issue (including the title and this comment) if they would like the issue to more accurately reflect the work that they are planning on doing.

@joe-a-t joe-a-t added the enhancement Requests to existing resources that expand the functionality or scope. label Feb 10, 2022
@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/s3 Issues and PRs that pertain to the s3 service. labels Feb 10, 2022
@mtesch-um

Agree emphatically, this change seems very poorly thought out from the Hashicorp & terraform customer perspective - literally everyone who uses s3 buckets, which is literally everyone who uses AWS, just had their terraform configurations break out from under their feet.

Without any period of deprecation warning.

boo. Please change it back?

Perhaps if it really needs to go, it could be deprecated for the duration of 4.x.x and removed in 5.x.x?

@FernandoMiguel
Contributor

Consider using moved resource instead of import
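
For reference, a moved block looks roughly like this (a sketch only; at the time, moved handled 1:1 address changes within the same resource type, which is what the follow-up comments below discuss):

moved {
  from = aws_s3_bucket.old_name
  to   = aws_s3_bucket.new_name
}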

@jwyoungpm

jwyoungpm commented Feb 10, 2022

@FernandoMiguel pardon my ignorance but would moved actually work for that?

It seems like it's for a 1:1 move of a single resource; this change is going to involve going from a single resource to multiple resources to represent that same s3 bucket.

It'd be nice if there were some guidelines for migrating to the new resource pattern. If we have to use import for this it's going to be a huge headache.

@jwyoungpm

jwyoungpm commented Feb 10, 2022

@YakDriver I was actually afraid that was the answer here. The problem for us is that we have over 100 buckets, which will end up being a ton of re-writing and importing. The way our org works, we ask everyone to apply things via terraform in ci/cd for the most part, meaning many can't import.

I recognize and agree that we shouldn't expect this project to cater to how our org works, just providing that as an example for how big of a problem this will be for many. I would also expect many in largish orgs wouldn't necessarily be able to import.

EDIT: for the record, if moved would work for this use case, it'd be much less of a problem. Maybe a feature I need to request.

@joe-a-t
Author

joe-a-t commented Feb 10, 2022

Hey @YakDriver, the issue is not related to being unable to upgrade a single target per those instructions. The issue is the scale of how many thousands of times we would need to follow those instructions in literally hundreds of directories.

@adam-bartlett-sp

The change is asking folks to make massive changes to their Terraform resources with absolutely no notice. I can see that being needed on some pieces within the Terraform AWS provider, however the s3 bucket resource is one of the most commonly used resources and this change requires substantial rework and migration plans. It is not something that many groups can just say "okay we're doing this overnight", even in hyper-agile groups.

The provider should have deprecated the legacy resource layout over the 4.x.y release cycle to allow organizations time to evaluate and design a plan to move forward. Instead, a substantial number of Terraform users will version lock to 3.74.1 and never upgrade, because the s3 code is going to require an outlandish amount of work with no backwards compatibility (so you can't go back to 3.x.y) nor any guarantee the provider won't be restructured again sans notice.

Hashicorp's own marketing statements differ from the reality here. The first line of this post is true: this is one of the oldest and most used pieces of the provider. It, however, is not deprecating old resources; it completely obliterates the support with no notice and with limited discussion. https://www.hashicorp.com/blog/terraform-aws-provider-4-0-refactors-s3-bucket-resource

@ghost

ghost commented Feb 11, 2022

Sure it may be understood that this change may have been exposed by way of GH Issues/PRs/discussion, but I would guess a large portion of the Terraform user base doesn't spend a tonne of time in GH -- and interestingly, there was no notice at runtime about an incoming breaking change (perhaps a different issue altogether).

That said, just to add on to what @adam-bartlett-sp mentioned regarding the 4.0 announcement, the official blog post specifically says this:

aws_s3_bucket will remain as is until the next major release (5.0) of the Terraform AWS provider, giving practitioners the time and opportunity to refactor their configurations to introduce the new resources in place.

/shrug

@justinretzolk
Member

justinretzolk commented Feb 11, 2022

Hey y'all,

Thank you for taking the time to raise this issue, and for everyone who has reacted to it so far, or linked to it from other issues. It's important for us to hear this kind of feedback, and the cross-linking that several folks have done really helps to collect all of this information in one place. There are some great points being made here, and we wanted to take time to address each of them and to provide more information around some of the decisions that were made in regards to the S3 bucket refactor. We recognize that this is a major change to a very popular resource, and acknowledge that a change this large was bound to cause headaches for some. Ultimately, we hope that the information provided here will help give a better idea of why we felt it was necessary to do so.

A bit of background


The aws_s3_bucket resource is one of the oldest and most used resources within the AWS provider. Since its inception, the resource has grown significantly in functionality, and therefore complexity. Under the hood, this means that a single configuration is handled by a myriad of separate API calls. This not only adds to the difficulty of maintaining the resource by way of it being a large amount of code, but also means that if a bug is inadvertently introduced anywhere along the line, the entire resource is affected.

This monolithic approach has been noted as potentially problematic by maintainers and practitioners alike for quite some time, with a number of issues being raised going back to at least 2017 (#4418, #1704, #6188, #749; all of which were referenced in our planning for this change). In addition to these types of requests, we've seen numerous bugs over time that were either a direct result of or exacerbated by the former structure of the resource; for example #17791.

The idea of separating the resource into several distinct/modularized resources also provides the opportunity to more quickly adapt to changes and features introduced to S3, with the added benefit of easier testing to help ensure that bugs are fewer and less disruptive.

With this information in mind, we knew that changes were needed in order to ensure the long-term viability and resilience of managing the lifecycle of S3 buckets within the provider, and to address the desires of the community at large.

Paths we considered


It's important that we note that the path that we landed on for the 4.0 release was not the only option that we considered. In the interest of transparency, we'd like to discuss some of the other options that we considered, and why we ultimately decided against them.

Alternative Option 1: Remove a subset of S3 bucket resource arguments and create new S3 bucket related resources

We considered the more aggressive approach of removing a subset of the root-level arguments and introducing additional S3 bucket related resources to replace them. This would have forced practitioners to refactor their configurations immediately before proceeding. Due to the drastic impact this change would have, it was ultimately decided that this was not the right approach.

Alternative Option 2: Move existing S3 bucket resource to a new resource and refactor S3 bucket resource

This was an approach that was mentioned here already as a suggested alternate approach, and was something that we considered while planning for this major release. This option would imply that the existing schema would be maintained, but moved to a new resource in the provider, e.g. aws_s3_bucket_classic, while the aws_s3_bucket resource was moved to the new refactor with the separate resources that were ultimately introduced in 4.0. This would create two separate resources for managing S3 buckets in different manners, with future support/backporting for the aws_s3_bucket_classic resource ending once the major version was released.

Going this route would mean that practitioners would have needed to either adopt the new aws_s3_bucket and associated resources standards (as is needed with the path we ultimately chose) or go the route of moving all existing instances of aws_s3_bucket to aws_s3_bucket_classic in the state. The latter would involve the use of terraform state rm and terraform import.
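
For illustration, the state surgery described for that option would have looked roughly like this per bucket (aws_s3_bucket_classic being the hypothetical resource name from the paragraph above):

terraform state rm aws_s3_bucket.example
terraform import aws_s3_bucket_classic.example example-bucket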

Adopted Option: Deprecate subset of S3 Bucket resource arguments, marking them as Computed (read-only) until the next major milestone, and create new S3 bucket related resources

The approach we decided on was the least impactful of the options, given the downsides of the approaches outlined above, and was settled on after discussing the options with a number of practitioners, including large enterprise customers with significant Terraform adoption. Much like alternative option 2, and as has been noted here, this does still introduce the need to import resources and make adjustments to configurations long-term, but was intended to allow as much functionality to be retained as was possible while still introducing the necessary refactor. This approach also allowed us an opportunity that option 2 did not: the ability to leave the "deprecated" attributes in place until version 5.0.0, allowing for a longer deprecation period.

With the above in mind, we want to take the opportunity to address the use of "deprecate" here. We noted in the pinned issue detailing the changes:

for the purpose of this release, this equates to marking previously configurable arguments as read-only/computed and with a Deprecated message

This was not reflected in the blog post about the release (something we're working to address), and we recognize that this doesn't necessarily reflect what "deprecated" means in the software world. The thought here is that this would not break configurations, but rather that there would be no drift detection for computed attributes.

We will update the blog post to reflect the language in our pinned issue and clear up any inconsistencies.

Migration Tooling


Another consideration that we took into account whilst working on this, that has been mentioned here as well, is the idea of tooling to help with the transition between versions. In speaking with users/practitioners while trying to determine the "correct" course of action, the general feedback we received was that the thought of using some type of script to modify large state files, particularly those stored in Terraform Enterprise or Terraform Cloud, was an undesirable path. Manipulating state outside of Terraform has inherent risks associated with it, including loss of tracked resources, which may lead to loss of money, time, or even compromises to the security of infrastructure. In addition, migration is a two part problem; the state and the configuration. This multi-part problem would have necessitated a significantly more complex set of tooling, leading to even greater possibilities for errors. Due to the significant risk associated with it, this ultimately led to the decision to not include migration tooling.

Given the feedback we’ve received so far from the community after the release, we’re beginning to discuss whether or not this is a position that we should reconsider. It’s too early in those discussions to say what the outcome will be, but we feel it’s important to note that we are not shutting down the idea based solely on our preconceived notions. We are planning to spend some time next week investigating new approaches to migration tooling and will follow up with the results of our investigation.

Moving forward


We recognize that this was a big change and there are, understandably, questions about moving forward. The first thing we need to address is the request to roll these changes back. We feel strongly that despite the disruptive nature of a refactor as large as this, it was a necessary change given the positives outlined above. While this is the first time that we've made such a substantial change to such a pivotal resource, there are other resources that have similar levels of bloat (aws_db_instance comes to mind) that may need similar refactors in the future. We intend to take everything we learn from this release in order to help inform how we may make those changes in the future, so I'd like to reiterate that this feedback is extremely important to us.

In the immediate future, if this change is disruptive, we would strongly recommend that you take the opportunity to pin the provider version to a version prior to 4.0.0 until you're able to take the actions necessary to make the transition. This will allow for functionality to remain as is while you take the time to do any refactoring and importing that may be necessary with this major change.

Lessons for us


We don't believe it would be fair to place all of the burden of the feelings represented here on those making the comments, so it's important for us to openly reflect on ways we could have done better by the community. We'd like to take the opportunity to do so here, and also to open the floor for any additional suggestions that anyone may have. It's never our goal to disrupt anyone's workflow, no matter how important the change is.

Advance Notification

Perhaps the most clear feedback we've seen so far is around the communication of these breaking changes. There were a number of things that we did before and just after the release to try to communicate some of these things; for example, we laid out plans for this release in an issue that has been pinned in the repository since August 2021, released a blog post detailing the changes, posted on Discuss to try to circulate those details, released an upgrade guide to try to help with the transition, and updated several Learn guides (Host a Static Website with S3 and Cloudflare and Build and Use a Local Module) with updated configurations.

Given the feedback that we've received so far, it seems that there may be opportunities to enhance these efforts. Because there is not a mailing list for the provider, if there is a place outside of GitHub or Discuss that the community would like to receive notifications at, we'd love to hear about it. So far, we've begun discussing a few potential options that we'd be interested in receiving feedback on. These potential options include posting to Reddit, hosting pre-release community office hours with our Developer Advocates, posting a notice on the documentation in the Terraform Registry, or perhaps even a pre-release blog post much like we did when we deprecated support for 0.11 in providers.

@jwyoungpm

@justinretzolk first of all, thank you for the summary and feedback on the design decisions.

The actual decision to decompose the aws_s3_bucket resource, IMO, is totally sound and I can see why it's desirable and also why you chose to do it how you did. Just speaking from my personal experience, I think 2 things would have really helped me and my team:

  1. For whatever reason, I just happened to miss all of those communications - part of that is absolutely on me but given it has happened to others, I would LOVE to have either a mailing list or even better, would love to see deprecation messages in terraform itself. Can't speak for everyone (I know many ignore them) but my team usually does pay attention to those as ignoring them means a lot more work for us later.

  2. As mentioned in this comment, the actual verbiage announcing the release doesn't appear correct - I discovered this issue because someone at my company who had left the provider version unpinned got errors RE s3 despite not actually changing their resource. If that is actually a bug, I am happy to open a bug report.

@FernandoMiguel
Contributor

Given the feedback that we've received so far, it seems that there may be opportunities to enhance these efforts. Because there is not a mailing list for the provider, if there is a place outside of GitHub or Discuss that the community would like to receive notifications at, we'd love to hear about it. So far, we've begun discussing a few potential options that we'd be interested in receiving feedback on. These potential options include posting to Reddit, hosting pre-release community office hours with our Developer Advocates, posting a notice on the documents in the provider registry, or perhaps even a pre-release blog post much like we did when we deprecated support for 0.11 in providers.

I have a proposal.
Release pre-release binaries ahead of the final version as a smoke-test release and see how advanced practitioners handle the changes and give initial feedback.
That would avoid everyone taking a hit all of a sudden.

@mtesch-um

We don't believe it would be fair to place all of the burden of the feelings represented here on those making the comments

The concern is probably less about the abstract burden of feelings than it is about the very concrete sudden breaking changes.

@joe-a-t 's suggestion of a new resource aws_s3_minimal_bucket for the new functionality instead of a change in the interface of aws_s3_bucket would have caused less pain to your customers (as well as the community - but we happen to pay you so I'm claiming customer status!)

@maryelizbeth
Contributor

maryelizbeth commented Feb 11, 2022

@jwyoungpm

  1. As mentioned in this comment, the actual verbiage announcing the release doesn't appear correct - I discovered this issue because someone at my company who had left the provider version unpinned got errors RE s3 despite not actually changing their resource. If that is actually a bug, I am happy to open a bug report.

Thanks for surfacing this again. We're working on updates to the upgrade guide and announcement blog post. That part of the post will be updated to reflect how the resource behavior has changed (configurable -> read only/computed) and how that can impact configurations. I'll follow up here with a link to the new content, when it has been updated.

@ghost

ghost commented Feb 11, 2022

First and foremost, thanks for the detailed update @justinretzolk -- it certainly helps us all understand the situation better.

I am curious if there may be a better way to address the migration path/refactor recommendations from the provider side. In our particular situation, we're looking at several thousand terraform import operations and who knows how many code changes to support this. I'm not exaggerating when I say that without going down the road of automating the migration procedure (perhaps some complex code-related script + automated terraform import calls?), we're looking at countless hours to upgrade all of the modules that handle the S3 bucket configurations across the organization. For larger enterprises who rely on TF to manage the S3 bucket configuration, they're very likely in the same boat.

While I understand there may be some added complexity to this solution, would it not have been better for the vast number of Terraform users to support both S3 bucket resource configurations and introduce the dedicated resource types alongside them, effectively allowing graceful migration during the v4 provider version as the blog post advised? One particular example is the S3 bucket replication configuration resource -- previously configured at the bucket resource level, a dedicated resource was already added (e.g. to support two-way replication).

More importantly, perhaps something was missed regarding the use of the "read-only" attribute? As reported in several other GH issues, the TF plan/apply errors out when the configuration values themselves haven't changed (actual vs. state), contrary to the idea of it being a "read-only" attribute. Should it not just throw up a read-only/deprecation warning rather than error out altogether? Perhaps this is something worth looking into as an emergency breaking "bug" rather than just telling people "pin to v3.x and begin refactoring work before switching to v4.x"?

@LukaszRacon

@justinretzolk reiterating what @tr-mdaize said.

The best option in the current situation would be to release a 4.x version of the aws_s3_bucket resource that accepts legacy attributes as read-only. When there are no value changes, the provider should issue a deprecation warning. In case of a value change, the provider should issue an error and force the user to use the new s3 resources. Obviously this is not ideal, but it gives users a way to use 4.x without an initial major rewrite.

A bigger issue is all the third-party modules that use the s3 resource. From the consumer side you either have to fork modules or wait for a new version of every module you use to support 4.x. Obviously none of the third-party modules were ready for this change. From a module developer point of view, one might have to maintain two versions of a module (to support users on 4.x and 3.x).

@theherk
Contributor

theherk commented Feb 14, 2022

It seems to me that the imports are not needed, but maybe it is only in some cases. I have been able to update many to the new configuration without any imports; it just adds the new resources. Maybe I'm causing issues, but I haven't lost any data, and things seem to be okay.

@n1ngu

n1ngu commented Feb 14, 2022

It seems to me that the imports are not needed, but maybe it is only in some cases. I have been able to update many to the new configuration without any imports; it just adds the new resources. Maybe I'm causing issues, but I haven't lost any data, and things seem to be okay.

@theherk I was about to comment exactly this. I feel a bit reckless disobeying the recommendation from the migration guide. Quoting: It is then recommended running terraform import on each new resource to prevent data loss. But what is the basis for this? Naive terraform applies are working just fine both for S3 bucket ACLs and ECS cluster capacity providers! 🤷‍♀️

@jwyoungpm

jwyoungpm commented Feb 14, 2022

@theherk did your plans note any changes when you did this?

I'd assume a lot of drop-then-applies, which might be fine for some things but not others.

I'm also surprised that the apply didn't fail because the configuration item created by the resource should have already existed. If not, that actually might be a different bug - apply should fail when you try to create things that exist but aren't tracked in state.

@theherk
Contributor

theherk commented Feb 14, 2022

@theherk did your plans note any changes when you did this?

Yes, it noted that the new resources were added. In my case: acl, versioning, encryption, and cors rules in varying combinations. But it didn't disable those to make it happen. It just created the resources in the state. I didn't notice any issues at all. I wish there were some specificity to what drives the recommendation. There must be some condition that triggers data loss, or I'm just fearful about nothing.

@jwyoungpm

I could see data loss if it wants to recreate the bucket or turn versioning off and back on - can't remember for sure, but it might delete all current old file versions if you turn it off.

Agree the scenarios for data loss would be nice to know because if a lot of things don't require import, this becomes much easier to deal with.

@theherk
Contributor

theherk commented Feb 14, 2022

I could see data loss if it wants to recreate the bucket or turn versioning off and back on - can't remember for sure, but it might delete all current old file versions if you turn it off.

I don't think you can disable versioning, but you could suspend it. Even then though, it isn't a data loss scenario, because no objects are removed. I'm pretty sure unless the bucket goes away configuration changes should not trigger any data loss. Maybe these recommendations are just risk averse avoidance of some state conditions that could happen, but probably don't.

I figure, if I just read the plan and verify, it should be fine. 🤞 ymmv

@justinretzolk
Member

While @maryelizbeth works to update the blog post with additional information around expected behavior, I wanted to come back and touch on the rationale behind not introducing a new resource (for the sake of continuity with the previous comments, we'll call it aws_s3_minimal_bucket) and leaving the existing aws_s3_bucket resource in place.

Going this route would mean that we would have two resources - aws_s3_bucket, which would then be in a "planned to be deprecated in the next major release" status, and a new aws_s3_minimal_bucket resource, which would have been in the "this is the resource you'll use once the other is fully deprecated" state. The issue here is that it creates a new fracture that would need to be settled up in the future after aws_s3_bucket was deprecated, namely that:

  • Anyone using aws_s3_bucket would need to configure and import all of the same resources that are affected by the current 4.0.0 release, but at a later time.
  • Anyone who adopted the new aws_s3_minimal_bucket during the deprecation period would need to do a terraform state rm and terraform import on any resources using the aws_s3_minimal_bucket naming scheme once the cutover to aws_s3_bucket happened.

With that in mind, we would find ourselves in a similar position but later in time, and with the added complexity of an additional resource that may or may not have been adopted by many.

This leaves the question around why the new resource would need to be renamed. In the past few months, the provider went through another major overhaul, albeit this one a bit more transparent to everyday users. This change involved aligning the repository with service-level packages that line up with the AWS Go SDK (additional details may be found in #20431 and #20000). Part of this refactoring effort involved standardizing on naming for resources, data sources, and functions. Creating a new resource of aws_s3_minimal_bucket would fall outside of this naming scheme, and while we don't want to make things harder for practitioners based solely on naming patterns, it is important for the long-term maintainability of the project to avoid special one-offs wherever possible.

I hope this provides a bit more clarity as to why we decided against that route. Additional information around the current expected behavior of the resource will be contained in the aforementioned blog post update. If there are any additional lingering questions or feedback in the meantime, please continue to comment and react to this issue and the comments within.

@joe-a-t
Author

joe-a-t commented Feb 14, 2022

@justinretzolk What about having aws_s3_bucket remain an option long term but with the understanding that it will not get any new features or improvements and should not be used for new bucket configurations. Then you can focus all future development efforts on the aws_s3_minimal_bucket resource (and associated configuration resources) but don't break everyone's Terraform configurations.

If you really wanted to stick to the naming convention for resources moving forwards, you could also look at having an upgrade tool that switches all current usages of aws_s3_bucket to be aws_s3_legacy_bucket or something so that the new resource could get the aws_s3_bucket name instead of needing a separate name like aws_s3_minimal_bucket.

I totally get the desire to avoid confusion for project maintainers by doing a one-off, but a small bit of additional work for the provider maintainers has to be balanced against the thousands of hours of toil the change is creating for thousands of users across the community.

@joe-a-t
Author

joe-a-t commented Apr 26, 2022

For reference, hashicorp/terraform#3116 is the Terraform issue for allowing parameterized lifecycle rules

@dan-petty
Contributor

Wow, managing an s3 bucket resource is a complex task, so instead of handling the complexity within the provider, the complexities of the configuration are pushed off on everyone who has to work with the resource. BRILLIANT!!!

@gauravdhmj

I would like to convert the below Terraform script to v4.0.0:

resource "aws_s3_bucket" "bucket" {
bucket = local.bucket_name
dynamic "logging" {
for_each = local.logging
content {
target_bucket = logging.value["target_bucket"]
target_prefix = logging.value["target_prefix"]
}
}
}
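
For anyone landing here with the same question, a rough v4-style equivalent (a sketch only, assuming local.logging is a map with at most one entry, as implied by the original dynamic block) moves logging into its own resource:

resource "aws_s3_bucket" "bucket" {
  bucket = local.bucket_name
}

# Sketch: one aws_s3_bucket_logging per entry in local.logging (a bucket supports at most one).
resource "aws_s3_bucket_logging" "bucket" {
  for_each = local.logging

  bucket        = aws_s3_bucket.bucket.id
  target_bucket = each.value["target_bucket"]
  target_prefix = each.value["target_prefix"]
}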

@minamijoyo
Contributor

Hi all,

I’m an author of some third-party tools for Terraform and currently working on a refactoring tool for Terraform.
https://github.com/minamijoyo/tfedit

The initial goal of this project is to provide a way for bulk refactoring of the aws_s3_bucket resource required by breaking changes in AWS provider v4. Even though tfedit still has many limitations, it's clearly better than nothing and ready to use. If you are still struggling with the AWS v4 upgrade, give it a try. For details, see the README.

@oxfordmale

@justinretzolk The way this release was handled raises serious concerns regarding the lack of engineering best practices in the AWS Provider team at HashiCorp. This was a major change; however, it seems to have been really poorly tested given the trivial bugs that were reported in 3.x (see one of these bugs below). We basically had to upgrade to provider 4.x directly, as 3.x is basically unusable with regard to S3 buckets.

There was no immediate urgency behind this change, and therefore this release should have been properly regression tested, at least covering 90% of the basic S3 functionality. Some of the trivial bug reports raise serious questions about what level of regression testing the AWS Provider team at Hashicorp is performing. The real question is whether this change would have been needed if Hashicorp actually did some basic regression testing.

#23106 (comment)

@orf
Contributor

orf commented Aug 26, 2022

I don’t think that’s entirely fair. The change was poorly thought through, this is true, but when the community brought this up they reacted well and backported the changes to allow a migration path to version 4. It worked flawlessly for us on several hundred buckets.

I guess there is an issue with the back port, but the lifecycle rules fix is simple enough and once you’re on 4.x the issue is no longer there.

Splitting the resources out like this is definitely cleaner and has made our modules a lot more readable, with a lot fewer "dynamic" blocks to emulate the separate resources.

@oxfordmale

oxfordmale commented Aug 26, 2022

The requirement for a lifecycle rule is nothing more than a quite serious regression bug. It prevents changes to server_side_encryption_configuration, not an unreasonable requirement. It raises genuine concerns that Hashicorp hasn't thought this change through properly and isn't doing a sufficient level of regression testing to capture such mistakes.

Being more cynical, I am not sure they responded to the community rather than to their paying customers, who must have been rightly furious that backwards compatibility was broken. As a free community user, I can only leave constructive feedback here on GitHub. As a paying customer, I would have summoned my Account Manager for a serious talking-to.

#23106 (comment)

@tmccombs
Contributor

Regarding importing:

There is, however, a downside to this approach in that the diff will show each of the new resources as needing to be created. This potentially large diff will make it more difficult to read the changes that will actually occur,

This is a pretty big downside. Consider the case where I am just refactoring the resources to use the new separate resources, and want to verify that this doesn't result in any changes. I now have to either manually review every one of these new resources (and I may have hundreds of buckets that need to be reviewed), or import all of the new resources (which must be done one at a time, because there isn't a bulk import feature). Both of these options are pretty tedious.
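
For what it's worth, the one-at-a-time limitation can be scripted around; a minimal sketch (the resource names, buckets, and buckets.txt file are placeholders, not an official tool):

#!/usr/bin/env bash
# Import the split-out resources for a list of buckets.
# Assumes the configuration already declares aws_s3_bucket_acl.<name>, aws_s3_bucket_versioning.<name>, etc.
set -euo pipefail

while read -r name bucket; do
  terraform import "aws_s3_bucket_acl.${name}" "${bucket}"
  terraform import "aws_s3_bucket_versioning.${name}" "${bucket}"
done < buckets.txt   # each line: <resource name> <bucket name>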

@minamijoyo
Contributor

Hi @tmccombs, good point!

The tfedit (a.k.a. the unofficial AWS provider v4 upgrade tool) supports not only rewriting aws_s3_bucket resources in your Terraform configuration but also generating a migration file for importing new separated aws_s3_bucket_xxx resources in the tfmigrate HCL format. This allows you to verify the upgrade results safely without updating the remote state.

https://github.com/minamijoyo/tfedit
https://github.com/minamijoyo/tfmigrate

@AlexeySalmin

AlexeySalmin commented Nov 15, 2022

In the end it's not really clear whether you have to do a refactor or not, and if so, what the recommended way to do it is.

The doc says that the S3 Bucket Refactor "only applies to v4.0.0 through v4.8.0", but the warning is still there as of v4.39.0.

@m1mohamad

We have drift CI jobs to check Terraform drift as below:
terraform plan -no-color -out=${CI_JOB_NAME}_plan -lock=false
We've been receiving the following errors after adding S3 encryption to the existing bucket (this setting recently started showing as drift):

Error: no schema available for aws_s3_bucket_server_side_encryption_configuration.static_sites["www.xyz.com"] while reading state; this is a bug in Terraform and should be reported
Error: no schema available for aws_s3_bucket_server_side_encryption_configuration.gitlab-runner-artifacts while reading state; this is a bug in Terraform and should be reported
Error: no schema available for aws_s3_bucket_server_side_encryption_configuration.static_sites["www.abc.com"] while reading state; this is a bug in Terraform and should be reported

I upgraded the provider from 3.56 to 3.76.1, then 4.20.0, on prod and stage; only prod fails to run the plan job as above.
I was unable to find why it's failing; all other directories (Atlantis projects) are running fine with similar or higher provider versions.

@justinretzolk
Member

Hi everyone 👋 We’d like to thank everyone again for your feedback in regards to the refactoring of the aws_s3_bucket resource with version 4 of the provider. We recognize that there were things that could have been done to make the upgrade process smoother for everyone; particularly those of you who utilize S3-related resources heavily. We’ve captured a great deal of valuable information from this thread that we intend to keep in mind for future releases.

With that said, given the time that has passed, and with the upcoming release of version 5 of the provider, we feel that it's time to unpin and close this issue. We welcome feedback on the announcement post, and are looking forward to bringing some exciting, helpful changes to the provider.

@justinretzolk justinretzolk unpinned this issue Mar 15, 2023
@joe-a-t
Author

joe-a-t commented Mar 15, 2023

Hey @justinretzolk, I didn't see anything related to S3 bucket changes in #29842. Does that mean that S3 bucket resources will be unchanged from the current behavior that v4 has in the upcoming v5 release? Or are you intending to do the deprecation/changes that had previously been envisioned for the v3 to v4 upgrade as part of the v4 to v5 upgrade, and the announcement just doesn't have those details included in the draft yet?

@justinretzolk
Member

Hey @joe-a-t 👋 Thanks for the follow up question here! For the AWS provider v5 release, we've decided against removing the deprecated arguments for the S3 bucket resource, as we imagine there's still quite a number of folks who have yet to make the transition. At the time of writing, we plan on removing those deprecated arguments in v6 instead.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 28, 2023