Change SKU of [azurerm_databricks_workspace] without recreating the workspace. #9124

Closed
w0ut0 opened this issue Nov 2, 2020 · 3 comments · Fixed by #9541

Comments

w0ut0 commented Nov 2, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

The documentation states that changing the Databricks SKU requires the resource to be recreated. However, the Azure documentation describes a way to change the SKU without redeploying the workspace: create a new deployment with the same name, subscription, and resource group, but a different SKU. This should not cause the workspace to be destroyed and recreated.

New or Affected Resource(s)

  • azurerm_databricks_workspace

Potential Terraform Configuration

resource "azurerm_databricks_workspace" "example" {
  name                = "databricks-test"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "standard"

  tags = {
    Environment = "Production"
  }
}

References

neil-yechenwei (Contributor) commented

Thanks for opening this issue. After investigating, it seems the transitions "trial -> standard -> premium" and "premium -> standard" can be updated directly. Downgrading from "standard" to "trial" does not support a direct update, so that case deserves ForceNew. I've submitted a fix for this issue. Hope it helps.
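
To make those transitions concrete, here is an annotated sketch of the configuration from the issue (the comments restate the findings above; the chosen sku value is just an example, not verified provider behavior):

resource "azurerm_databricks_workspace" "example" {
  name                = "databricks-test"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location

  # Per the investigation above: "trial" -> "standard" -> "premium" and
  # "premium" -> "standard" can be applied as in-place updates; only
  # downgrading to "trial" should still force a new resource.
  sku = "premium"

  tags = {
    Environment = "Production"
  }
}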

tombuildsstuff modified the milestones: v2.40.0, v2.41.0 Dec 10, 2020
jackofallops modified the milestones: v2.41.0, v2.42.0 Dec 17, 2020
tombuildsstuff modified the milestones: v2.42.0, v2.43.0 Jan 7, 2021
katbyte pushed a commit that referenced this issue Jan 13, 2021
…w resource unless it is required (#9541)

Co-authored-by: kt <[email protected]>

fixes #9124
ghost commented Jan 14, 2021

This has been released in version 2.43.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 2.43.0"
}
# ... other configuration ...
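
On Terraform 0.13 and later, the same pin can also be written with a required_providers block (an equivalent form, not part of the bot's original comment):

terraform {
  required_providers {
    azurerm = {
      # Pins the provider to the 2.43.x series, which contains this fix.
      source  = "hashicorp/azurerm"
      version = "~> 2.43.0"
    }
  }
}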

ghost commented Feb 12, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked as resolved and limited conversation to collaborators Feb 12, 2021