An additional resource group is created when deploying AKS #3

Closed

OguzPastirmaci opened this issue Oct 25, 2017 · 111 comments

@OguzPastirmaci

When deploying an AKS cluster, an additional resource group is created.

The resource group that I created and deployed AKS to:

az resource list -g oguzp-aks
Name       ResourceGroup    Location    Type                                        Status
---------  ---------------  ----------  ------------------------------------------  --------
oguzp-aks  oguzp-aks        westus2     Microsoft.ContainerService/managedClusters

The resource group that was created automatically:

az resource list -g MC_oguzp-aks_oguzp-aks_westus2
Name                                                                 ResourceGroup                   Location    Type                                          Status
-------------------------------------------------------------------  ------------------------------  ----------  --------------------------------------------  --------
agentpool1-availabilitySet-14710316                                  MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/availabilitySets
aks-agentpool1-14710316-0_OsDisk_1_fff6a42716dd4dc0a1032afc1cb67091  MC_OGUZP-AKS_OGUZP-AKS_WESTUS2  westus2     Microsoft.Compute/disks
aks-agentpool1-14710316-1_OsDisk_1_ff41571a0143470bbfc3b62653df5c2c  MC_OGUZP-AKS_OGUZP-AKS_WESTUS2  westus2     Microsoft.Compute/disks
aks-agentpool1-14710316-2_OsDisk_1_eec530dff34d4bdf80bbac8e74f5e07d  MC_OGUZP-AKS_OGUZP-AKS_WESTUS2  westus2     Microsoft.Compute/disks
aks-agentpool1-14710316-0                                            MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/virtualMachines
aks-agentpool1-14710316-0/cse0                                       MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/virtualMachines/extensions
aks-agentpool1-14710316-0/OmsAgentForLinux                           MC_OGUZP-AKS_OGUZP-AKS_WESTUS2  westus2     Microsoft.Compute/virtualMachines/extensions
aks-agentpool1-14710316-1                                            MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/virtualMachines
aks-agentpool1-14710316-1/cse1                                       MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/virtualMachines/extensions
aks-agentpool1-14710316-1/OmsAgentForLinux                           MC_OGUZP-AKS_OGUZP-AKS_WESTUS2  westus2     Microsoft.Compute/virtualMachines/extensions
aks-agentpool1-14710316-2                                            MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/virtualMachines
aks-agentpool1-14710316-2/cse2                                       MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Compute/virtualMachines/extensions
aks-agentpool1-14710316-2/OmsAgentForLinux                           MC_OGUZP-AKS_OGUZP-AKS_WESTUS2  westus2     Microsoft.Compute/virtualMachines/extensions
aks-agentpool1-14710316-nic-0                                        MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Network/networkInterfaces
aks-agentpool1-14710316-nic-1                                        MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Network/networkInterfaces
aks-agentpool1-14710316-nic-2                                        MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Network/networkInterfaces
aks-agentpool-14710316-nsg                                           MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Network/networkSecurityGroups
aks-agentpool-14710316-routetable                                    MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Network/routeTables
aks-vnet-14710316                                                    MC_oguzp-aks_oguzp-aks_westus2  westus2     Microsoft.Network/virtualNetworks
@OguzPastirmaci changed the title from "2 resource groups are created when deploying AKS" to "An additional resource group is created when deploying AKS" on Oct 25, 2017
@anhowe

anhowe commented Oct 25, 2017

@OguzPastirmaci this is by design. This second resource group is the "cluster resource group" and is used to represent and hold the lifecycle of resources underneath it. What is the impact you are seeing by having the second resource group?

@OguzPastirmaci
Author

I guess there isn't a significant impact; it's just that the experience is different from ACS. I had created an ACS cluster and didn't get a second resource group, so I wasn't expecting a second resource group that I have no control over, naming included.

Would that mean that I won't be able to deploy an AKS cluster if I have access rights only to a resource group in a subscription and not the subscription itself?

@rspaulino

@anhowe I agree with @OguzPastirmaci. For testing and dev it's probably OK, but in production at a large company you want control over the resources you have, maybe even tagging some resources for billing or department tracking.

Other Azure managed services, such as managed disks or SQL Server, don't behave this way, so I wonder what made Microsoft decide to take this route.

@SenthuranSivananthan

SenthuranSivananthan commented Oct 26, 2017

The impacts I see with my customer are:

  1. Resource Groups are created with billing tags for charge back. The new cluster RG doesn't contain this information.

  2. Customer has a naming convention for the RGs. The new RGs don't conform with their standards.

  3. Users don't have Owner permissions on the subscription, therefore the newly created RG is not visible to the user that created the cluster. This means get-credentials fails with read permissions. Error:

The client '[email address]' with object id '[guid]' does not have authorization to perform action 'Microsoft.ContainerService/managedClusters/read' over scope '/subscriptions/[guid]/resourceGroups/aks-test1/providers/Microsoft.ContainerService/managedClusters/app1'.

Ideally, we can specify the cluster resource group so that the permissions and tags can be pre-created.

@seanknox
Contributor

Resource Groups are created with billing tags for charge back. The new cluster RG doesn't contain this information.

Sorry if this is obtuse, but can you just add the tags?

Customer has a naming convention for the RGs. The new RGs don't conform with their standards.

Is there an operational impact with this?

Users don't have Owner permissions on the subscription, therefore the newly created RG is not visible to the user that created the cluster.

Haven't seen this before. The sub that created the managed cluster should be an owner of the node pool agent group (e.g. MC_cluster_name...).

cc @sgoings @slack in case I'm missing something.

@SenthuranSivananthan

Yes, the tags can be added via script so it won't be too big of a workaround. Would be nice if there's a way to override it w/ another switch (i.e. --cluster-resource-group).

No operational impact with the RG name; it just deviates from the norm that they've defined.

@anhowe

anhowe commented Oct 26, 2017

@SenthuranSivananthan great idea on the custom creation, I'll submit a feature request for this.

@evillgenius75

I think the bigger issue is around service principals. Often the user deploying will not have rights on the subscription, or be able to create an SP that has subscription-level scope. SPs are often handed out by a central IT admin or security team in large orgs, and those are also scoped to just the RG needed, not the entire subscription. I think MSI is the answer to all of this, but this is something I have heard from the preview users of ACS RPv2, which also creates another RG for resources, and it is not liked by those that have already set governance models on their subscriptions.

@SenthuranSivananthan

@evillgenius75, SPs also have a lifecycle and the secrets are regenerated based on security policies so we need to enable the ability to change SPs & secrets as well. We can take this one in another thread later.

@weinong
Contributor

weinong commented Oct 26, 2017

Regarding the SP's scope, you will not need to apply anything at all. The RP will grant the Contributor role on just the RG we create. This change will roll out in a week or two.

@ElectricWarr

It's been almost two weeks, can anyone confirm if/when this change will be applied?

@marrobi
Contributor

marrobi commented Nov 15, 2017

I think being able to specify a custom name would be a good solution, as suggested by @anhowe. Most organizations I work with use naming conventions for resource groups, such as AppName-EnvironmentName-Infra, etc.

@yeniklas

I need to create a public IP within the same resource group. Is there a way to fetch the resource group name that a certain cluster is using? And yes, it would be very nice to be able to specify the cluster resource group explicitly.

@slack
Contributor

slack commented Dec 20, 2017

@smoerboegen we do not include the node resource group in the managedCluster API, but that would be a great addition. I'll add that feature to our backlog.

@rwaal

rwaal commented Dec 26, 2017

I currently work at a client that has a strict naming convention for resource groups. So the ability to control the name of the cluster resource group is a must-have.

Additionally, it would be nice to be able to have all AKS cluster resources in one single resource group: the AKS cluster as well as all the other cluster resources, such as the VMs and network resources.

@derekperkins
Contributor

I also don't understand why the resources are put into a separate group. If I'm concerned that the AKS managed objects are going to clutter my namespace, then I can choose an empty resource group. Is the worry that people are going to edit/delete resources that AKS is relying on, then blame AKS?

@Addisco

Addisco commented Feb 8, 2018

I just had the issue that the name of the additional resource group got too long.
I deployed the AKS cluster in a group whose name is 35 characters long, and the name of the cluster resource is 36 characters long, resulting in 86 characters (including the MC_ prefix and the _<regionname> suffix) for the resource group name. As a result, the deployment of the availability set fails with the following error:

The entity name 'resourceGroupName' is invalid according to its validation rule: ^[^_\W][\w-._]{0,79}(?<![-.])$.

Nevertheless, the MC_* resource group was created.

@dmiyamasu

This enhancement request is preventing us from making the migration from ACS to AKS.
Please take into consideration when prioritizing the backlog.

@mallikharjunrao

mallikharjunrao commented Apr 24, 2018

Hi,

I'm facing a similar issue. Azure creates the additional resource group, and when I try to get the credentials I get the exception below on both the original resource group and the Azure-created resource group. Kindly advise.

$ az acs kubernetes get-credentials --resource-group=devops --name=zoom-k8s
The Resource 'Microsoft.ContainerService/containerServices/zoom-k8s' under resource group 'devops' was not found.
$ az acs kubernetes get-credentials --resource-group=MC_devops_zoom-k8s_westeurope --name=zoom-k8s
The Resource 'Microsoft.ContainerService/containerServices/zoom-k8s' under resource group 'MC_devops_zoom-k8s_westeurope' was not found.

@slack
Contributor

slack commented Apr 25, 2018

@mallikharjunrao make sure you use the aks subcommand when pulling credentials for an AKS cluster. From the output, it looks like you are running az acs ...
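
For example, assuming the resource group and cluster name from the commands above, the AKS equivalent would be something like:

az aks get-credentials --resource-group devops --name zoom-k8s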

@gugu91

gugu91 commented May 9, 2018

Is there any plan to work on this?

Also, can the second RG name be safely inferred?
Say, for example, that I create a cluster called my-aks in my-aks-rg in westeurope - will the generated group always be MC_my-aks-rg_my-aks_westeurope?

As long as the enhancement is not in place, a CLI command to retrieve this would be handy (if there isn't one already).

@slack
Contributor

slack commented May 10, 2018

@gugu91 we do not yet have plans to remove or hide the second resource group; it is something we are investigating. However, for folks who do need to safely acquire the additional resource group, we have added a new property nodeResourceGroup to the 2018-03-31 API:

$ az resource show --api-version 2018-03-31 \
  --namespace Microsoft.ContainerService \
  --resource-type managedClusters \
  -g <RESOURCEGROUP> -n <RESOURCENAME> -o json \
| jq .properties.nodeResourceGroup
"MC_foo-aks-centralus_foo-aks-centralus_centralus"
$
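
On newer CLI versions the same value should also be retrievable directly from the aks command group (a shorter sketch, assuming an up-to-date az CLI):

$ az aks show -g <RESOURCEGROUP> -n <RESOURCENAME> --query nodeResourceGroup -o tsv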

@gugu91

gugu91 commented May 10, 2018

@slack Thanks for the prompt response. I am inferring though that this should be used as a last resort. What is the suggested way of retrieving info about the autogenerated resource group? Can I simply use kubectl in order to, for example, retrieve the public IP of a load balancer?
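
(For example, would something like the following be enough to read the EXTERNAL-IP column? The service name my-service is just a placeholder.)

kubectl get service my-service -o wide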

@leonardocastanodiaz

I am facing the same issue with the RG and naming conventions. Has somebody found a workaround?
Many thanks

@peterwy01

Same issue here. Please add the possibility to set the name of the node resource group when creating the AKS cluster.

@jnoller
Contributor

jnoller commented May 10, 2019

This is in the aks-preview CLI

Tag inheritance and passing in pre-existing RGs will be further enhancements
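
If you want to try it, the rough shape is the following; a sketch that assumes the aks-preview extension is installed and that the chosen node RG name does not already exist:

az extension add --name aks-preview
az aks create --resource-group my-aks-rg --name my-aks \
  --node-resource-group my-aks-nodes-rg \
  --node-count 2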

@ahsan3216

Hi - I believe there is also no way to provide node-resource-group in the Ansible azure_rm_aks module. It fails when there is a resource policy defined to add mandatory tags. Any workarounds?

@spaelling

This is in the aks-preview CLI

Tag inheritance and passing in pre-existing RGs will be further enhancements

When will we see this? It is fairly annoying having to disable a policy that enforces certain tags on an entire subscription just to create the AKS cluster.
If the resource group is completely empty and has no conflicting tags it should be ok.

@jluk
Contributor

jluk commented Jul 22, 2019

We recently released functionality that will pass tags set on your AKS RG through to the underlying IaaS RG which the service creates. Could you try a deployment with the policy enforced tags on the AKS RG at cluster create time to see if it passes your policy setting?

A quick write-up on this release was included here:
https://github.com/Azure/AKS/releases/tag/2019-07-01
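
A rough sketch of that flow, assuming a policy that requires a dept tag (the group, cluster, and tag names below are placeholders):

az group create --name my-aks-rg --location westus2 --tags dept=IT costcenter=9999
az aks create --resource-group my-aks-rg --name my-aks --node-count 2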

@spaelling

We recently released functionality that will pass tags set on your AKS RG through to the underlying IaaS RG which the service creates. Could you try a deployment with the policy enforced tags on the AKS RG at cluster create time to see if it passes your policy setting?

A quick write-up on this release was included here:
https://github.com/Azure/AKS/releases/tag/2019-07-01

I just did that today. Both the AKS RG and the node RG should have the same tags, but that part did not work. I did the deployment using an ARM template - does it only work with az aks create (the preview version, I presume)?

@jluk
Contributor

jluk commented Jul 22, 2019

It should work with ARM template, could you share a few things to help us debug?

  1. the snippet for the tags on the template
  2. examples of the policies being applied
  3. exact error returned upon deployment

@spaelling

It should work with ARM template, could you share a few things to help us debug?

1. the snippet for the tags on the template

2. examples of the policies being applied

3. exact error returned upon deployment

It works when tagging the AKS cluster itself during creation; those tags do carry over. Tags on the resource group do nothing.

We recently released functionality that will pass tags set on your AKS RG through to the underlying IaaS RG which the service creates.

@jluk
Contributor

jluk commented Jul 23, 2019

Could you please share a repro with an example policy? I'm not 100% clear on which resources are not being passed and if your policy is an append. If a resource can't be created because an error is kicked back the enforce handling needs to be looked at, which we can do if we can get repro details.

An alternative option is you can open a support ticket on this with details for us to gather details in a more private forum.

@spaelling

Could you please share a repro with an example policy? I'm not 100% clear on which resources are not being passed and if your policy is an append. If a resource can't be created because an error is kicked back the enforce handling needs to be looked at, which we can do if we can get repro details.

An alternative option is you can open a support ticket on this with details for us to gather details in a more private forum.

The policy effect is

      "then": {
        "effect": "deny"
      }

could that be the reason? It does append tags from the cluster itself, so it works fine for me.

@md2k

md2k commented Sep 12, 2019

Hi here, this functionality doesn't work for me. If I set a custom node_resource_group, Azure does the job and the resources are created under that group name, but the Azure API reports node_resource_group in the old MC_blablalba style, so Terraform with a custom node_resource_group is constantly forced to rebuild the AKS cluster on each run (a lifecycle block does the trick, but that is more of a dirty hack than a solution).

@jluk
Contributor

jluk commented Sep 12, 2019

Could you provide the repro steps @md2k? Not sure I understand, it's very difficult to debug any issues without guidance to reproduce.

  1. Deploy a cluster with a custom name for the node RG.
  2. Get the cluster details via AKS API - it's showing a different RG name?

Impact: Terraform will rebuild the entire cluster on subsequent updates to the cluster because it believes the desired state with a custom name to be an entirely different cluster?

@palma21
Member

palma21 commented Sep 16, 2019

Hi,

I also could not repro; on the API I find the node_resource_group name I provided.

Could you provide repro steps or open a ticket with us?

  "loadBalancerSku": "standard",
   "networkPlugin": "azure",
   "networkPolicy": null,
   "podCidr": null,
   "serviceCidr": "10.0.0.0/16"
 },
 "nodeResourceGroup": "infra-aksmgmt-demo1",
 "provisioningState": "Succeeded",
 "resourceGroup": "aksmgmt-demo-rg",

@nubesoltech

I'm fairly new to AKS/Azure. The MC_ resource groups continue to be created when creating an AKS cluster. If I'm migrating to a new subscription, do I have to account for both AKS resource groups - the one I created and the one that Azure created?

@jluk
Contributor

jluk commented Sep 23, 2019

This issue has gotten quite broad and has lost a lot of useful context given how old it is, so I am closing this in favor of new issues to be more concise.

  • (Original ask) Remove the MC_RG required of clusters, which is visible to users: #1231
  • Passing tags or naming a new RG for the MC_ RG are features which should work; please open support tickets or new issues with questions if you have problems with those.
  • If you have other questions, please open a new issue or use the referenced one above.

@nubesoltech if migrating, yes, you need to account for both resource groups.

@jluk closed this as completed Sep 23, 2019
@dariuszbz

Hello All,

I have the same problem. Customer policy enforces tags on new RGs. The node RG is not created and I'm getting an error: "Resource 'DefaultResourceGroup-EUS2' was disallowed by policy. Policy identifiers: '[{"policyAssignment":{"name":"Enforce ....."

My az CLI command:

az aks create --resource-group $rg `
              --name  $ClusterName `
              --enable-vmss `
              --dns-name-prefix $ClusterName.ToLower() `
              --node-count 2 `
              --node-vm-size $AKS_MasterNodesSize `
              --node-resource-group $rgNode `
              --enable-addons monitoring `
              --kubernetes-version $AKS_Version `
              --service-principal $global:rbac.appId `
              --client-secret $global:rbac.password `
              --generate-ssh-keys `
              --windows-admin-password $Password_Win `
              --windows-admin-username $User_Win `
              --network-plugin azure `
              --location $ClusterLocation `
              --tags "Billing-01=LM" 

@mkosieradzki

mkosieradzki commented Jan 27, 2020

@dariuszbz nope. The error you are getting comes from the creation of the OMS workspace. Just add --workspace-resource-id and point it to your existing OMS workspace and you should be fine. It might also come from network creation, so you should also point --vnet-subnet-id to an existing subnet.
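
For example, the relevant additions to the az aks create call above would look roughly like this (the workspace and subnet resource IDs are placeholders for your existing resources):

az aks create --resource-group $rg `
              --name $ClusterName `
              --enable-addons monitoring `
              --workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/<existing-rg>/providers/Microsoft.OperationalInsights/workspaces/<existing-workspace>" `
              --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/<existing-rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>" `
              --network-plugin azure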

The real problem described in this topic is the RG with MC_ in its name.

@dariuszbz

dariuszbz commented Jan 28, 2020 via email

@naren-dremio

Does anything break if I move the resources under MC_ RG to the RG where the AKS resource is present?

@jeliasson

Does anything break if I move the resources under MC_ RG to the RG where the AKS resource is present?

Yes, and you can’t and shouldn’t.
https://docs.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks

@SamirFarhat

@naren-dremio you can create a new cluster and customize the RG names

@CloudA2Z-Code

I think the bigger issue is around service principals. Often the user deploying will not have rights on the subscription, or be able to create an SP that has subscription-level scope. SPs are often handed out by a central IT admin or security team in large orgs, and those are also scoped to just the RG needed, not the entire subscription. I think MSI is the answer to all of this, but this is something I have heard from the preview users of ACS RPv2, which also creates another RG for resources, and it is not liked by those that have already set governance models on their subscriptions.

The existence of the MC_ RG becomes even more problematic if you are using your cluster in Kubernetes multi-tenancy mode, where the disks spun up by various customers end up in this MC_ RG. How can one maintain customers' secondary resources like DBs, storage, etc., which are connected to the AKS cluster, in a multi-tenant shared cluster environment?

@bhicks329

@TheAzureGuy007 - You can put your disks in any RG you want. You just have to make sure the AKS cluster's service principal has Contributor access to the disks.

https://docs.microsoft.com/en-gb/azure/aks/azure-disk-volume#mount-disk-as-volume
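
For the permissions side, a rough sketch of the role assignment, assuming a disk named my-disk in a resource group my-disks-rg and the cluster's service principal app ID in $SP_APP_ID:

az role assignment create --assignee $SP_APP_ID \
  --role Contributor \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-disks-rg/providers/Microsoft.Compute/disks/my-disk"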

@pniederlag

@TheAzureGuy007 - You can put your disks in any RG you want. You just have to make sure the AKS cluster's service principal has Contributor access to the disks.

https://docs.microsoft.com/en-gb/azure/aks/azure-disk-volume#mount-disk-as-volume

That's great news - I wish I had known that earlier. Will give it a try.

@emacdona

emacdona commented Apr 9, 2020

My problem with this is that I'm not the Subscription Owner. The Subscription Owner has created a Resource Group for me, and I am free to create resources within that group. I am able to create an AKS cluster in that group, but I can't see any of the resources it uses (VMs, disks, etc.) because I don't have access to the resource group it creates them in. While I can still use the cluster, I can't do things like back up the Azure Disks backing the K8s Persistent Volumes.

Is there a way for my Subscription Owner to give me access to all resource groups that are created by Azure infrastructure on behalf of actions I take? Is there a role he could assign me to fix this problem?

@pniederlag

@emacdona

Is there a way for my Subscription Owner to give me access to all resource groups that are created by Azure infrastructure on behalf of actions I take? Is there a role he could assign me to fix this problem?

Our owners just added the Contributor role scoped to the MC_* resource group. With that role I can even access the storage accounts inside.
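
For reference, the shape of that assignment would be something like this (the MC_ group name and the user are placeholders):

az role assignment create --assignee someone@contoso.com \
  --role Contributor \
  --scope "/subscriptions/<sub-id>/resourceGroups/MC_my-aks-rg_my-aks_westus2"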

@atrauzzi

atrauzzi commented May 4, 2020

#1231 was deleted, any reason why? Where does the thread pick up at this point?

Why can't the resources simply be in the same resource group in which the AKS service was provisioned?

@ghost locked as resolved and limited conversation to collaborators on Aug 13, 2020