[AVM Module Issue]: Mixing workspace types in avm/res/container-service/managed-cluster #3670

Open
mriuttam opened this issue Oct 30, 2024 · 2 comments
Labels
  • Class: Resource Module 📦 (This is a resource module)
  • Needs: Triage 🔍 (Maintainers need to triage still)
  • Type: AVM 🅰️ ✌️ Ⓜ️ (This is an AVM related issue)
  • Type: Bug 🐛 (Something isn't working)

Comments

@mriuttam

Check for previous/existing GitHub issues

  • I have checked for previous/existing GitHub issues

Issue Type?

Bug

Module Name

avm/res/container-service/managed-cluster

(Optional) Module Version

0.4.1

Description

In Azure, a Log Analytics workspace (Microsoft.OperationalInsights/workspaces) and an Azure Monitor workspace (Microsoft.Monitor/accounts) are two different resource types. However, in the current main branch of container-service/managed-cluster, the Log Analytics workspace is used interchangeably with the monitor workspace at the following code lines:

Is this intentional?
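
For context, the two resource types are declared quite differently in Bicep. Here is a minimal sketch (the names, locations and API versions below are illustrative, not taken from the module):

param location string = resourceGroup().location

// A Log Analytics workspace (what container insights expects)
resource law 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'law-example'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018'
    }
  }
}

// An Azure Monitor workspace (used for managed Prometheus metrics)
resource amw 'Microsoft.Monitor/accounts@2023-04-03' = {
  name: 'amw-example'
  location: location
}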

In practice, this leads to a situation where the first deployment of an AKS cluster appears to succeed, but all subsequent deployments fail with errors such as:

"AddContainerInsightsSolutionError": 'Unable to add ContainerInsights solution. […] Message="The value supplied is not a valid Workspace resource Id." […] Target="properties.workspaceResourceId" […]

A potential reason for this behaviour is that on the first run the monitor workspace is still empty (plain blob storage), whereas on subsequent runs it has already been initialised as a “monitoring blob storage”.

Regardless, ARM appears to expect the resource ID of a Log Analytics workspace for container insights, but a monitor workspace ID is passed instead: line 770
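
In other words, the mapping at that line appears to be roughly of the following shape (my paraphrase of the behaviour described above, not a verbatim quote of main.bicep):

// Approximate current shape: the Azure Monitor workspace ID is fed into a
// property that expects a Log Analytics workspace resource ID
logAnalyticsWorkspaceResourceId: monitoringWorkspaceResourceId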

For container insights, I was able to work around the issue by modifying the managed-cluster AVM module: I added a new parameter, containerInsightsLawResourceId, and passed it a valid Log Analytics workspace ID:

// Adding new param..
@description('Optional. Resource ID of the Log Analytics workspace.')
param containerInsightsLawResourceId string?

[…]

// ..And then, at the lines [766…776](https://github.com/Azure/bicep-registry-modules/blob/main/avm/res/container-service/managed-cluster/main.bicep#L766-L776):
      containerInsights: enableContainerInsights
        ? {
            enabled: enableContainerInsights
            logAnalyticsWorkspaceResourceId: !empty(containerInsightsLawResourceId)
              ? containerInsightsLawResourceId
              : null
            disableCustomMetrics: disableCustomMetrics
            disablePrometheusMetricsScraping: disablePrometheusMetricsScraping
            syslogPort: syslogPort
          }
        : null

Generally, here are the parameters I'm using when calling the managed-cluster AVM module:

module managedClusters 'br/public:avm/res/container-service/managed-cluster:0.4.1' = {
  name: 'deploy-aks'
  params: {
    name: clusterName
    tags: tags
    location: location
    agentPools: […]
    primaryAgentPoolProfiles: […]
    networkPlugin: 'kubenet'
    enableTelemetry: false
    roleAssignments: []
    monitoringWorkspaceResourceId: mw.id // points to an instance of 'Microsoft.Monitor/accounts'
    omsAgentEnabled: true
    enableContainerInsights: false
    enableAzureMonitorProfileMetrics: true
    diagnosticSettings: [
      {
        name: 'aks'
        workspaceResourceId: law.id // points to an instance of 'Microsoft.OperationalInsights/workspaces'
        logCategoriesAndGroups: […]
      }
    ]
    autoUpgradeProfileUpgradeChannel: 'patch'
    managedIdentities: {
      userAssignedResourcesIds: […]
    }
    dnsPrefix: clusterName
    networkPolicy: 'calico'
    nodeResourceGroup: nodeResourceGroup
    enablePrivateCluster: true
    privateDNSZone: privateDNSZone
    outboundType: outboundType
    enableKeyvaultSecretsProvider: true
    enableSecretRotation: true
    enableOidcIssuerProfile: true
    aadProfileAdminGroupObjectIDs: aadProfileAdminGroupObjectIDs
    enableWorkloadIdentity: true
  }
}
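
For completeness, mw and law above refer to pre-existing workspaces along these lines (a sketch; the symbolic names, workspace names and API versions are placeholders from my environment):

// Azure Monitor workspace used for managed Prometheus metrics
resource mw 'Microsoft.Monitor/accounts@2023-04-03' existing = {
  name: 'amw-aks'
}

// Log Analytics workspace used for diagnostic settings (and for container insights in the hotfix)
resource law 'Microsoft.OperationalInsights/workspaces@2022-10-01' existing = {
  name: 'law-aks'
}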

And for the hotfixed module version, I'm passing otherwise the same parameters, but additionally containerInsightsLawResourceId, which gets passed through to container insights:

module managedClusters 'managed-cluster-0.4.1-hotfix/main.bicep' = {
  name: 'deploy-aks'
  params: {
    …
    containerInsightsLawResourceId: law.id // points to an instance of 'Microsoft.OperationalInsights/workspaces'
    …
  }
}

Are you able to reproduce the issue?

Thanks in advance

(Optional) Correlation Id

No response

mriuttam added the Needs: Triage 🔍 and Type: AVM 🅰️ ✌️ Ⓜ️ labels on Oct 30, 2024

Important

The "Needs: Triage 🔍" label must be removed once the triage process is complete!

Tip

For additional guidance on how to triage this issue/PR, see the BRM Issue Triage documentation.

microsoft-github-policy-service bot added the Type: Bug 🐛 label on Oct 30, 2024
avm-team-linter bot added the Class: Resource Module 📦 label on Oct 30, 2024

@mriuttam, thanks for submitting this issue for the avm/res/container-service/managed-cluster module!

Important

A member of the @Azure/avm-res-containerservice-managedcluster-module-owners-bicep or @Azure/avm-res-containerservice-managedcluster-module-contributors-bicep team will review it soon!
