
feat: Add label for the greptimedb services and controllers #223

Merged: 1 commit into GreptimeTeam:main on Nov 26, 2024

Conversation

@daviderli614 daviderli614 (Member) commented Nov 26, 2024

Closes #222

kubectl get svc --show-labels
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE     LABELS
basic-datanode     ClusterIP   None            <none>        4001/TCP,4000/TCP                     3m30s   app.greptime.io/component=basic-datanode
basic-flownode     ClusterIP   None            <none>        4001/TCP                              116s    app.greptime.io/component=basic-flownode
basic-frontend     ClusterIP   10.96.203.28    <none>        4001/TCP,4000/TCP,4002/TCP,4003/TCP   3m24s   app.greptime.io/component=basic-frontend
basic-meta         ClusterIP   10.96.210.136   <none>        3002/TCP,4000/TCP                     3m36s   app.greptime.io/component=basic-meta
basic-standalone   ClusterIP   10.96.195.101   <none>        4001/TCP,4000/TCP,4002/TCP,4003/TCP   22s     app.greptime.io/component=basic-standalone
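
With these labels in place, a single component's resources can be selected directly, for example:

kubectl get svc -l app.greptime.io/component=basic-frontend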

Summary by CodeRabbit

  • New Features

    • Enhanced Kubernetes resource management by adding labels to Service and StatefulSet objects for better identification and tracking.
    • Improved metadata for deployment and service objects to facilitate service discovery.
  • Bug Fixes

    • No bug fixes were included in this release.
  • Documentation

    • No changes to documentation were made.
  • Refactor

    • Improved labeling mechanisms across various components without altering existing logic or control flow.

@daviderli614 daviderli614 requested a review from zyy17 November 26, 2024 09:00
coderabbitai bot (Contributor) commented Nov 26, 2024

Walkthrough

The pull request introduces enhancements to the labeling of Kubernetes resources across multiple components, including Datanode, Flownode, Frontend, Meta, and Standalone. Each component's BuildService and BuildStatefulSet or BuildDeployment methods have been updated to include a new Labels field in their respective ObjectMeta, which incorporates the component name derived from the cluster name and component kind. These changes aim to improve the identification and management of resources within the Kubernetes environment while maintaining existing control flows and error handling.
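
In Go terms, the pattern the walkthrough describes looks roughly like the following sketch, with simplified stand-ins for the PR's constant and common helpers (not the PR's exact code):

package deployers

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Simplified stand-ins for constant.GreptimeDBComponentName and common.ResourceName.
const greptimeDBComponentName = "app.greptime.io/component"

func resourceName(clusterName, componentKind string) string {
	return clusterName + "-" + componentKind
}

// buildService mirrors the BuildService change: the component label is set on the
// Service's own ObjectMeta, so the Service itself is identifiable, not just the
// pods it selects.
func buildService(clusterName, componentKind string) *corev1.Service {
	name := resourceName(clusterName, componentKind)
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{greptimeDBComponentName: name},
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{greptimeDBComponentName: name},
		},
	}
}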

Changes

File Path | Change Summary
controllers/greptimedbcluster/deployers/*.go | Added labels to Service and StatefulSet/Deployment ObjectMeta in the Datanode, Flownode, Frontend, and Meta components.
controllers/greptimedbstandalone/deployer.go | Updated labels in the BuildService and BuildStatefulSet methods of the Standalone component.

Assessment against linked issues

Objective | Addressed | Explanation
Add label for the greptimedb service (#222) | ✅ |
Add label for the greptimedb deployment (#222) | ✅ |

Poem

🐇 In the cluster where labels play,
Each component finds its way.
With names that shine, they stand so tall,
In Kubernetes, they won't fall!
A hop, a skip, they’re all aligned,
Resource tracking, perfectly defined! 🎉



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (6)
controllers/greptimedbcluster/deployers/meta.go (1)

209-211: Consider extracting label generation to a common function

While the implementation is clean and consistent, consider extracting the label generation logic into a common function in the CommonBuilder to ensure consistent label format across all components and reduce code duplication.

Example implementation:

// In CommonBuilder
func (b *CommonBuilder) GetComponentLabels() map[string]string {
    return map[string]string{
        constant.GreptimeDBComponentName: common.ResourceName(b.Cluster.Name, b.ComponentKind),
    }
}

Also applies to: 244-246

controllers/greptimedbcluster/deployers/frontend.go (1)

166-168: Consider using MergeStringMap for deployment labels.

While the label implementation is functionally correct, consider using util.MergeStringMap to preserve any existing deployment labels, similar to how service labels are handled. This ensures consistency across the codebase and prevents potential loss of labels specified in the CRD.

-			Labels: map[string]string{
-				constant.GreptimeDBComponentName: common.ResourceName(b.Cluster.Name, b.ComponentKind),
-			},
+			Labels: util.MergeStringMap(b.Cluster.Spec.Frontend.Labels, map[string]string{
+				constant.GreptimeDBComponentName: common.ResourceName(b.Cluster.Name, b.ComponentKind),
+			}),
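
For reference, a MergeStringMap helper of the kind used here is conventionally implemented along these lines (a sketch; the repo's util package, and in particular which argument wins on a key collision, may differ):

// MergeStringMap returns a new map with the entries of both inputs. Which map
// wins on a key collision is an assumption in this sketch: the second argument
// overrides the first, so component labels would override same-key user labels.
func MergeStringMap(base, overrides map[string]string) map[string]string {
	merged := make(map[string]string, len(base)+len(overrides))
	for k, v := range base {
		merged[k] = v
	}
	for k, v := range overrides {
		merged[k] = v
	}
	return merged
}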
controllers/greptimedbcluster/deployers/datanode.go (4)

Line range hint 391-393: Update selector to match new label key

The StatefulSet's selector must be updated to match the new label key.

Apply this diff to update the selector:

Selector: &metav1.LabelSelector{
    MatchLabels: map[string]string{
-        constant.GreptimeDBComponentName: common.ResourceName(b.Cluster.Name, b.ComponentKind),
+        "greptime.io/component": common.ResourceName(b.Cluster.Name, b.ComponentKind),
    },
},

Line range hint 333-335: Update Service selector to match new label key

The Service's selector must be updated to match the new label key for proper pod selection.

Apply this diff to update the selector:

Selector: map[string]string{
-    constant.GreptimeDBComponentName: common.ResourceName(b.Cluster.Name, b.ComponentKind),
+    "greptime.io/component": common.ResourceName(b.Cluster.Name, b.ComponentKind),
},

Line range hint 494-496: Update Pod template labels to match new label key

The Pod template labels must be updated to match the new label key for proper selection by the Service and StatefulSet.

Apply this diff to update the labels:

podTemplateSpec.ObjectMeta.Labels = util.MergeStringMap(podTemplateSpec.ObjectMeta.Labels, map[string]string{
-    constant.GreptimeDBComponentName: common.ResourceName(b.Cluster.Name, b.ComponentKind),
+    "greptime.io/component": common.ResourceName(b.Cluster.Name, b.ComponentKind),
})

Line range hint 1-700: Consider adding unit tests for label changes

The changes to the label keys are significant and affect multiple resources. It would be beneficial to add unit tests to verify:

  1. Correct label propagation to all resources
  2. Proper selector matching between Service, StatefulSet, and Pods
  3. Backward compatibility with existing deployments

Would you like me to help generate the unit test code for these scenarios?
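
A test along the suggested lines could look like this sketch, reusing the simplified buildService helper from the earlier walkthrough sketch rather than the repo's real builders:

import "testing"

func TestComponentLabelMatchesSelector(t *testing.T) {
	svc := buildService("basic", "datanode")
	want := "basic-datanode"
	if got := svc.Labels[greptimeDBComponentName]; got != want {
		t.Fatalf("service label = %q, want %q", got, want)
	}
	if got := svc.Spec.Selector[greptimeDBComponentName]; got != want {
		t.Fatalf("service selector = %q, want %q", got, want)
	}
}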

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 73de659 and e7a29bd.

📒 Files selected for processing (5)
  • controllers/greptimedbcluster/deployers/datanode.go (2 hunks)
  • controllers/greptimedbcluster/deployers/flownode.go (2 hunks)
  • controllers/greptimedbcluster/deployers/frontend.go (2 hunks)
  • controllers/greptimedbcluster/deployers/meta.go (2 hunks)
  • controllers/greptimedbstandalone/deployer.go (2 hunks)
🔇 Additional comments (8)
controllers/greptimedbcluster/deployers/meta.go (1)

209-211: Verify label format matches requirements

The PR objective specifies the label format should be greptime.io/component=${cluster}-${role}, but the current implementation uses a different format. Please verify that constant.GreptimeDBComponentName matches the required format.

✅ Verification successful

Let me gather more information about the ResourceName function implementation since the previous result didn't show its content.


Label format is correctly implemented

The implementation matches the required format greptime.io/component=${cluster}-${role}:

  • The label key is defined as app.greptime.io/component in constant.GreptimeDBComponentName
  • The label value is constructed by common.ResourceName() which concatenates the cluster name and component kind with a hyphen (name + "-" + string(componentKind))
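
Reconstructed from that output, the verified behavior amounts to the following (not the repo's exact source):

type ComponentKind string

const GreptimeDBComponentName = "app.greptime.io/component"

// ResourceName yields values such as "basic-meta" for cluster "basic" and kind "meta".
func ResourceName(name string, componentKind ComponentKind) string {
	return name + "-" + string(componentKind)
}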
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the label format in constants and its usage

# Check the constant definition
rg -A 1 "GreptimeDBComponentName.*=" 

# Check the ResourceName function implementation
ast-grep --pattern 'func ResourceName($name, $kind) {
  $$$
}'

Length of output: 445


Script:

#!/bin/bash
# Search for ResourceName function implementation
rg -A 5 "func ResourceName"

Length of output: 495

controllers/greptimedbcluster/deployers/frontend.go (2)

130-132: LGTM! Service labels implementation looks good.

The implementation correctly adds the component label while preserving existing service labels. The label format aligns with the PR objectives and maintains consistency with service selectors.


130-132: Verify consistent label implementation across components.

Let's verify that the label implementation is consistent across other components (datanode, meta, etc.) and that these labels are correctly used in selectors and filters.

Also applies to: 166-168

✅ Verification successful

Label implementation is consistent across components

The verification shows consistent label implementation across all components (frontend, meta, datanode, flownode):

  • All components use util.MergeStringMap to merge custom labels with the component identifier
  • The constant.GreptimeDBComponentName label is consistently used with common.ResourceName(b.Cluster.Name, b.ComponentKind)
  • Labels are properly used in corresponding selectors for services and pod templates
  • The implementation is consistent in both standalone and cluster deployments

The current implementation in the frontend service follows the established pattern across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check label implementation consistency across components

# Check label implementation in other component files
echo "Checking label implementation in other components..."
rg -A 2 "Labels.*GreptimeDBComponentName" controllers/greptimedbcluster/deployers/

# Check label usage in selectors
echo "Checking label usage in selectors..."
rg -A 2 "Selector.*GreptimeDBComponentName" controllers/greptimedbcluster/deployers/

Length of output: 369


Script:

#!/bin/bash
# Let's try a broader search to understand label implementations and usage

# Check all label implementations in deployers
echo "Checking all label implementations..."
rg "Labels.*MergeStringMap" controllers/greptimedbcluster/deployers/ -B 1 -A 2

# Check component name constant usage
echo "Checking component name constant usage..."
rg "constant\.GreptimeDBComponentName" -B 1 -A 2

# Check service label implementations
echo "Checking service label implementations..."
rg "Service\.Labels" controllers/greptimedbcluster/deployers/ -B 1 -A 2

# Check selector implementations
echo "Checking selector implementations..."
rg "LabelSelector" controllers/greptimedbcluster/deployers/ -B 1 -A 2

Length of output: 15222

controllers/greptimedbcluster/deployers/flownode.go (2)

133-135: LGTM: Service labels implementation is correct

The label implementation for the Service follows Kubernetes best practices and matches the PR objectives. The label format greptime.io/component=${cluster}-${role} is correctly implemented using common.ResourceName().


188-190: LGTM: StatefulSet labels implementation is correct

The label implementation for the StatefulSet maintains consistency with the Service labels and ensures proper pod selection through the StatefulSet's selector.

controllers/greptimedbstandalone/deployer.go (2)

247-249: LGTM! Verify StatefulSet label consistency.

The StatefulSet labeling implementation maintains consistency with the service labeling pattern.

Let's verify label consistency across resources:

✅ Verification successful

Let's also check the constant definition:

And verify the actual usage pattern:


Label consistency verified across resources

Based on the verification results, the label implementation is consistent across all resources:

  • The label key app.greptime.io/component is defined as a constant in controllers/constant/constant.go
  • The same labeling pattern is consistently used across:
    • StatefulSet labels and selector
    • Service labels
    • Pod template labels
    • All cluster components (frontend, datanode, meta, flownode)
    • Standalone deployments

The implementation maintains consistency by using the same label key and value format (common.ResourceName(name, kind)) throughout the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify label consistency between Services and StatefulSets
# Expected: Both resource types should use the same label format

# Compare label usage in Services and StatefulSets
echo "=== Services ==="
rg --type yaml 'kind:\s+Service' -A 10 | rg 'greptime.io/component'
echo "=== StatefulSets ==="
rg --type yaml 'kind:\s+StatefulSet' -A 10 | rg 'greptime.io/component'

Length of output: 233


Script:

#!/bin/bash
# Let's try a different approach to verify label consistency
# Search for label definitions in the codebase

echo "=== Searching for label definitions in constant package ==="
rg -l "constant\.GreptimeDBComponentName" --type go

echo -e "\n=== Checking label usage in deployer files ==="
fd deployer.go --type f --exec rg -C 5 "Labels.*map\[string\]string"

Length of output: 2301


Script:

#!/bin/bash
echo "=== Checking constant definition ==="
rg "GreptimeDBComponentName.*=" --type go

Length of output: 347


Script:

#!/bin/bash
echo "=== Checking actual label usage patterns ==="
rg "Labels.*map\[string\]string" -A 3 --type go

Length of output: 8984


193-195: LGTM! Verify service label propagation.

The service label implementation correctly merges existing labels with the new component label, maintaining backward compatibility while adding the new standardized labeling.

Let's verify the label propagation to services:

✅ Verification successful

Service label implementation is correct and consistent

The implementation is verified to be correct based on the following findings:

  • The component label key is properly defined as a constant app.greptime.io/component
  • MergeStringMap utility function correctly preserves existing labels while adding new ones
  • The implementation is consistently used across standalone and cluster deployments
  • The label value correctly uses the resource name format combining cluster name and kind

The changes maintain backward compatibility while standardizing component labeling across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that services get the new component label
# Expected: Services should have label "greptime.io/component=${cluster}-standalone"

# Search for service definitions with the new label
rg --type yaml 'kind:\s+Service' -A 10 | rg 'greptime.io/component'

Length of output: 67


Script:

#!/bin/bash
# Let's try a different approach to verify the service label implementation:

# 1. First, check the constant definition for the component label
rg "GreptimeDBComponentName\s*=\s*" -A 1

# 2. Check how the label is used in the codebase
rg "Labels.*MergeStringMap.*Component" -B 2 -A 2

# 3. Look for test cases that verify the label
rg --type go "Test.*Label.*Service" -A 10

Length of output: 303


Script:

#!/bin/bash
# Let's verify the implementation more thoroughly:

# 1. Check all usages of MergeStringMap with labels
rg "MergeStringMap.*Labels" -B 2 -A 2

# 2. Look for the actual implementation of MergeStringMap
ast-grep --pattern 'func MergeStringMap($$$) $_ {
  $$$
}'

# 3. Check for any existing service label tests
rg -g "**/*_test.go" "Labels.*Service" -B 2 -A 2

Length of output: 4294

controllers/greptimedbcluster/deployers/datanode.go (1)

Line range hint 1-700: Verify impact on existing deployments

The label key changes might affect existing deployments during upgrades.

Let's verify the usage of the old label key in the codebase:

@zyy17 zyy17 (Collaborator) left a comment

lgtm

@zyy17 zyy17 merged commit 90a4b73 into GreptimeTeam:main Nov 26, 2024
5 checks passed
Successfully merging this pull request may close these issues:

  • Add service and deployment label