
Conversation

@SunsetB612
Contributor

No description provided.

@karmada-bot karmada-bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Aug 30, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @SunsetB612, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the test coverage for the dashboard's Workloads/Deployment management features by introducing a new set of end-to-end tests. These tests validate critical user interactions, from creating and modifying Deployments to listing and viewing their details, ensuring the reliability and stability of the UI.

Highlights

  • New E2E Test Suite for Deployments: A comprehensive suite of Playwright end-to-end tests has been added for the Workloads/Deployment section of the dashboard.
  • Full CRUD Operations Covered: The new tests cover the complete lifecycle of Deployments, including creation, viewing details, editing, and deletion functionalities.
  • Deployment List and View Tests: Specific tests have been added to ensure the correct display of the Deployment list and the ability to view detailed information for individual Deployments.
  • Test File Reorganization: The existing E2E test file ui/apps/dashboard/e2e/login/login.spec.ts has been renamed as part of the test suite organization.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a comprehensive set of E2E tests for Deployment workload operations, which is a great addition. The tests cover create, list, edit, view, and delete functionalities. However, there are several areas for improvement to enhance the reliability and maintainability of these tests.

A major concern is the significant code duplication across the new test files, especially for setup (beforeEach), test data generation, and interaction with the Monaco editor. I recommend abstracting these common functionalities into a shared utility file.

Additionally, many tests rely on complex and fragile methods to interact with components (e.g., traversing React's internal fiber tree) and to handle asynchronous operations. This can lead to flaky tests that are hard to maintain. Simplifying these interactions and using Playwright's built-in waiting mechanisms more effectively would greatly improve test stability. I've left specific suggestions on the relevant lines of code.

Comment on lines 171 to 141
// Wait for the editor content to load
let yamlContent = '';
let attempts = 0;
const maxAttempts = 30;

const expectedName = responseData?.data?.metadata?.name || '';
const expectedKind = responseData?.data?.kind || '';

while (attempts < maxAttempts) {
  yamlContent = await page.evaluate(() => {
    const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
    return textarea ? textarea.value : '';
  });

  if (yamlContent && yamlContent.length > 0) {
    const containsExpectedName = !expectedName || yamlContent.includes(expectedName);
    const containsExpectedKind = !expectedKind || yamlContent.includes(expectedKind);

    if (containsExpectedName && containsExpectedKind) {
      break;
    }
  }

  await page.waitForTimeout(500);
  attempts++;
}

// If the content is still empty, set it manually from the API response
if (!yamlContent || yamlContent.length === 0) {
  yamlContent = await page.evaluate((apiData) => {
    const data = apiData.data;
    const yaml = `apiVersion: ${data.apiVersion}
kind: ${data.kind}
metadata:
  name: ${data.metadata.name}
  namespace: ${data.metadata.namespace}
spec:
  replicas: ${data.spec.replicas}
  selector:
    matchLabels:
      app: ${data.spec.selector.matchLabels.app}
  template:
    metadata:
      labels:
        app: ${data.spec.template.metadata.labels.app}
    spec:
      containers:
        - name: ${data.spec.template.spec.containers[0].name}
          image: ${data.spec.template.spec.containers[0].image}
          ports:
            - containerPort: ${data.spec.template.spec.containers[0].ports[0].containerPort}`;

    const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
    if (textarea) {
      textarea.value = yaml;
      textarea.focus();
      textarea.dispatchEvent(new Event('input', { bubbles: true }));
      return yaml;
    }
    return '';
  }, responseData);
}

// If the content still cannot be retrieved, report an error
if (!yamlContent || yamlContent.length === 0) {
  throw new Error(`Edit feature error: Monaco editor does not load deployment YAML content. Expected name: "${expectedName}", kind: "${expectedKind}"`);
}


critical

This section of the test is extremely fragile and indicates a significant problem with test reliability. The test includes complex logic with a while loop and a fallback mechanism to manually reconstruct YAML from an API response if the Monaco editor doesn't load its content in time.

This is a test anti-pattern. Tests should not contain fallback application logic. Instead, they should reliably wait for the application to reach the desired state. This complex workaround will be a maintenance nightmare and is a strong indicator of underlying flakiness. The test should be simplified to properly wait for the editor content to be populated by the application itself, likely by waiting for a specific network response to complete and the UI to update accordingly.
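As a rough illustration of that pattern, the wait could be driven off the network response itself rather than a manual retry loop. This is only a sketch, not the PR's code: the URL fragment and the reliance on Monaco's hidden textarea are assumptions carried over from the existing test.

    // Sketch: wait for the details API response, then poll the editor until it reflects it.
    const response = await page.waitForResponse(
      (res) => res.url().includes('/deployment/') && res.ok(), // assumed URL fragment
    );
    const body = await response.json();
    const expectedName = body?.data?.metadata?.name ?? '';

    // expect.poll retries the callback until the matcher passes or the timeout expires.
    await expect
      .poll(
        async () =>
          page.evaluate(() => {
            const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement | null;
            return textarea ? textarea.value : '';
          }),
        { timeout: 15000 },
      )
      .toContain(expectedName);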

Comment on lines 81 to 96
// Call React's onChange callback to update the component state
await page.evaluate((yaml) => {

  const findReactFiber = (element: any) => {
    const keys = Object.keys(element);
    return keys.find(key => key.startsWith('__reactFiber') || key.startsWith('__reactInternalInstance'));
  };

  const monacoContainer = document.querySelector('.monaco-editor');
  if (monacoContainer) {
    const fiberKey = findReactFiber(monacoContainer);
    if (fiberKey) {
      let fiber = (monacoContainer as any)[fiberKey];

      while (fiber) {
        if (fiber.memoizedProps && fiber.memoizedProps.onChange) {
          fiber.memoizedProps.onChange(yaml);
          return;
        }
        fiber = fiber.return;
      }
    }
  }

  const dialog = document.querySelector('[role="dialog"]');
  if (dialog) {
    const fiberKey = findReactFiber(dialog);
    if (fiberKey) {
      let fiber = (dialog as any)[fiberKey];

      const traverse = (node: any, depth = 0) => {
        if (!node || depth > 20) return false;

        if (node.memoizedProps && node.memoizedProps.onChange) {
          node.memoizedProps.onChange(yaml);
          return true;
        }

        if (node.child && traverse(node.child, depth + 1)) return true;
        if (node.sibling && traverse(node.sibling, depth + 1)) return true;

        return false;
      };

      traverse(fiber);
    }
  }
}, testDeploymentYaml);


high

The approach used to update the Monaco editor's content and trigger a state update in React is very complex and fragile. It relies on traversing React's internal Fiber tree (__reactFiber$) to find and call the onChange prop. This is an implementation detail that can easily break with any change to the component structure or React version, leading to flaky tests.

A more robust and simpler approach would be to use Playwright's built-in methods to simulate user input, or at least a less intrusive DOM manipulation. Consider using locator.fill() or dispatching a standard input event.

    // A simpler way to set editor content and trigger updates.
    await page.evaluate((yaml) => {
        const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
        if (textarea) {
            textarea.value = yaml;
            textarea.focus();
            // Dispatch an 'input' event to notify React of the change.
            textarea.dispatchEvent(new Event('input', { bubbles: true }));
        }
    }, testDeploymentYaml);
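If dispatching a synthetic event still proves unreliable, another hedged option is to drive the editor purely through Playwright's input primitives; the selector and key chord below are assumptions, not code from this PR:

    // Sketch: set the editor content through real user input instead of React internals.
    const editor = page.locator('.monaco-editor').first();
    await editor.click();                                 // focus the editor
    await page.keyboard.press('ControlOrMeta+a');         // select the existing content
    await page.keyboard.press('Delete');                  // clear it
    await page.keyboard.insertText(testDeploymentYaml);   // insert the YAML as a single input event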

Comment on lines 146 to 153
try {
  await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({
    timeout: 15000
  });
} catch {
  // If it doesn't appear in the list immediately, this may be due to caching or a refresh delay,
  // but a successful API response means the deployment has been created
}


high

The try...catch block here swallows potential assertion failures. If toBeVisible fails, the test will not fail but instead proceed, potentially masking bugs in the application. Assertions in tests should be direct. If the created deployment doesn't appear in the list within the timeout, the test should fail. Playwright's auto-waiting mechanism within expect is designed for this purpose.

        await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({ 
            timeout: 15000 
        });

Comment on lines 48 to 55
test.beforeEach(async ({ page }) => {
  await page.goto(`${baseURL}/login`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.goto(`${baseURL}${basePath}`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});


medium

The beforeEach hook contains redundant steps for authentication. The token is set in localStorage twice. This can be simplified to make the setup cleaner and more efficient. Consider setting the token once before the final navigation or reload. Also, this setup logic is duplicated across multiple test files. It would be best to extract it into a shared utility function to improve maintainability.

    await page.goto(`${baseURL}${basePath}`);
    await page.evaluate((t) => localStorage.setItem('token', t), token);
    await page.reload({ waitUntil: 'domcontentloaded' });
    await page.waitForSelector('text=Overview', { timeout: 30000 });

Contributor


@gemini-code-assist could you provide any suggestions for re-using duplicate code across files?


@warjiang That's an excellent question! To address the duplication of the beforeEach setup and other common utilities across your Playwright test files, you can create a shared utility file. This approach centralizes common logic, making your tests more maintainable and easier to update.

I recommend creating a new file, for example, ui/apps/dashboard/e2e/utils/test-utils.ts, with the following content:

import { Page } from '@playwright/test';

// Global constants for E2E tests
export const HOST = process.env.HOST || 'localhost';
export const PORT = process.env.PORT || 5173;
export const baseURL = `http://${HOST}:${PORT}`;
export const basePath = '/multicloud-resource-manage';
export const token = process.env.KARMADA_TOKEN || '';

/**
 * Sets up authentication and navigates to the dashboard overview page.
 * This should be called in a test.beforeEach hook.
 */
export async function setupDashboardAuthentication(page: Page) {
    await page.goto(`${baseURL}${basePath}`);
    await page.evaluate((t) => localStorage.setItem('token', t), token);
    await page.reload({ waitUntil: 'domcontentloaded' });
    await page.waitForSelector('text=Overview', { timeout: 30000 });
}

/**
 * Generates a unique test deployment YAML.
 */
export function generateTestDeploymentYaml() {
    const timestamp = Date.now();
    return `apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-${timestamp}
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80`;
}

Then, in your test files like ui/apps/dashboard/e2e/workload/deployment/deployment-create.spec.ts, you can refactor them as follows:

  1. Add an import statement at the top of the file:

    import { test, expect } from '@playwright/test';
    import { setupDashboardAuthentication, generateTestDeploymentYaml, baseURL, basePath, token } from '../../utils/test-utils';

    (Note: The baseURL, basePath, and token constants are exported from test-utils.ts for use in other parts of your tests if needed, but the setupDashboardAuthentication function already uses them internally.)

  2. Remove the duplicated constants (lines 17-22 in deployment-create.spec.ts).

  3. Remove the duplicated generateTestDeploymentYaml function (lines 24-45 in deployment-create.spec.ts).

  4. Replace the test.beforeEach block with a call to the shared function. For the highlighted lines (48-55) in deployment-create.spec.ts, you can use the following suggestion:

Suggested change
test.beforeEach(async ({ page }) => {
  await page.goto(`${baseURL}/login`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.goto(`${baseURL}${basePath}`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});
test.beforeEach(async ({ page }) => {
  await setupDashboardAuthentication(page);
});
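Put together, a refactored spec could then shrink to something like the sketch below; the test body is a placeholder, not the PR's actual assertions:

    import { test, expect } from '@playwright/test';
    import { setupDashboardAuthentication, generateTestDeploymentYaml } from '../../utils/test-utils';

    test.beforeEach(async ({ page }) => {
      await setupDashboardAuthentication(page);
    });

    test('should create deployment', async ({ page }) => {
      const testDeploymentYaml = generateTestDeploymentYaml();
      // ...drive the create dialog with testDeploymentYaml and assert on the result...
      expect(testDeploymentYaml).toContain('kind: Deployment');
    });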

await page.waitForSelector('[role="tooltip"]', { timeout: 10000 });

// Click the Confirm button to confirm the deletion
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm)$/ });


medium

The selector for the confirm button is not localized. While the delete button selector correctly handles both Chinese (删除) and English (Delete), the confirm button only looks for Confirm. This could cause the test to fail in a localized environment. For consistency and robustness, this should also handle different languages.

Suggested change
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm)$/ });
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm|)$/ });

Contributor


The implementation is OK for now, since there is no need to take the Chinese scenario into consideration.

Comment on lines 47 to 51
await page.goto(`${baseURL}/login`, { waitUntil: 'networkidle' });
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.goto(`${baseURL}${basePath}`, { waitUntil: 'networkidle' });
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.reload({ waitUntil: 'networkidle' });


medium

Using waitUntil: 'networkidle' is a good practice for single-page applications as it's generally more reliable than domcontentloaded. It helps ensure that dynamic content fetched via API calls has loaded. For consistency and to improve the reliability of all tests, I recommend using networkidle in the beforeEach hooks of all the new E2E test files.

await page.waitForSelector('text=Overview', { timeout: 30000 });
});

test('should view deployment details', async ({ page }) => {


medium

This test is incomplete. It navigates to the deployment details view but doesn't perform any assertions to verify that the correct information is displayed. A test without assertions doesn't validate any behavior. You should add expect statements to check for the presence of key information, such as the deployment name, namespace, or labels on the details page.
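For instance, a hedged sketch of such assertions, assuming the detail page renders the name, namespace, and container image as plain text (the exact selectors depend on the dashboard's markup, and deploymentName stands for the value generated earlier in the test):

    // Sketch: after opening the detail view, check that key fields are actually rendered.
    await expect(page.getByText(deploymentName).first()).toBeVisible({ timeout: 15000 });
    await expect(page.getByText('default').first()).toBeVisible();      // namespace
    await expect(page.getByText('nginx:latest').first()).toBeVisible(); // container image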

Comment on lines 55 to 74
const testDeploymentYaml = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-for-view-${Date.now()}
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80`;


medium

This test defines the test deployment YAML inline, whereas other tests like deployment-create.spec.ts use the generateTestDeploymentYaml helper function. This inconsistency makes the tests harder to maintain. For consistency, you should use the shared generateTestDeploymentYaml function here as well. This also reinforces the need to extract common helpers into a shared utility file.
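Under the refactor sketched above, the inline literal could collapse to a single call. A sketch only; if per-suite names like test-deployment-for-view-* should be preserved, the shared helper would need a (hypothetical) name-prefix parameter:

    import { generateTestDeploymentYaml } from '../../utils/test-utils';

    // Sketch: reuse the shared generator instead of an inline template literal.
    const testDeploymentYaml = generateTestDeploymentYaml();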

@warjiang
Contributor

warjiang commented Sep 2, 2025

/assign

@karmada-bot karmada-bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Sep 3, 2025
permissions:
  contents: read # for actions/checkout to fetch code

env:
Contributor


@gemini-code-assist should we use a new environment variable, or re-use a GitHub build environment variable like RUNNER_DEBUG or ACTIONS_RUNNER_DEBUG?

Member


+1
Gemini did not respond; let me try again.

@gemini-code-assist
should we use a new environment variable or re-use a GitHub build environment variable like RUNNER_DEBUG or ACTIONS_RUNNER_DEBUG?


Thank you for the follow-up question, @XiShanYongYe-Chang and @warjiang. It's an excellent point to consider the best practice for defining environment variables, especially for debugging.

The env: block in the workflow file is indeed the correct place to define environment variables that should be available to all jobs within the workflow. In this context, the DEBUG environment variable is being used within the Playwright tests (as seen in if(process.env.DEBUG === 'true') in the new test files) to control debug-specific actions like taking screenshots.

GitHub Actions provides its own set of built-in debug environment variables, such as RUNNER_DEBUG (to enable debug logging for the runner) and ACTIONS_STEP_DEBUG (to enable debug logging for specific steps). These are primarily for debugging the GitHub Actions workflow itself, not necessarily for controlling application-level debug flags.

Given that the DEBUG variable is specifically for the application's test logic, defining it as a custom environment variable within the env: block is appropriate. This keeps the application's debug flag separate from the GitHub Actions' internal debugging mechanisms.
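For reference, the application-level guard the tests rely on looks roughly like this; the screenshot path and log line are illustrative, and only the process.env.DEBUG === 'true' check is taken from the PR:

    // Sketch: capture extra diagnostics only when the workflow exports DEBUG=true.
    if (process.env.DEBUG === 'true') {
      await page.screenshot({ path: `debug-deployment-${Date.now()}.png`, fullPage: true });
      console.log('Current URL:', page.url());
    }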

To enable this DEBUG flag for the E2E tests, you can add it to the env: block like this:

Suggested change
cancel-in-progress: true

permissions:
  contents: read # for actions/checkout to fetch code

env:
  DEBUG: 'true'

Member


From Gemini's response, it appears that using DEBUG is currently acceptable.

const response = await apiRequestPromise;
expect(response.status()).toBe(200);

// Wait for dialog to close
Contributor


It seems that we need to wait for the resource to be created successfully; maybe we need to add more retries in the dashboard to avoid these kinds of tricks.
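One way to avoid such tricks without changing the dashboard would be to poll the list endpoint until the new resource is reported. A sketch only; the URL, response shape, and deploymentName variable are assumptions:

    // Sketch: retry the list API until the created deployment shows up, instead of a fixed wait.
    await expect
      .poll(async () => {
        const res = await page.request.get(`${baseURL}/api/v1/deployment?namespace=default`);
        return JSON.stringify(await res.json());
      }, { timeout: 30000, intervals: [1000] })
      .toContain(deploymentName);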

@SunsetB612 SunsetB612 force-pushed the add-e2e-workload-deployment branch from 025fa46 to 699fc12 Compare September 9, 2025 09:27
@karmada-bot karmada-bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Sep 9, 2025
@SunsetB612 SunsetB612 force-pushed the add-e2e-workload-deployment branch 2 times, most recently from 9fecd48 to 5579dc4 Compare September 9, 2025 12:17
cd karmada
export CLUSTER_VERSION=kindest/node:${{ matrix.k8s }}
hack/local-up-karmada.sh
echo "KUBECONFIG=/home/runner/.kube/karmada.config" >> $GITHUB_ENV
Member


Is $GITHUB_ENV used for debugging? I haven't seen it used anywhere else.

Comment on lines 110 to 115
if: ${{ env.DEBUG == 'true' }}
run: |
  kubectl cluster-info || echo "Cluster info failed"
  kubectl get nodes -o wide || echo "Get nodes failed"
  kubectl get pods -A || echo "Get pods failed"
  kind get clusters || echo "Kind get clusters failed"
Member


Hi @SunsetB612 @warjiang
I have a question regarding the DEBUG-related content. Do we need to merge it into the main branch? If we do merge it, how do we plan to utilize it subsequently?

Contributor


Debugging the e2e job takes some time; adding a debug variable will make it easier to debug the pipeline if something goes wrong ~

kubectl --context=karmada-host apply -k karmada-dashboard/artifacts/overlays/nodeport-mode
kubectl --context=karmada-apiserver apply -f karmada-dashboard/artifacts/dashboard/karmada-dashboard-sa.yaml
TOKEN=$(kubectl --context=karmada-apiserver -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}")
echo "KARMADA_TOKEN=$TOKEN" >> $GITHUB_ENV
Member


ditto

@SunsetB612 SunsetB612 force-pushed the add-e2e-workload-deployment branch 3 times, most recently from f5e7f60 to 86b416b Compare September 18, 2025 09:53
@karmada-bot karmada-bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Sep 18, 2025
@SunsetB612 SunsetB612 force-pushed the add-e2e-workload-deployment branch from 86b416b to 690c8b7 Compare September 18, 2025 09:58
@XiShanYongYe-Chang
Member

Thanks @SunsetB612
I learned quite a bit about front-end testing from your PR.
LGTM

We still need to ask @warjiang to help take a review~
/cc @warjiang

@SunsetB612
Contributor Author

Thanks @SunsetB612 I learned quite a bit about front-end testing from your PR. LGTM

We still need to ask @warjiang to help take a review~ /cc @warjiang

I’m really glad to hear that. I still have a lot to learn, and I really appreciate your guidance. I’ll keep working to improve and contribute more.❤️

@warjiang
Contributor

@SunsetB612 good job, there is still one problem: duplicate code in test-utils. I think we can lift the code to the e2e folder to re-use test-utils.

@warjiang
Contributor

@SunsetB612 good job, there is still one problem: duplicate code in test-utils. I think we can lift the code to the e2e folder to re-use test-utils.

After the problem is resolved, I think we can move the PR forward 🚀

@SunsetB612 SunsetB612 force-pushed the add-e2e-workload-deployment branch from 690c8b7 to 9ac70e2 Compare September 18, 2025 17:21
@karmada-bot karmada-bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Sep 18, 2025
@SunsetB612
Contributor Author

@SunsetB612 good job, there is still one problem: duplicate code in test-utils. I think we can lift the code to the e2e folder to re-use test-utils.

After the problem is resolved, I think we can move the PR forward 🚀
ok~, I have added the test-utils, and here are the main changes of this commit:

  1. Refactor E2E code: Move duplicated code to the e2e folder to reuse test-utils.
  2. Update namespace tests: Leverage existing login logic to improve code reuse.
  3. Update license year: Change from 2024 to 2025.

@SunsetB612 SunsetB612 force-pushed the add-e2e-workload-deployment branch from 9ac70e2 to 9134631 Compare September 19, 2025 03:02
@warjiang
Contributor

@SunsetB612 cool, thanks for your hard work

@warjiang
Contributor

/lgtm
/approve

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Sep 22, 2025
@karmada-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: warjiang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 22, 2025
@karmada-bot karmada-bot merged commit 8d87479 into karmada-io:main Sep 22, 2025
10 checks passed
