Add E2E tests for Workloads/Deployment in the dashboard #250
Conversation
Summary of Changes
Hello @SunsetB612, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the test coverage for the dashboard's Workloads/Deployment management features by introducing a new set of end-to-end tests. These tests validate critical user interactions, from creating and modifying Deployments to listing and viewing their details, ensuring the reliability and stability of the UI.
Highlights
- New E2E Test Suite for Deployments: A comprehensive suite of Playwright end-to-end tests has been added for the Workloads/Deployment section of the dashboard.
- Full CRUD Operations Covered: The new tests cover the complete lifecycle of Deployments, including creation, viewing details, editing, and deletion functionalities.
- Deployment List and View Tests: Specific tests have been added to ensure the correct display of the Deployment list and the ability to view detailed information for individual Deployments.
- Test File Reorganization: The existing E2E test file ui/apps/dashboard/e2e/login/login.spec.ts has been renamed as part of the test suite organization.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request adds a comprehensive set of E2E tests for Deployment workload operations, which is a great addition. The tests cover create, list, edit, view, and delete functionalities. However, there are several areas for improvement to enhance the reliability and maintainability of these tests.
A major concern is the significant code duplication across the new test files, especially for setup (beforeEach), test data generation, and interaction with the Monaco editor. I recommend abstracting these common functionalities into a shared utility file.
Additionally, many tests rely on complex and fragile methods to interact with components (e.g., traversing React's internal fiber tree) and to handle asynchronous operations. This can lead to flaky tests that are hard to maintain. Simplifying these interactions and using Playwright's built-in waiting mechanisms more effectively would greatly improve test stability. I've left specific suggestions on the relevant lines of code.
// Wait for the editor content to load
let yamlContent = '';
let attempts = 0;
const maxAttempts = 30;

const expectedName = responseData?.data?.metadata?.name || '';
const expectedKind = responseData?.data?.kind || '';

while (attempts < maxAttempts) {
  yamlContent = await page.evaluate(() => {
    const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
    return textarea ? textarea.value : '';
  });

  if (yamlContent && yamlContent.length > 0) {
    const containsExpectedName = !expectedName || yamlContent.includes(expectedName);
    const containsExpectedKind = !expectedKind || yamlContent.includes(expectedKind);

    if (containsExpectedName && containsExpectedKind) {
      break;
    }
  }

  await page.waitForTimeout(500);
  attempts++;
}

// If the content is still empty, set it manually from the API response
if (!yamlContent || yamlContent.length === 0) {
  yamlContent = await page.evaluate((apiData) => {
    const data = apiData.data;
    const yaml = `apiVersion: ${data.apiVersion}
kind: ${data.kind}
metadata:
  name: ${data.metadata.name}
  namespace: ${data.metadata.namespace}
spec:
  replicas: ${data.spec.replicas}
  selector:
    matchLabels:
      app: ${data.spec.selector.matchLabels.app}
  template:
    metadata:
      labels:
        app: ${data.spec.template.metadata.labels.app}
    spec:
      containers:
        - name: ${data.spec.template.spec.containers[0].name}
          image: ${data.spec.template.spec.containers[0].image}
          ports:
            - containerPort: ${data.spec.template.spec.containers[0].ports[0].containerPort}`;

    const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
    if (textarea) {
      textarea.value = yaml;
      textarea.focus();
      textarea.dispatchEvent(new Event('input', { bubbles: true }));
      return yaml;
    }
    return '';
  }, responseData);
}

// If the content still cannot be retrieved, report an error
if (!yamlContent || yamlContent.length === 0) {
  throw new Error(`Edit feature error: Monaco editor does not load deployment YAML content. Expected name: "${expectedName}", kind: "${expectedKind}"`);
}
This section of the test is extremely fragile and indicates a significant problem with test reliability. The test includes complex logic with a while loop and a fallback mechanism to manually reconstruct YAML from an API response if the Monaco editor doesn't load its content in time.
This is a test anti-pattern. Tests should not contain fallback application logic. Instead, they should reliably wait for the application to reach the desired state. This complex workaround will be a maintenance nightmare and is a strong indicator of underlying flakiness. The test should be simplified to properly wait for the editor content to be populated by the application itself, likely by waiting for a specific network response to complete and the UI to update accordingly.
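As a rough sketch of that simpler approach (the '/deployment/' URL fragment is an assumed match for the detail endpoint, not confirmed from the dashboard code; expectedName is the value already derived from the API response in this test):

// Wait for the deployment detail response that feeds the editor, then poll the
// rendered Monaco content until it contains the expected name.
await page.waitForResponse(
  (res) => res.url().includes('/deployment/') && res.ok(),
  { timeout: 15000 }
);
await expect
  .poll(async () => {
    return page.evaluate(() => {
      const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement | null;
      return textarea ? textarea.value : '';
    });
  }, { timeout: 15000 })
  .toContain(expectedName);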
// Call React's onChange callback to update the component state
await page.evaluate((yaml) => {
  const findReactFiber = (element: any) => {
    const keys = Object.keys(element);
    return keys.find(key => key.startsWith('__reactFiber') || key.startsWith('__reactInternalInstance'));
  };

  const monacoContainer = document.querySelector('.monaco-editor');
  if (monacoContainer) {
    const fiberKey = findReactFiber(monacoContainer);
    if (fiberKey) {
      let fiber = (monacoContainer as any)[fiberKey];

      while (fiber) {
        if (fiber.memoizedProps && fiber.memoizedProps.onChange) {
          fiber.memoizedProps.onChange(yaml);
          return;
        }
        fiber = fiber.return;
      }
    }
  }

  const dialog = document.querySelector('[role="dialog"]');
  if (dialog) {
    const fiberKey = findReactFiber(dialog);
    if (fiberKey) {
      let fiber = (dialog as any)[fiberKey];

      const traverse = (node: any, depth = 0) => {
        if (!node || depth > 20) return false;

        if (node.memoizedProps && node.memoizedProps.onChange) {
          node.memoizedProps.onChange(yaml);
          return true;
        }

        if (node.child && traverse(node.child, depth + 1)) return true;
        if (node.sibling && traverse(node.sibling, depth + 1)) return true;

        return false;
      };

      traverse(fiber);
    }
  }
}, testDeploymentYaml);
The approach used to update the Monaco editor's content and trigger a state update in React is very complex and fragile. It relies on traversing React's internal Fiber tree (__reactFiber$) to find and call the onChange prop. This is an implementation detail that can easily break with any change to the component structure or React version, leading to flaky tests.
A more robust and simpler approach would be to use Playwright's built-in methods to simulate user input, or at least a less intrusive DOM manipulation. Consider using locator.fill() or dispatching a standard input event.
// A simpler way to set editor content and trigger updates.
await page.evaluate((yaml) => {
const textarea = document.querySelector('.monaco-editor textarea') as HTMLTextAreaElement;
if (textarea) {
textarea.value = yaml;
textarea.focus();
// Dispatch an 'input' event to notify React of the change.
textarea.dispatchEvent(new Event('input', { bubbles: true }));
}
}, testDeploymentYaml);

try {
  await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({
    timeout: 15000
  });
} catch {
  // If it does not show up in the list immediately, this may be a cache or refresh delay,
  // but a successful API response means the deployment was created
}
The try...catch block here swallows potential assertion failures. If toBeVisible fails, the test will not fail but instead proceed, potentially masking bugs in the application. Assertions in tests should be direct. If the created deployment doesn't appear in the list within the timeout, the test should fail. Playwright's auto-waiting mechanism within expect is designed for this purpose.
await expect(page.locator('table').locator(`text=${deploymentName}`)).toBeVisible({
  timeout: 15000
});

test.beforeEach(async ({ page }) => {
  await page.goto(`${baseURL}/login`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.goto(`${baseURL}${basePath}`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});
The beforeEach hook contains redundant steps for authentication. The token is set in localStorage twice. This can be simplified to make the setup cleaner and more efficient. Consider setting the token once before the final navigation or reload. Also, this setup logic is duplicated across multiple test files. It would be best to extract it into a shared utility function to improve maintainability.
await page.goto(`${baseURL}${basePath}`);
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.reload({ waitUntil: 'domcontentloaded' });
await page.waitForSelector('text=Overview', { timeout: 30000 });
@gemini-code-assist could you provide any suggestions for reusing duplicate code across files?
@warjiang That's an excellent question! To address the duplication of the beforeEach setup and other common utilities across your Playwright test files, you can create a shared utility file. This approach centralizes common logic, making your tests more maintainable and easier to update.
I recommend creating a new file, for example, ui/apps/dashboard/e2e/utils/test-utils.ts, with the following content:
import { Page } from '@playwright/test';

// Global constants for E2E tests
export const HOST = process.env.HOST || 'localhost';
export const PORT = process.env.PORT || 5173;
export const baseURL = `http://${HOST}:${PORT}`;
export const basePath = '/multicloud-resource-manage';
export const token = process.env.KARMADA_TOKEN || '';

/**
 * Sets up authentication and navigates to the dashboard overview page.
 * This should be called in a test.beforeEach hook.
 */
export async function setupDashboardAuthentication(page: Page) {
  await page.goto(`${baseURL}${basePath}`);
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
}

/**
 * Generates a unique test deployment YAML.
 */
export function generateTestDeploymentYaml() {
  const timestamp = Date.now();
  return `apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-${timestamp}
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80`;
}

Then, in your test files like ui/apps/dashboard/e2e/workload/deployment/deployment-create.spec.ts, you can refactor them as follows:
- Add an import statement at the top of the file:
  import { test, expect } from '@playwright/test';
  import { setupDashboardAuthentication, generateTestDeploymentYaml, baseURL, basePath, token } from '../../utils/test-utils';
  (Note: The baseURL, basePath, and token constants are exported from test-utils.ts for use in other parts of your tests if needed, but the setupDashboardAuthentication function already uses them internally.)
- Remove the duplicated constants (lines 17-22 in deployment-create.spec.ts).
- Remove the duplicated generateTestDeploymentYaml function (lines 24-45 in deployment-create.spec.ts).
- Replace the test.beforeEach block with a call to the shared function. For the highlighted lines (48-55) in deployment-create.spec.ts, you can use the following suggestion:
Current:
test.beforeEach(async ({ page }) => {
  await page.goto(`${baseURL}/login`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.goto(`${baseURL}${basePath}`, { waitUntil: 'domcontentloaded' });
  await page.evaluate((t) => localStorage.setItem('token', t), token);
  await page.reload({ waitUntil: 'domcontentloaded' });
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});

Suggested:
test.beforeEach(async ({ page }) => {
  await setupDashboardAuthentication(page);
});
await page.waitForSelector('[role="tooltip"]', { timeout: 10000 });

// Click the Confirm button to confirm the deletion
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm)$/ });
The selector for the confirm button is not localized. While the delete button selector correctly handles both Chinese (删除) and English (Delete), the confirm button only looks for Confirm. This could cause the test to fail in a localized environment. For consistency and robustness, this should also handle different languages.
Current:
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm)$/ });

Suggested:
const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm|确认)$/ });
The implementation is OK for now, since there is no need to take the Chinese-language scenario into consideration.
await page.goto(`${baseURL}/login`, { waitUntil: 'networkidle' });
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.goto(`${baseURL}${basePath}`, { waitUntil: 'networkidle' });
await page.evaluate((t) => localStorage.setItem('token', t), token);
await page.reload({ waitUntil: 'networkidle' });
Using waitUntil: 'networkidle' is a good practice for single-page applications as it's generally more reliable than domcontentloaded. It helps ensure that dynamic content fetched via API calls has loaded. For consistency and to improve the reliability of all tests, I recommend using networkidle in the beforeEach hooks of all the new E2E test files.
  await page.waitForSelector('text=Overview', { timeout: 30000 });
});

test('should view deployment details', async ({ page }) => {
This test is incomplete. It navigates to the deployment details view but doesn't perform any assertions to verify that the correct information is displayed. A test without assertions doesn't validate any behavior. You should add expect statements to check for the presence of key information, such as the deployment name, namespace, or labels on the details page.
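For example, a minimal sketch of such assertions (deploymentName here stands for the name used when creating the test deployment, and the text-based locators are illustrative rather than the dashboard's actual markup):

// Verify that key fields of the deployment are rendered on the details page.
await expect(page.getByText(deploymentName)).toBeVisible({ timeout: 15000 });
// The test deployment is created in the default namespace.
await expect(page.getByText('default').first()).toBeVisible();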
const testDeploymentYaml = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-for-view-${Date.now()}
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80`;
This test defines the test deployment YAML inline, whereas other tests like deployment-create.spec.ts use the generateTestDeploymentYaml helper function. This inconsistency makes the tests harder to maintain. For consistency, you should use the shared generateTestDeploymentYaml function here as well. This also reinforces the need to extract common helpers into a shared utility file.
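A sketch of that refactor, assuming the shared helper from the proposed test-utils.ts above:

import { generateTestDeploymentYaml } from '../../utils/test-utils';

// Reuse the shared generator instead of duplicating the YAML literal in this spec.
const testDeploymentYaml = generateTestDeploymentYaml();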
/assign
permissions:
  contents: read # for actions/checkout to fetch code

env:
@gemini-code-assist should we use a new environment variable, or reuse a GitHub build environment variable like RUNNER_DEBUG or ACTIONS_RUNNER_DEBUG?
+1
Gemini did not respond; let me try again.
@gemini-code-assist
should we use a new environment variable, or reuse a GitHub build environment variable like RUNNER_DEBUG or ACTIONS_RUNNER_DEBUG?
Thank you for the follow-up question, @XiShanYongYe-Chang and @warjiang. It's an excellent point to consider the best practice for defining environment variables, especially for debugging.
The env: block in the workflow file is indeed the correct place to define environment variables that should be available to all jobs within the workflow. In this context, the DEBUG environment variable is being used within the Playwright tests (as seen in if(process.env.DEBUG === 'true') in the new test files) to control debug-specific actions like taking screenshots.
GitHub Actions provides its own set of built-in debug environment variables, such as RUNNER_DEBUG (to enable debug logging for the runner) and ACTIONS_STEP_DEBUG (to enable debug logging for specific steps). These are primarily for debugging the GitHub Actions workflow itself, not necessarily for controlling application-level debug flags.
Given that the DEBUG variable is specifically for the application's test logic, defining it as a custom environment variable within the env: block is appropriate. This keeps the application's debug flag separate from the GitHub Actions' internal debugging mechanisms.
To enable this DEBUG flag for the E2E tests, you can add it to the env: block like this:
  cancel-in-progress: true

permissions:
  contents: read # for actions/checkout to fetch code

env:
  DEBUG: 'true'
From the return of Gemini, it appears that using DEBUG is currently acceptable.
const response = await apiRequestPromise;
expect(response.status()).toBe(200);

// Wait for dialog to close
It seems that we need to wait for the resource to be created successfully; maybe we need to add more retries in the dashboard to avoid these kinds of tricks.
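One way to express that wait inside the test, as a hedged sketch (the list endpoint path and the deploymentName variable are assumptions, not the dashboard's confirmed API):

// Poll the list API until the created deployment shows up, instead of relying
// on fixed timeouts after the create dialog closes.
await expect
  .poll(async () => {
    const res = await page.request.get(`${baseURL}/api/v1/deployment?namespace=default`);
    if (!res.ok()) return false;
    const body = await res.json();
    return JSON.stringify(body).includes(deploymentName);
  }, { timeout: 30000 })
  .toBe(true);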
| await page.waitForSelector('[role="tooltip"]', { timeout: 10000 }); | ||
|
|
||
| // 点击Confirm按钮确认删除 | ||
| const confirmButton = page.locator('[role="tooltip"] button').filter({ hasText: /^(Confirm)$/ }); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
the implementaion is ok currently, due to there is no need to take chinese scenario into consideration
cd karmada
export CLUSTER_VERSION=kindest/node:${{ matrix.k8s }}
hack/local-up-karmada.sh
echo "KUBECONFIG=/home/runner/.kube/karmada.config" >> $GITHUB_ENV
Is $GITHUB_ENV used for debugging? I haven't seen it used anywhere else.
if: ${{ env.DEBUG == 'true' }}
run: |
  kubectl cluster-info || echo "Cluster info failed"
  kubectl get nodes -o wide || echo "Get nodes failed"
  kubectl get pods -A || echo "Get pods failed"
  kind get clusters || echo "Kind get clusters failed"
Hi @SunsetB612 @warjiang
I have a question regarding the DEBUG-related content. Do we need to merge it into the main branch? If we do merge it, how do we plan to utilize it subsequently?
Debugging the e2e job takes some time; adding the debug variable will make it easy to debug the pipeline if something goes wrong ~
kubectl --context=karmada-host apply -k karmada-dashboard/artifacts/overlays/nodeport-mode
kubectl --context=karmada-apiserver apply -f karmada-dashboard/artifacts/dashboard/karmada-dashboard-sa.yaml
TOKEN=$(kubectl --context=karmada-apiserver -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}")
echo "KARMADA_TOKEN=$TOKEN" >> $GITHUB_ENV
ditto
Thanks @SunsetB612 We still need to ask @warjiang to help take a review~
I'm really glad to hear that. I still have a lot to learn, and I really appreciate your guidance. I'll keep working to improve and contribute more. ❤️
@SunsetB612 good job, there is still one problem: the duplicate code in test-utils. I think we can lift the code to the
After the problem is resolved, I think we can move the PR forward 🚀
Signed-off-by: SunsetB612 <[email protected]>
@SunsetB612 cool, thanks for your hard work
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: warjiang
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
No description provided.