
Test and Replace patch fails but works with kubectl #2095

Open
TNonet opened this issue Dec 7, 2024 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@TNonet

TNonet commented Dec 7, 2024

Describe the bug

I can successfully run the following code with kubectl:

kubectl patch pods/POD_NAME -n NAMESPACE --type=json --patch='[{ "op": "test", "path": "/metadata/labels/A", "value": "true" }, { "op": "replace", "path": "/metadata/labels/A", "value": "false" }]'

The pod's A label changes to "false" if the pod originally had the label set to "true"; otherwise the command fails with The request is invalid: the server rejected our request due to an error in our request.

However, following the patch example here results in the NodeJS client returning 422s for the code below.

const patches: {
  op: "replace" | "add" | "test";
  path: string;
  value: string;
}[] = [
  { op: "test", path: "/metadata/labels/A", value: "true" },
  { op: "replace", path: "/metadata/labels/A", value: "false" },
];

const response = await client.patchNamespacedPod(
  podName,
  namespace,
  patches,
  undefined, // pretty
  undefined, // dryRun
  undefined, // fieldManager
  undefined, // force
  {
    headers: {
      "Content-Type": PatchUtils.PATCH_FORMAT_JSON_PATCH,
    },
  },
);

If I remove either of the patches, the request works, either testing correctly or applying the replace, but I can't get both to run together without a 422.
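
For reference, these are the two single-op variants just described; each succeeds when sent alone, and only the combination returns a 422 (the variable names here are just for illustration):

const testOnly = [{ op: "test", path: "/metadata/labels/A", value: "true" }];
const replaceOnly = [{ op: "replace", path: "/metadata/labels/A", value: "false" }];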

Client Version
0.15.0
I also tried 0.22.3 and it doesn't work either.

Server Version

>>> kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.8-dispatcher", GitCommit:"0153a2044a32f6ee6844207ec1d2de34710d875b", GitTreeState:"clean", BuildDate:"2023-11-28T19:02:53Z", GoVersion:"go1.20.11", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-15T00:38:14Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/arm64"}

To Reproduce
See the description of the bug above.

Expected behavior
I would expect a valid combination of test + replace + add to work, similar to the kubectl version of the command.

Example Code
See the code snippet in the bug description above.

Environment (please complete the following information):

  • OS: macOS 14.4 (23E214)
  • NodeJS Version: v20.15.0
  • Cloud runtime: NA

Additional context
I am running kind (kind version 0.20.0) locally instead of interacting with a cloud k8s cluster. I created it with kind create cluster --image=kindest/node:v1.27.3

@brendandburns
Contributor

What message does the 422 error return? Hopefully there is some more info in there about why it doesn't like it.

I would also try adding --v=10 to the kubectl command line; that will print the actual body that kubectl is sending, and it's possible we can use that to identify any differences between what kubectl is sending and what we're sending.
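
For example, with the flag appended to the command from the description:

kubectl patch pods/POD_NAME -n NAMESPACE --type=json --v=10 --patch='[{ "op": "test", "path": "/metadata/labels/A", "value": "true" }, { "op": "replace", "path": "/metadata/labels/A", "value": "false" }]'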

@TNonet
Author

TNonet commented Dec 8, 2024

Ok, so --v=10 is pretty verbose. I tried to redact as much as I could in this GitHub gist. Let me know if I went too far.

Looking at the Node client source code, I added a console.error here, just before the Node client begins the request:

{
  localVarRequestOptions: {
    method: "PATCH",
    qs: {},
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json-patch+json",
    },
    uri: "https://127.0.0.1:65061/api/v1/namespaces/NAMESPACE/pods/POD_NAME",
    useQuerystring: false,
    json: true,
    body: [
      { op: "test", path: "/metadata/labels/A", value: "true" },
      { op: "replace", path: "/metadata/labels/A", value: "false" },
    ],
  },
}
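
As a sanity check (a suggestion, not something already tried in this thread), the exact bytes the client serializes can be printed and diffed against the request body and Content-Length in kubectl's --v=10 output; patches is the array from the snippet above:

// Hypothetical debugging aid: compare this output against kubectl's --v=10 log.
const body = JSON.stringify(patches);
console.log(body, Buffer.byteLength(body));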

If I stringify the error that is thrown, I get:

{
  response: {
    statusCode: 422,
    body: {
      kind: "Status",
      apiVersion: "v1",
      metadata: {},
      status: "Failure",
      message:
        "the server rejected our request due to an error in our request",
      reason: "Invalid",
      details: {},
      code: 422,
    },
    headers: {
      "audit-id": "d3074905-28a8-436c-8a3e-fa7251e2d36f",
      "cache-control": "no-cache, private",
      "content-type": "application/json",
      "x-kubernetes-pf-flowschema-uid":
        "98385574-9e1b-41c8-83fb-cf2968d4fd62",
      "x-kubernetes-pf-prioritylevel-uid":
        "e4086ae8-e9e4-43b1-95bd-28435bc52785",
      date: "Sun, 08 Dec 2024 17:04:17 GMT",
      "content-length": "187",
      connection: "close",
    },
    request: {
      uri: {
        protocol: "https:",
        slashes: true,
        auth: null,
        host: "127.0.0.1:65061",
        port: "65061",
        hostname: "127.0.0.1",
        hash: null,
        search: null,
        query: null,
        pathname:
          "/api/v1/namespaces/hex-dev-local/pods/hex-dev-local-python-kernel-prewarmed-kernels-python39-smad7gms",
        path: "/api/v1/namespaces/hex-dev-local/pods/hex-dev-local-python-kernel-prewarmed-kernels-python39-smad7gms",
        href: "https://127.0.0.1:65061/api/v1/namespaces/hex-dev-local/pods/hex-dev-local-python-kernel-prewarmed-kernels-python39-smad7gms",
      },
      method: "PATCH",
      headers: {
        Accept: "application/json",
        "Content-Type": "application/json-patch+json",
        "content-length": 135,
      },
    },
  },
  body: { apiVersion: "v1", kind: "Status", metadata: {}, status: {} },
  statusCode: 422,
  name: "HttpError",
}

@brendandburns
Contributor

So the only thing that I notice is that there is a trailing comma in your list of patches. I wouldn't think that would cause problems, but maybe?

kubectl is also sending a fieldManager query parameter (fieldManager=kubectl-patch) which the client isn't, so maybe that's it?
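
If that hypothesis were right, the call from the description would set it like this; fieldManager is the third of the four undefined arguments, and "kubectl-patch" simply mirrors kubectl's query parameter:

const response = await client.patchNamespacedPod(
  podName,
  namespace,
  patches,
  undefined, // pretty
  undefined, // dryRun
  "kubectl-patch", // fieldManager, mirroring kubectl's query parameter
  undefined, // force
  {
    headers: {
      "Content-Type": PatchUtils.PATCH_FORMAT_JSON_PATCH,
    },
  },
);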

@TNonet
Author

TNonet commented Dec 9, 2024

The trailing comma was added by my formatter and does not appear in the actual request parameters. I set the fieldManager to kubectl-patch and it still doesn't work.

Going to continue poking at this and see what's up.
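
One way to poke at it (a rough sketch, not a confirmed fix): replay the same JSON Patch outside the generated client with Node's https module, reusing the kubeconfig credentials. This assumes KubeConfig.applyToHTTPSOptions fills in the TLS/auth options the way the client's own examples use it, and that the code runs inside an async function with podName, namespace, and patches in scope:

import * as https from "https";
import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const cluster = kc.getCurrentCluster();
if (!cluster) throw new Error("no current cluster in kubeconfig");

// Same path the client logged above, resolved against the current cluster.
const url = new URL(`/api/v1/namespaces/${namespace}/pods/${podName}`, cluster.server);
const body = JSON.stringify(patches);

const opts: https.RequestOptions = {
  method: "PATCH",
  headers: {
    "Content-Type": "application/json-patch+json",
    "Content-Length": Buffer.byteLength(body),
  },
};
// Copies ca/cert/key/token from the kubeconfig onto the request options;
// awaiting is harmless on client versions where this method is synchronous.
await kc.applyToHTTPSOptions(opts);

const req = https.request(url, opts, (res) => {
  console.log(res.statusCode);
  res.pipe(process.stdout); // dump the Status body the API server returns
});
req.end(body);

If this raw request succeeds where patchNamespacedPod 422s, the problem is in the client's request construction; if it also 422s, the payload itself is what the server dislikes.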

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 9, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 8, 2025