
Not valid JSON in response #215

Merged
mruoss merged 5 commits into develop from reliability on Jan 19, 2023

Conversation

mruoss (Collaborator) commented Jan 18, 2023

Some weird behaviour was reported here: #48 (comment)

Adding a reliability test to verify.
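
For reference, the test is roughly of this shape: fire many concurrent list requests and assert that every response decodes cleanly. A minimal sketch (not the exact test in this PR), assuming the K8s.Conn.from_file/1, K8s.Client.list/3 and K8s.Client.run/2 calls from the library README; adjust to your client version:

# Hedged sketch of a reliability test: many concurrent list calls,
# every one of them must come back as {:ok, %{"items" => _}}.
{:ok, conn} = K8s.Conn.from_file("~/.kube/config")

all_ok? =
  1..50
  |> Task.async_stream(
    fn _ -> K8s.Client.run(conn, K8s.Client.list("v1", "Pod", namespace: "default")) end,
    max_concurrency: 10
  )
  |> Enum.map(fn {:ok, result} -> result end)
  |> Enum.all?(&match?({:ok, %{"items" => _}}, &1))

true = all_ok?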

Requirements for all pull requests

  • Entry in CHANGELOG.md was created

Additional requirements for new features

  • New unit tests were created
  • New integration tests were created
  • PR description contains a link to the corresponding Kubernetes documentation

mruoss mentioned this pull request on Jan 18, 2023
mruoss (Collaborator, Author) commented Jan 18, 2023

@arathunku

I've added a new test but I can't get it to fail locally... let's see on CI.

arathunku commented Jan 18, 2023

@mruoss that's quick! On CI, is this using ssl, serviceaccount and token auth too? Just in case it doesn't fail, I'm thinking what else might be different.

mruoss (Collaborator, Author) commented Jan 18, 2023

On CI, is this using ssl, serviceaccount and token auth too?

Okay I see... I don't think it does...

arathunku commented Jan 18, 2023

My Conn looks exactly like the one below (some data stubbed):

{:ok,
 %K8s.Conn{
   cluster_name: nil,
   user_name: nil,
   url: "https://kubernetes.default.svc",
   insecure_skip_tls_verify: false,
   ca_cert: <<48, .....>>,
   auth: K8s.Conn.Auth.Token<...>,
   middleware: %K8s.Middleware.Stack{
     request: [K8s.Middleware.Request.Initialize,
      K8s.Middleware.Request.EncodeBody],
     response: []
   },
   discovery_driver: K8s.Discovery.Driver.HTTP,
   discovery_opts: [],
   http_provider: SpotUpkill.K8s.HTTPProvider, # switched, working
   cacertfile: "/usr/src/app/_build/prod/lib/castore/priv/cacerts.pem"
 }}

mruoss (Collaborator, Author) commented Jan 18, 2023

OK, that was not it. Same result locally with token-based auth:

%K8s.Conn{
  cluster_name: "k3d-k8s-ex",
  user_name: "k3d-k8s-ex-admin",
  url: "https://0.0.0.0:53225",
  insecure_skip_tls_verify: true,
  ca_cert: <<48, ...>>,
  auth: K8s.Conn.Auth.Token<...>,
  middleware: %K8s.Middleware.Stack{
    request: [K8s.Middleware.Request.Initialize,
     K8s.Middleware.Request.EncodeBody],
    response: []
  },
  discovery_driver: K8s.Discovery.Driver.HTTP,
  discovery_opts: [],
  http_provider: K8s.Client.MintHTTPProvider,
  cacertfile: "/etc/ssl/cert.pem"
}

mruoss (Collaborator, Author) commented Jan 18, 2023

What's your cluster version?

mruoss (Collaborator, Author) commented Jan 18, 2023

Okay, I was able to reproduce the faulty behaviour on a newer cluster (v1.25.2). I actually found a bug in my implementation. @arathunku can you test again with this branch?
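
For context on the failure class (a hypothetical illustration, not the actual patch in this PR): with Mint, frames for concurrent requests arrive interleaved and tagged with a request_ref, so an HTTP provider has to accumulate :data frames per ref and only JSON-decode the body once {:done, ref} arrives; decoding a single frame on its own yields exactly this kind of "not valid JSON" error. A minimal sketch of that per-request accumulation (module and function names are made up):

defmodule AccumulationSketch do
  # Hypothetical illustration only. `responses` are the frames returned by
  # Mint.HTTP.stream/2; `acc` maps each request_ref to its partial response.
  def collect(responses, acc) do
    Enum.reduce(responses, acc, fn
      {:status, ref, status}, acc ->
        Map.put(acc, ref, %{status: status, body: ""})

      {:headers, _ref, _headers}, acc ->
        acc

      {:data, ref, chunk}, acc ->
        # Append the chunk; a single chunk is usually not valid JSON on its own.
        Map.update!(acc, ref, fn resp -> %{resp | body: resp.body <> chunk} end)

      {:done, ref}, acc ->
        {resp, acc} = Map.pop!(acc, ref)
        # Only the fully assembled body is safe to decode.
        {:ok, _decoded} = Jason.decode(resp.body)
        acc
    end)
  end
end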

mruoss changed the title from "add reliability test for concurrent tests" to "Not valid JSON in response" on Jan 18, 2023
arathunku commented Jan 19, 2023

It's exactly: v1.24.7 @ EKS

I've tested the branch, it works! No more errors.

iex> (0..50) |> Enum.map(fn _ -> SpotUpkill.K8s.list_spot_nodes() end) |> Enum.all?(&match?({:ok, %{}}, &1))
true

Absolutely amazing @mruoss, thank you

mruoss merged commit ab559fd into develop on Jan 19, 2023
mruoss (Collaborator, Author) commented Jan 19, 2023

Thanks again for your help!

mruoss deleted the reliability branch on December 25, 2023