Support pods/logs subresource streaming #254

Closed
elliottneilclark opened this issue May 12, 2023 · 5 comments · Fixed by #255
@elliottneilclark
Contributor

I want to watch and follow the logs of a pod from a GenServer. I was hoping the following code would work and nothing more would be needed:

    {:ok, _} =
      K8s.Client.connect(
        api_version,
        "pods/log",
        [namespace: namespace, name: name],
        tailLines: 100, follow: true
      )
      |> K8s.Client.put_conn(conn)
      |> K8s.Client.stream_to(self())

It will succeed and then fail a moment later with the following error:

    ** (FunctionClauseError) no function clause matching in K8s.Client.Mint.Request.map_frame/1
        (k8s 2.2.0) lib/k8s/client/mint/request.ex:131: K8s.Client.Mint.Request.map_frame({:binary, "2023-05-10 22:45:10,540 - bootstrapping - INFO - Figuring out my environment\n"})
        (elixir 1.14.4) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
        (k8s 2.2.0) lib/k8s/client/mint/http_adapter.ex:427: K8s.Client.Mint.HTTPAdapter.process_frames/3
        (stdlib 4.3.1) gen_server.erl:1123: :gen_server.try_dispatch/4
        (stdlib 4.3.1) gen_server.erl:1200: :gen_server.handle_msg/6
        (stdlib 4.3.1) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

The exact log line in the frame will vary, of course. The failure makes sense: connect isn't expecting a stream of plain binary frames. So what's the best way to get pods/log support? Should we extend connect to handle binary frames and the different query parameters? Or should a dedicated K8s.Client.logs() function exist?
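For reference, when this works, stream_to/2 sends the mapped frames to the calling process as messages. A minimal sketch of the receiving side of the GenServer, assuming the frames arrive as {:stdout, chunk} tuples (the exact message shapes are an assumption, inferred from map_frame/1 discussed below):

    # Hedged sketch: handle streamed log frames sent via stream_to(self()).
    # The {:stdout, chunk} shape is assumed, not confirmed from the docs.
    def handle_info({:stdout, chunk}, state) do
      # Each chunk is a piece of the pod's log output; print it as-is.
      IO.write(chunk)
      {:noreply, state}
    end

    # Ignore any other stream messages (e.g. open/close notifications).
    def handle_info(_other, state), do: {:noreply, state}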

@mruoss
Collaborator

mruoss commented May 13, 2023

Hi @elliottneilclark

Nice catch. I don't think it's a big deal. When I implemented the Mint client for the connect operation, I was focusing on podExec. Each frame you get from podExec comes with a leading byte annotating the channel (stdout, stderr, error). That's why I pattern match against that first byte in K8s.Client.Mint.Request.map_frame/1.

The binary stream sent when connecting to pods/log, however, is not prefixed. So I think all that's missing is a catch-all clause like this:

    def map_frame({:binary, <<msg::binary>>}), do: {:stdout, msg}
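Put next to the existing clauses, the whole set might look roughly like this (a sketch only; the prefixed clauses are assumed from the Kubernetes remotecommand channel convention, and the catch-all has to come last so it doesn't shadow them):

    # Frames from podExec carry a leading channel byte (assumed mapping
    # per the remotecommand protocol: 1 = stdout, 2 = stderr, 3 = error).
    def map_frame({:binary, <<1, msg::binary>>}), do: {:stdout, msg}
    def map_frame({:binary, <<2, msg::binary>>}), do: {:stderr, msg}
    def map_frame({:binary, <<3, msg::binary>>}), do: {:error, msg}
    # Frames from pods/log are plain binaries; treat them as stdout.
    # This clause must be last, or it would match the prefixed frames too.
    def map_frame({:binary, <<msg::binary>>}), do: {:stdout, msg}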

I will try to add a test and a fix this weekend.

@elliottneilclark
Contributor Author

> So I think all that's missing is something like this:

There's also the filtering of allowed query params:

    Keyword.take(opts, [:stdin, :stdout, :stderr, :tty, :command, :container])

I think this needs to accept the query params that pods/log can put forward. Here's a good list for that: https://github.com/kube-rs/kube/blob/3e7f32e7b046c5428539050674e8d2404e3680d5/kube-core/src/subresource.rs#L40
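Concretely, widening that filter might look something like this (a hedged sketch; the pods/log param names come from the Kubernetes API and the linked kube-rs LogParams, not from existing k8s code):

    Keyword.take(opts, [
      # pods/exec params (already accepted today)
      :stdin, :stdout, :stderr, :tty, :command, :container,
      # pods/log params (per the Kubernetes API / kube-rs LogParams)
      :follow, :previous, :pretty, :timestamps,
      :sinceSeconds, :sinceTime, :tailLines, :limitBytes
    ])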

@mruoss
Collaborator

mruoss commented May 14, 2023

I see, thanks for the hint.

I've created a PR (WIP). I'm still considering splitting the pods/exec and pods/log cases for the params, though: the defaults differ, and so do the allowed params.

@elliottneilclark
Contributor Author

> I've created a PR (WIP).

Thank you. I like that; it's elegant in reusing what's already there.

> I'm still considering splitting the pods/exec and pods/log cases for the params, though.

That makes sense; most of the clients I could find have separate APIs for exec/connect and logging. It would also allow a cleaner API when not trying to follow the logs (i.e., a :get with the log subresource).
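For the non-following case, such a split could enable something like this (a hedged sketch of a one-shot :get on the log subresource; whether the released API looks exactly like this isn't confirmed here):

    # One-shot fetch of a pod's logs: a plain :get instead of connect.
    {:ok, logs} =
      K8s.Client.get("v1", "pods/log", namespace: "default", name: "my-pod")
      |> K8s.Client.put_conn(conn)
      |> K8s.Client.run()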

@mruoss
Collaborator

mruoss commented May 14, 2023

Released with 2.3.0
