
arguments: support customizing minikube profile name #59

Closed
jgehrcke opened this issue Feb 7, 2023 · 8 comments

@jgehrcke

jgehrcke commented Feb 7, 2023

For symmetry between a local workflow and the 'same' workflow in CI, it makes sense to use a custom minikube profile name.

I'd love to start minikube in GHA with a specific profile name (e.g. foobar) so that later on in a script I can do

    minikube status --profile foobar

The default profile name is "minikube", by the way.
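For reference, this is the local workflow CI would mirror (a minimal sketch; foobar is just an example profile name):

    # start a cluster under a named profile instead of the default "minikube"
    minikube start --profile foobar

    # later: query exactly that cluster by its profile name
    minikube status --profile foobar

    # tear down only this profile's cluster
    minikube delete --profile foobar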

@spowelljr
Collaborator

Thanks for the suggestion @jgehrcke, I agree that's a good feature to add. In the meantime, you can accomplish this using the start-args option:

    uses: medyagh/setup-minikube@latest
    with:
      start-args: '--profile foobar'
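Putting the workaround together with the original goal, the two steps might look like this (a sketch: the step names and the foobar profile are illustrative, and the action version has to be one that actually accepts start-args):

    - name: Start minikube under a custom profile
      uses: medyagh/setup-minikube@latest
      with:
        start-args: '--profile foobar'

    - name: Inspect the cluster by profile name
      run: minikube status --profile foobar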

@jgehrcke
Author

Thank you! I didn't think of this.

jgehrcke added a commit to conbench/conbench that referenced this issue Feb 10, 2023
Got feedback on
medyagh/setup-minikube#59

Helps to make things simpler.
@jgehrcke
Author

jgehrcke commented Feb 10, 2023

Trying this required me to leave @latest (start-args is not supported there) and instead use the head of master. However, the cluster then fails to start:

Log:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0210 13:42:04.579208    2731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1031-azure\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172

Error: The process '/home/runner/bin/minikube' failed with exit code 109

@jgehrcke
Author

jgehrcke commented Feb 10, 2023

Trying this required me to leave @latest

To clarify:

Run medyagh/setup-minikube@latest

Warning: Unexpected input(s) 'start-args', 'extra-config', 'cpus', 'memory', valid inputs are ['minikube-version', 'driver', 'container-runtime', 'kubernetes-version']

Is this a no-op warning, or is it an error?

Edit: start-args was actually not taken into account:

Warning: Unexpected input(s) 'start-args', 'extra-config', 'cpus', 'memory', valid inputs are ['minikube-version', 'driver', 'container-runtime', 'kubernetes-version']
Run medyagh/setup-minikube@latest
/usr/bin/chmod +x /home/runner/work/_temp/58288c26-232e-41c0-a67e-06bd45ac9f73
/home/runner/bin/minikube start --wait all --kubernetes-version v1.24.10

@jgehrcke
Author

However, the cluster then fails to start:

I read the log output more closely. It said:

    try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start

which I then did via:

    uses: medyagh/setup-minikube@e1ee887a96c50e34066a4bf9f172eb94ae69d454
    with:
      kubernetes-version: v1.24.10
      start-args: '--profile mk-conbench --extra-config=kubelet.cgroup-driver=systemd'
      cpus: max
      memory: max

That worked!
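With that in place, later steps can address the cluster by its profile name, e.g. (a sketch; the step name is illustrative):

    - name: Verify the named profile is up
      run: minikube status --profile mk-conbench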

jgehrcke added a commit to conbench/conbench that referenced this issue Feb 10, 2023
* ci: simplify: common minikube profile

Got feedback on
medyagh/setup-minikube#59

Helps to make things simpler.

* ci: do not crash when `minikube status` errors out

* gha: use newer version of medyagh/setup-minikube

The latest release does error out:

Warning: Unexpected input(s) 'start-args'

I did not want to resort to using `master`
because we don't want that kind of a moving
target I think.

* gha: try to fix minikube start

err msg said:
try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
@spowelljr
Collaborator

Glad you got it working @jgehrcke! Thanks for pointing out the issue with latest; it seems @latest is pointing to an old version of setup-minikube. We'll investigate this.

@spowelljr
Collaborator

The latest tag was pointing to v0.0.6; I've updated it to v0.0.11, so you should be able to switch back to latest now if you desire.
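If you'd rather not depend on a moving tag at all, pinning to the release itself should also work (a sketch, assuming v0.0.11 accepts the start-args input as described above):

    uses: medyagh/setup-minikube@v0.0.11
    with:
      start-args: '--profile foobar'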

@medyagh
Owner

medyagh commented Jun 16, 2023

Thanks to both @spowelljr and @jgehrcke for getting this issue documented and solved.

medyagh closed this as completed Jun 16, 2023