
new flag --listen-apiserver-port for docker/podman: allow user to specify host port for api-server #11070

Closed
wants to merge 3 commits

Conversation

zhan9san
Contributor

Hi,
This change exposes the apiserver on a fixed host port that stays the same even when minikube is restarted.
Would it be possible to review and approve this change?

Fixes #11041
Thanks

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please sign in with your organization's credentials at https://identity.linuxfoundation.org/projects/cncf to be authorized.
  • If you have done the above and are still having issues with the CLA being reported as unsigned, please log a ticket with the Linux Foundation Helpdesk: https://support.linuxfoundation.org/
  • Should you encounter any issues with the Linux Foundation Helpdesk, send a message to the backup e-mail support address at: [email protected]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot added the cncf-cla: no label (Indicates the PR's author has not signed the CNCF CLA.) on Apr 12, 2021
@k8s-ci-robot
Contributor

Hi @zhan9san. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test label (Indicates a PR that requires an org member to verify it is safe to test.) on Apr 12, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: zhan9san
To complete the pull request process, please assign ilya-zuyev after the PR has been reviewed.
You can assign the PR to them by writing /assign @ilya-zuyev in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/L label (Denotes a PR that changes 100-499 lines, ignoring generated files.) on Apr 12, 2021
@minikube-bot
Collaborator

Can one of the admins verify this patch?

@zhan9san
Contributor Author

Hi,
I signed it.
The email I used for the CNCF CLA is the same one I use in my Git commit messages.
Could you help check the CLA issue?

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA.) and removed the cncf-cla: no label (Indicates the PR's author has not signed the CNCF CLA.) on Apr 12, 2021
@medyagh
Member

medyagh commented Apr 12, 2021

@zhan9san how would this work with multiple clusters? Wouldn't they conflict if there were two clusters on the same machine?

publish := fmt.Sprintf("--publish=%s::%d", pm.ListenAddress, pm.ContainerPort)
// example --publish=127.0.0.17:8443:8443 will get a fixed host port for 8443
publish := ""
if pm.HostPort != 0 {
Member

How about adding a check here to verify that this port is free and usable, and returning an error to the user if it is not?

Contributor Author

OK, I'll add this validation based on what Docker does.
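
For context on the diff excerpt above, here is a minimal, self-contained illustration of how the publish argument changes when a fixed host port is supplied. The struct literal and the fixed-host-port format string are assumptions for this sketch; only the random-host-port Sprintf appears verbatim in the excerpt.

package main

import "fmt"

// portMapping mirrors the fields referenced in the excerpt (assumed shape).
type portMapping struct {
    ListenAddress string
    HostPort      int
    ContainerPort int
}

func main() {
    pm := portMapping{ListenAddress: "127.0.0.1", HostPort: 48443, ContainerPort: 8443}

    // Empty host port: docker/podman picks a random free port on the host.
    fmt.Printf("--publish=%s::%d\n", pm.ListenAddress, pm.ContainerPort)

    // Fixed host port (what --listen-apiserver-port aims for): the apiserver
    // stays reachable on the same host port across restarts.
    if pm.HostPort != 0 {
        fmt.Printf("--publish=%s:%d:%d\n", pm.ListenAddress, pm.HostPort, pm.ContainerPort)
    }
}

With a random host port there is nothing to collide on; with a fixed HostPort, each cluster on the machine needs its own value, which is the conflict raised above.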

@medyagh changed the title from "Add listen-apiserver-port for docker driver" to "docker/podman: allow user to specify host port for api-server listen-apiserver-port" on Apr 12, 2021
@medyagh changed the title from "docker/podman: allow user to specify host port for api-server listen-apiserver-port" to "new flag --listen-apiserver-port for docker/podman: allow user to specify host port for api-server" on Apr 12, 2021
@medyagh
Member

medyagh commented Apr 19, 2021

@zhan9san are you still working on this?

@zhan9san
Contributor Author

Hi @medyagh
Thanks for your time.
The validation is added in the outer layer instead of the driver layer.
I found the implementation of PortAllocator in moby too complex.

For simplicity, I chose net.Listen to verify whether the port is free and usable.
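
A minimal sketch of that net.Listen check, assuming a hypothetical helper name and plain-error handling; the actual PR wires this into minikube's flag validation and reports failures via exit.Message, as shown in the review excerpts below.

package main

import (
    "fmt"
    "net"
)

// checkAPIServerPort reports an error if the requested host port cannot be
// bound, which we treat as "not free and usable".
func checkAPIServerPort(port int) error {
    ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
    if err != nil {
        // Bind failed, so no listener was created and nothing needs closing.
        return fmt.Errorf("port %d is already allocated: %w", port, err)
    }
    // Bind succeeded: release the probe socket again before returning.
    defer ln.Close()
    return nil
}

func main() {
    if err := checkAPIServerPort(48443); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("port 48443 is free and usable")
}

Note that a probe like this is inherently best-effort: another process could still grab the port between the check and the container start.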

Please let me know what you think.

if err != nil {
    exit.Message(reason.Usage, "Sorry, the port provided with the --listen-apiserver-port flag is already allocated: {{.listenAPIServerPort}}.", out.V{"listenAPIServerPort": listenAPIServerPort})
}
err = ln.Close()
Member

Doesn't this need to be a defer?

Contributor Author

Thanks for pointing that out.

The code snippet above has been replaced with a defer.

Using a defer statement ensures that the socket is always closed.

@medyagh
Member

medyagh commented Apr 25, 2021

/ok-to-test

@k8s-ci-robot added the ok-to-test label (Indicates a non-member PR verified by an org member that is safe to test.) and removed the needs-ok-to-test label (Indicates a PR that requires an org member to verify it is safe to test.) on Apr 25, 2021
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11070) |
+----------------+----------+---------------------+
| minikube start | 48.5s    | 48.2s               |
| enable ingress | 35.3s    | 34.7s               |
+----------------+----------+---------------------+

Times for minikube start: 48.1s 49.2s 47.3s 46.5s 51.7s
Times for minikube (PR 11070) start: 50.9s 47.3s 48.1s 48.1s 46.7s

Times for minikube ingress: 34.3s 34.3s 35.2s 38.2s 34.2s
Times for minikube (PR 11070) ingress: 34.3s 34.9s 33.8s 35.7s 34.8s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11070) |
+----------------+----------+---------------------+
| minikube start | 21.7s    | 21.6s               |
| enable ingress | 31.2s    | 29.7s               |
+----------------+----------+---------------------+

Times for minikube start: 22.9s 21.1s 21.6s 21.3s 21.7s
Times for minikube (PR 11070) start: 21.5s 21.7s 21.8s 22.3s 20.8s

Times for minikube ingress: 28.5s 28.5s 33.5s 28.5s 37.0s
Times for minikube (PR 11070) ingress: 28.5s 30.0s 31.5s 30.0s 28.5s

docker driver with containerd runtime
error collecting results for docker driver: timing run 0 with minikube: timing cmd: [out/minikube addons enable ingress]: waiting for minikube: exit status 10

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11070) |
+----------------+----------+---------------------+
| minikube start | 48.9s    | 50.3s               |
| enable ingress | 35.0s    | 36.5s               |
+----------------+----------+---------------------+

Times for minikube start: 48.0s 46.8s 48.7s 50.3s 51.0s
Times for minikube (PR 11070) start: 51.9s 47.3s 48.4s 56.2s 47.5s

Times for minikube ingress: 35.2s 35.3s 34.8s 34.4s 35.2s
Times for minikube (PR 11070) ingress: 35.7s 34.3s 42.3s 36.3s 33.7s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11070) |
+----------------+----------+---------------------+
| minikube start | 21.9s    | 21.6s               |
| enable ingress | 30.7s    | 30.4s               |
+----------------+----------+---------------------+

Times for minikube ingress: 30.0s 29.5s 36.5s 28.0s 29.5s
Times for minikube (PR 11070) ingress: 27.5s 28.0s 32.0s 32.0s 32.5s

Times for minikube start: 22.7s 21.5s 21.0s 22.7s 21.7s
Times for minikube (PR 11070) start: 21.4s 21.2s 22.1s 21.6s 21.6s

docker driver with containerd runtime
error collecting results for docker driver: timing run 0 with minikube: timing cmd: [out/minikube addons enable ingress]: waiting for minikube: exit status 10

@zhan9san
Contributor Author

Hi @medyagh
Please hold on.

I'd like to create another PR for an FAQ document instead of adding redundant logic to the source code.

#11193 and #9404 have made me reconsider whether #11070 is necessary.

Please let me know what you think.

Here is the test of --ports=8443:8443

  1. On remote Ubuntu server, create a k8s cluster
x@x-v:~$ minikube start --listen-address=0.0.0.0 --apiserver-ips=172.28.24.96 --ports=8443:8443
😄  minikube v1.19.0 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase...: 357.67 MiB / 357.67 MiB  100.00% 2.30 MiB
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
💡  minikube is not meant for production use. You are opening non-local traffic
❗  Listening to 0.0.0.0. This is not recommended and can cause a security vulnerability. Use at your own risk
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
  2. On localhost, copy ~/.kube/config, certificate-authority, client-certificate and client-key from the remote server to the corresponding local directories
  3. On localhost, change the apiserver address from the default value (https://192.168.49.2:8443) to the specific one (https://172.28.24.96:8443)
  4. On localhost, change the locations of certificate-authority, client-certificate and client-key in ~/.kube/config to the corresponding local paths
  5. On localhost, run the test
~/Source/k8s-test/svc ❯  kubectl apply -f test.yml
deployment.apps/my-deployment created
service/my-np-service created
~/Source/k8s-test/svc 5m 50s ❯ kubectl get po                                                                                                        13:15:11
NAME                             READY   STATUS    RESTARTS   AGE
my-deployment-6ccc959b54-nb8p2   1/1     Running   0          11m
~/Source/k8s-test/svc ❯ kubectl get svc                                                                                                              13:16:23
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP          16m
my-np-service   NodePort    10.96.152.82   <none>        8080:30080/TCP   11m

test.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: metrics
      department: engineering
  replicas: 1
  template:
    metadata:
      labels:
        app: metrics
        department: engineering
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "8080"

---
apiVersion: v1
kind: Service
metadata:
  name: my-np-service
spec:
  type: NodePort
  selector:
    app: metrics
    department: engineering
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30080

if err != nil {
    exit.Message(reason.Usage, "Sorry, the port provided with the --listen-apiserver-port flag is already allocated: {{.listenAPIServerPort}}.", out.V{"listenAPIServerPort": listenAPIServerPort})
}
defer ln.Close()
Member

this defer needs to happen before Exit

Contributor Author

Hi @medyagh
Thanks so much for your time.

If it goes into exit.Message, that means the socket was never created, so there is nothing to close. Do we still need to modify this snippet?
You're far more familiar with Go than I am. I referred to the logic in example_test.go, at line 23.

Besides, I noticed #9404 already achieves this feature. The only drawback is that #9404 exposes one more random port.
Should we still work on this feature, --listen-apiserver-port?

Please let me know what you think.

Member

@zhan9san You are correct about the defer; if an error is returned, no action is required on the socket.

@k8s-ci-robot
Contributor

@zhan9san: The following test failed, say /retest to rerun all failed tests:

Test: pull-minikube-build
Commit: 19dacad
Rerun command: /test pull-minikube-build

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@zhan9san
Contributor Author

It seems there are some issues in the CI workflow.

@k8s-ci-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Jun 28, 2021
@k8s-ci-robot
Contributor

@zhan9san: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@marwatk
Contributor

marwatk commented Sep 8, 2021

Is anyone still working on this? If not, I can try to drive it forward.

We configure our developer machines via Ansible, and the dynamic k8s port means we have to do a lookup instead of using a fixed value. Not the end of the world, but it would be much easier to use a fixed port.

@zhan9san
Contributor Author

zhan9san commented Sep 9, 2021

Hi @marwatk

Could you please verify whether this workaround satisfies your requirement?
#11070 (comment)

@marwatk
Contributor

marwatk commented Sep 9, 2021

@zhan9san, thanks for the suggestion. I tried that first, and while it works for communication, it still puts the random port in ~/.kube/config. Our (current) workflow needs them to match, since our Ansible installs and configures minikube (where the lookup is currently needed) and our post-install tooling relies on ~/.kube/config as the source of truth.

@zhan9san
Contributor Author

zhan9san commented Sep 9, 2021

Hi @marwatk

Could you please provide the minikube version and kubeconfig?

I tried again (v1.22.0); it does show the fixed port, but the internal IP address for the apiserver.

Since client-certificate, client-key and certificate-authority need to be substituted, you can use an Ansible template or yq to generate a working kubeconfig file.
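
If you would rather script that substitution than hand-edit the file, below is a rough sketch using client-go's clientcmd package as an alternative to the Ansible/yq approach; the kubeconfig path, cluster name and server URL are assumed values taken from the earlier remote-access test, not something this PR provides.

package main

import (
    "log"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumed values: the copied kubeconfig and the remote host's fixed port.
    kubeconfig := "/home/user/.kube/config"
    server := "https://172.28.24.96:8443"

    cfg, err := clientcmd.LoadFromFile(kubeconfig)
    if err != nil {
        log.Fatalf("loading kubeconfig: %v", err)
    }
    cluster, ok := cfg.Clusters["minikube"]
    if !ok {
        log.Fatal(`no "minikube" cluster entry in kubeconfig`)
    }
    // Point the cluster entry at the remote apiserver; certificate paths can
    // be rewritten the same way via cluster.CertificateAuthority and the
    // AuthInfos map if --embed-certs was not used.
    cluster.Server = server
    if err := clientcmd.WriteToFile(*cfg, kubeconfig); err != nil {
        log.Fatalf("writing kubeconfig: %v", err)
    }
    log.Printf("apiserver set to %s", server)
}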

@marwatk
Contributor

marwatk commented Sep 9, 2021

@zhan9san, I'm on 1.22:

$ minikube start \
>     --driver=docker \
>     --embed-certs \
>     --ports=80:80 \
>     --ports=443:443 \
>     --ports=42376:2376 \
>     --ports=48443:8443
😄  minikube v1.22.0 on Ubuntu 18.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=12800MB) ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <SNIP>
    extensions:
    - extension:
        last-update: Thu, 09 Sep 2021 12:34:39 MDT
        provider: minikube.sigs.k8s.io
        version: v1.22.0
      name: cluster_info
    server: https://127.0.0.1:49154 <---note the 49154 instead of 48443 here
  name: minikube
<SNIP>

@zhan9san
Contributor Author

Hi @marwatk

I ran exactly the same command as you, but I still cannot reproduce your issue.

Could you please rename/move your current kubeconfig and re-run the same minikube start command?

Once that is done, run docker port minikube and show me the output:

$ docker port minikube
8443/tcp -> 0.0.0.0:48443
8443/tcp -> :::48443
8443/tcp -> 127.0.0.1:49313
22/tcp -> 127.0.0.1:49317
2376/tcp -> 127.0.0.1:49316
2376/tcp -> 0.0.0.0:42376
2376/tcp -> :::42376
32443/tcp -> 127.0.0.1:49314
443/tcp -> 0.0.0.0:443
443/tcp -> :::443
5000/tcp -> 127.0.0.1:49315
80/tcp -> 0.0.0.0:80
80/tcp -> :::80

@marwatk
Contributor

marwatk commented Sep 10, 2021

Sure enough, deleting my ~/.kube/config prior to starting minikube results in the proper port. Super weird, because minikube delete removes that cluster from the config. And now I can't reproduce it even with an existing config. I wonder if it has to do with Go randomizing map iteration order and minikube just picking the first entry it gets, but I ran a few starts in a loop and never saw the behavior again. 🤷

At any rate, we're moving to --driver=native for our use case, so this is going to be a non-issue for us.

Thanks for your followup!

@zhan9san closed this on Oct 30, 2021
Labels
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • needs-rebase - Indicates a PR cannot be merged because it has merge conflicts with HEAD.
  • ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
  • size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.

Successfully merging this pull request may close these issues.

apiserver port is exposed to a random port
7 participants