
apiserver port is exposed to a random port #11041

Closed
zhan9san opened this issue Apr 9, 2021 · 7 comments
Labels
co/docker-driver: Issues related to kubernetes in container
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

zhan9san (Contributor) commented Apr 9, 2021

Hi,
Currently, the exposed apiserver port changes every time minikube is restarted.

If the apiserver were exposed on a fixed port, it would be convenient to access the cluster remotely; that is, there would be no need to modify the cluster information in the kubeconfig file after every restart.
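
For reference, the manual workaround today is to rewrite the cluster's server address in the kubeconfig after every restart, along these lines (a sketch; the host and port values are illustrative):

    $ kubectl config set-cluster minikube --server=https://172.28.24.96:32828

A fixed port would make this step unnecessary.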

Steps to reproduce the issue:

  1. $ minikube start --driver=docker --listen-address='0.0.0.0' --apiserver-ips=172.28.24.96
  2. $ docker ps
    Get the exposed apiserver port. It's "0.0.0.0:32823->8443/tcp".
  3. $ minikube stop
  4. $ minikube start
  5. $ docker ps
    Get the exposed apiserver port. It's "0.0.0.0:32828->8443/tcp".
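
A quick way to read back just the apiserver mapping, instead of scanning the full docker ps output, is docker port (the output will look something like this):

    $ docker port minikube 8443
    0.0.0.0:32828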

Full output of minikube start command used, if not already included:

x@x-v:~/src/minikube$ ./out/minikube start --driver=docker --listen-address='0.0.0.0' --apiserver-ips=172.28.24.96 
😄  minikube v1.19.0-beta.0 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
💡  minikube is not meant for production use. You are opening non-local traffic
❗  Listening to 0.0.0.0. This is not recommended and can cause a security vulnerability. Use at your own risk
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
x@x-v:~/src/minikube$ docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED              STATUS              PORTS                                                                                                                        NAMES
1ff24847a905        gcr.io/k8s-minikube/kicbase:v0.0.19   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   0.0.0.0:32826->22/tcp, 0.0.0.0:32825->2376/tcp, 0.0.0.0:32824->5000/tcp, 0.0.0.0:32823->8443/tcp, 0.0.0.0:32822->32443/tcp   minikube
x@x-v:~/src/minikube$ minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 nodes stopped.
x@x-v:~/src/minikube$ minikube start
😄  minikube v1.18.1 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
x@x-v:~/src/minikube$ docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                                                                        NAMES
1ff24847a905        gcr.io/k8s-minikube/kicbase:v0.0.19   "/usr/local/bin/entr…"   3 minutes ago       Up About a minute   0.0.0.0:32831->22/tcp, 0.0.0.0:32830->2376/tcp, 0.0.0.0:32829->5000/tcp, 0.0.0.0:32828->8443/tcp, 0.0.0.0:32827->32443/tcp   minikube
zhan9san added commits to zhan9san/minikube that referenced this issue Apr 9, 2021
zhan9san (Contributor, Author) commented

Here is a real scenario.

Inspired by PR10653, I created a Kubernetes cluster on a remote development server and set up a context on my local laptop, following configure-access-multiple-clusters.

Whenever minikube is restarted, I have to reconfigure the remote apiserver port in that kubeconfig.

The related PR11042 would fix this issue.
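
For illustration, the local context setup looks roughly like this (a sketch following the configure-access-multiple-clusters guide; the names, certificate paths, and port are placeholders):

    $ kubectl config set-cluster remote-minikube --server=https://172.28.24.96:32823 --certificate-authority=ca.crt
    $ kubectl config set-credentials remote-minikube-user --client-certificate=client.crt --client-key=client.key
    $ kubectl config set-context remote-minikube --cluster=remote-minikube --user=remote-minikube-user
    $ kubectl config use-context remote-minikube

Only the port in the --server value changes after a restart, but it still has to be updated by hand every time.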

zhan9san added a commit to zhan9san/minikube that referenced this issue Apr 11, 2021
afbjorklund added the kind/feature and co/docker-driver labels Apr 11, 2021
afbjorklund (Collaborator) commented

@medyagh: can you handle this one? I'm not sure that exposing the cluster publicly is a good idea, but since we already allow it with --listen-address, I don't think --listen-port makes it much worse. A random port is only false security, just a port scan away, when not binding to localhost only (as we do with the proxy to the minikube dashboard, for instance).

zhan9san (Contributor, Author) commented

Hi @afbjorklund
Thanks for your attention.
The cluster would be exposed publicly only if '--listen-address' and '--listen-port' are passed explicitly; this simply gives users more options.

A plain 'minikube start' keeps the same safe defaults as before.
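
For illustration, the intended usage with the proposed flag would look roughly like this (a sketch: --listen-port is the flag proposed in PR11042 and is assumed here to fix the host port that the container's 8443 is published on):

    $ minikube start --driver=docker --listen-address='0.0.0.0' --listen-port=8443 --apiserver-ips=172.28.24.96
    $ docker port minikube 8443
    0.0.0.0:8443

With that in place, the kubeconfig entry would stay valid across restarts.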

ilya-zuyev added the priority/important-longterm label Apr 14, 2021
fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jul 13, 2021
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 12, 2021
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
