
Publish additional ports and IPs when running minikube start #9198

Closed
arielmoraes opened this issue Sep 7, 2020 · 9 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@arielmoraes

When the minikube start command runs with the Docker driver, it tells Docker to publish some ports, but it binds them to the localhost IP only, so there is no way to access the cluster from another machine, for example with kubectl. Another issue is accessing a Service via an Ingress; in that case we need the same solution, which is to add some entries to iptables as follows:

iptables -I DOCKER-USER 1 ! -i docker0 -o docker0 -p tcp -j ACCEPT -d $(minikube ip) --dport 443
iptables -t nat -A DOCKER ! -i docker0 -p tcp -j DNAT --dport 443 --to-destination $(minikube ip):443

I could also execute similar commands to bind port 8443 to 0.0.0.0, so that kubectl can be used from outside the local minikube machine.
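
For reference, the analogous rules for the apiserver port would look roughly like this (a sketch only, assuming the same docker0 bridge setup as the 443 rules above; run as root):

# Allow forwarded traffic to the minikube node on the apiserver port
iptables -I DOCKER-USER 1 ! -i docker0 -o docker0 -p tcp -j ACCEPT -d $(minikube ip) --dport 8443
# DNAT externally arriving traffic on 8443 to the minikube node
iptables -t nat -A DOCKER ! -i docker0 -p tcp -j DNAT --dport 8443 --to-destination $(minikube ip):8443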

Does minikube provide a better way of doing that, or does this feature not exist?

@afbjorklund (Collaborator) commented Sep 8, 2020

When the minikube start command runs with the Docker driver, it tells Docker to publish some ports, but it binds them to the localhost IP only, so there is no way to access the cluster from another machine, for example with kubectl.

This is by design. The minikube cluster is not supposed to be available "outside" the local developer machine; currently the recommended approach is to ssh into the minikube host if you want remote access. This goes for all the drivers, not only docker.

There are actually two different issues here. The first is accessing the apiserver remotely (for some reason), typically with kubectl proxy. The other is making a deployed application available remotely, typically with kubectl port-forward (or perhaps a proper ingress).
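
For illustration, a minimal sketch of those two approaches, run on the machine hosting minikube (the service name and port numbers are only examples):

# 1. Apiserver: expose it via kubectl proxy on all interfaces (note: the proxy adds no authentication of its own)
kubectl proxy --address 0.0.0.0 --accept-hosts '.*' --port 8001
# 2. Application: forward a port on all interfaces to a Service in the cluster
kubectl port-forward --address 0.0.0.0 service/my-service 8080:80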

This probably needs better explanation in the documentation, because it is a recurring question (and perhaps expectation).

Similar to: #8008 #8398

Requesting people to set up tunnels does not improve security.

@afbjorklund afbjorklund added kind/feature Categorizes issue or PR as related to a new feature. triage/discuss Items for discussion kind/documentation Categorizes issue or PR as related to documentation. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Sep 8, 2020
@afbjorklund (Collaborator) commented Sep 8, 2020

The currently exposed ports are:

  • 22 for ssh (tunneled)
  • 2376 for old docker
  • 8443 for apiserver
  • 5000 for registry hack

There is a pending request to make it possible for the user to add more (#7332), but that doesn't really answer the question...

We probably don't need to expose docker anymore, and the registry port is just to get around the TLS certificate requirement.

  1. DOCKER_HOST=ssh://USER@HOST:PORT
  2. https://docs.docker.com/registry/deploying/

Currently we use ssh through a local tunnel, but it is also listening on the node directly. Maybe it should be on loopback only?
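
As an illustration of alternative 1 above, a minimal sketch of reaching the docker daemon over ssh instead of exposing port 2376 (USER, HOST and PORT are placeholders for the minikube host's ssh details):

# Point the docker CLI at a remote daemon over ssh (requires ssh access to the host)
export DOCKER_HOST=ssh://USER@HOST:PORT
docker ps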

@arielmoraes (Author)

I think the main reason this is asked a lot is that minikube is a very easy way of starting a new k8s cluster. So some people will want to use it as a "home cluster", which gives them the possibility to host some small applications, like a DLNA server, etc., thus going beyond just a developer cluster.

Now for the design part, I think it would be nice to leave the decision of which ports to expose to the user; that would close all those issues, wouldn't it? 😄
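
For illustration only, such an option might look roughly like the following; the --ports flag name and its docker-style publish syntax are hypothetical here, not an existing minikube option in this discussion:

# Hypothetical flag (illustrative only): publish extra host ports on all interfaces
minikube start --driver=docker --ports 0.0.0.0:443:443 --ports 0.0.0.0:8443:8443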

@afbjorklund (Collaborator) commented Sep 8, 2020

I think the main reason this is asked a lot is that minikube is a very easy way of starting a new k8s cluster.

One could also view this as saying that kubeadm is too complicated, so people turn to minikube for some help?

It would be nice to cooperate a bit more, so that there are alternatives available for both development and deployment.
There should be an easy path to follow for how to start with containers and images and then move on to orchestration and clusters.
But it also quickly becomes associated with particular vendors and cloud providers, and it is hard to please multiple audiences.
Both running kubeadm locally (with the none driver) and exposing it externally are popular options, whether we "like it" or not...

@tstromberg (Contributor)

Discussed with the team; it seems we would be willing to accept a PR that introduces this functionality.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 15, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 14, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
