
Minikube docker driver: customize exposed ports #8398

Closed · dsebastien opened this issue Jun 6, 2020 · 16 comments
Labels
  • co/docker-driver: Issues related to kubernetes in container
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • os/linux
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@dsebastien commented Jun 6, 2020

I'd like to know whether there's currently a way to customize the ports exposed by the minikube container when running on Linux with the Docker driver.

Previously, I was using the virtualbox driver and used the following to expose ports I needed to access:

  vboxmanage controlvm "minikube" natpf1 "didowi-web,tcp,127.0.0.1,30000,,30000"
  vboxmanage controlvm "minikube" natpf1 "didowi-web-classic,tcp,127.0.0.1,4200,,30000"
  vboxmanage controlvm "minikube" natpf1 "didowi-database,tcp,127.0.0.1,30300,,30300"
  vboxmanage controlvm "minikube" natpf1 "didowi-database-classic,tcp,127.0.0.1,5984,,30300"
  vboxmanage controlvm "minikube" natpf1 "didowi-gate,tcp,127.0.0.1,30100,,30100"
  vboxmanage controlvm "minikube" natpf1 "didowi-gate-debug,tcp,127.0.0.1,30101,,30101"
  vboxmanage controlvm "minikube" natpf1 "didowi-cli-debug,tcp,127.0.0.1,30200,,30200"
  vboxmanage controlvm "minikube" natpf1 "didowi-ingress-http,tcp,127.0.0.1,30480,,30480"
  vboxmanage controlvm "minikube" natpf1 "didowi-ingress-https,tcp,127.0.0.1,30443,,30443"```

This allowed me to easily expose the ports used by my ingress within the Kubernetes cluster. It felt clean: I could add/remove rules easily through a stable solution, with stable ports, and without having to fiddle with complex configs.

Thanks to this, I could access https://whatever.local:30443 (stable name, stable port) and reach all of the services exposed by my ingress through it; whatever.local is in the hosts file and is also configured within the app.

If it's not currently possible, then do you think it could be added?
@afbjorklund (Collaborator)

I don't think there is a (user-accessible) way to add ports to a running container in Docker:

  $ docker port minikube
  22/tcp -> 127.0.0.1:32771
  2376/tcp -> 127.0.0.1:32770
  5000/tcp -> 127.0.0.1:32769
  8443/tcp -> 127.0.0.1:32768

You can work around it with kubectl port-forward, but we don't export any -p flags (yet?)
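
For example (the service name here is just an example):

  # forward local port 30443 to port 443 of a service inside the cluster
  kubectl port-forward service/my-ingress 30443:443

The port mappings that the docker driver currently creates are hard-coded: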

        // control plane specific options
        params.PortMappings = append(params.PortMappings, oci.PortMapping{
                ListenAddress: oci.DefaultBindIPV4,
                ContainerPort: int32(params.APIServerPort),
        },
                oci.PortMapping{
                        ListenAddress: oci.DefaultBindIPV4,
                        ContainerPort: constants.SSHPort,
                },
                oci.PortMapping{
                        ListenAddress: oci.DefaultBindIPV4,
                        ContainerPort: constants.DockerDaemonPort,
                },
                oci.PortMapping{
                        ListenAddress: oci.DefaultBindIPV4,
                        ContainerPort: constants.RegistryAddonPort,
                },
        )

There was no such flag before either, but VirtualBox looks more flexible than Docker?

The situation is even worse on Docker Desktop, because there you have the VM network too.

@afbjorklund afbjorklund added co/docker-driver Issues related to kubernetes in container kind/feature Categorizes issue or PR as related to a new feature. os/linux priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jun 6, 2020
@dsebastien (Author)

Indeed, ports have to be exposed when containers are created (afaik).

With VirtualBox it works fine since it is possible to create port forwarding rules, which is a great match for my use case.

I have multiple ingresses, but more importantly I use and expose different services/ports during development, so that I can fully develop/debug code running in my development containers.

@afbjorklund (Collaborator)

You could of course just continue to use the VirtualBox driver.

@dsebastien (Author)

@afbjorklund yes of course, that's what I'm doing for now. I was actually not aware that the default had switched to the docker driver; I probably missed it in the release notes.

The thing is, I was hoping to waste fewer resources by using the docker driver instead of the virtualbox one.

But indeed, for now at least I can keep on working ;-))

@itsbpp commented Jul 3, 2020

@afbjorklund I'd like to access the k8s API running in the minikube container on more interfaces than just loopback. Inspecting the minikube container, I saw that the API port is only exposed on the loopback interface:

  ...
  "8443/tcp": [
      {
          "HostIp": "127.0.0.1",
          "HostPort": "32788"
      }
  ]
  ...

Can an option to configure the "HostIp" field be provided so I can set it to match the IP of the desired interface?

EDIT: It can be done in Docker when starting the container (https://docs.docker.com/config/containers/container-networking/#published-ports).
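
For reference, plain Docker lets you pick the host IP when publishing a port; something like this (the IP and image name are just placeholders):

  # publish container port 8443 on a specific host interface instead of loopback
  docker run -d -p 192.168.1.50:8443:8443 some-image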

@afbjorklund (Collaborator)

@bolipereira:
Minikube is only intended for local development; there are other options for deploying a publicly available cluster.

We should provide some better ingress options for exposing apps, but I'm not so sure about the apiserver itself.

Maybe you could describe your use case, in a new issue?

Is it something similar to the generic driver #4733 (for none)?

@itsbpp commented Jul 6, 2020

@afbjorklund I'll open a new issue if you think it makes sense to support my use case:

  • local development work on a laptop
  • a Linux workstation running minikube with the docker driver and a docker registry

Currently, I need to SSH into the machine running the cluster to interact with the k8s API. What I'd like to do is issue kubectl commands to the cluster without having to access it remotely with SSH. That would be possible by exposing the apiserver port on the host. I don't think enabling it by default would be sensible, but I feel that I should have the option to expose the container ports on a network interface other than loopback.
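
For example, if the apiserver port were published on the workstation's LAN interface, I could run kubectl directly from the laptop (the hostname below is a placeholder; TLS verification is skipped only because the certificate wouldn't list that address):

  kubectl --server=https://workstation.local:8443 --insecure-skip-tls-verify get pods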

@afbjorklund (Collaborator)

So if I understand you correctly, it works OK on the workstation but you also want to access it externally (from the laptop).

This scenario is slightly different from the one that I described, where kubernetes is running on a dedicated Linux server.

@itsbpp commented Jul 6, 2020

> So if I understand you correctly, it works OK on the workstation but you also want to access it externally (from the laptop).
>
> This scenario is slightly different from the one that I described, where kubernetes is running on a dedicated Linux server.

@afbjorklund Yes, that's basically it. Do you think adding this feature makes sense? Should I open another issue if that's the case?

@medyagh (Member) commented Jul 13, 2020

That seems like a good feature to add

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 11, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 10, 2020
@priyawadhwa priyawadhwa added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Dec 9, 2020
@AskAlice

/remove-lifecycle rotten

@toonvanstrijp how does #9404 help this situation?

@toonvanstrijp (Contributor)

@AskAlice re-reading this issue, I'm not entirely sure my PR fixes this.

My PR lets you expose extra ports when using the docker driver.

So if you're using the docker driver and add the apiserver port to the ports arg when creating the cluster, it should be exposed and ready to use for you.
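
For example, something like this should expose the apiserver port on the host (the exact ports here are just examples):

  # expose the apiserver port plus a NodePort when creating the cluster
  minikube start --driver=docker --ports=8443:8443 --ports=30443:30443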

or am I missing something here?

@richkuz commented Nov 18, 2021

Not sure if this is exactly what you're asking, but if you only need to forward some exposed container ports out of Minikube so they are accessible on your host machine, you can run an SSH tunnel. This is useful if you are using Minikube as a Docker Desktop replacement and aren't running K8s services.

For example, forward traffic on exposed container port 8000 to the host machine on port 8000:

  ssh -g -L 8000:localhost:8000 -N -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)
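
Here -g allows remote hosts to connect to the forwarded port, -L 8000:localhost:8000 binds host port 8000 and forwards it to port 8000 as seen from inside the minikube container, -N skips running a remote command, and -i points at the SSH key minikube generated for the container.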

@spowelljr (Member)

This was implemented with #9404, closing.
