minikube tunnel #3015
@@ -0,0 +1,87 @@
/*
Copyright 2018 The Kubernetes Authors All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"context"
	"flag"
	"github.com/golang/glog"
	"github.com/spf13/cobra"
	"k8s.io/minikube/pkg/minikube/config"
	"k8s.io/minikube/pkg/minikube/machine"
	"k8s.io/minikube/pkg/minikube/service"
	"k8s.io/minikube/pkg/minikube/tunnel"
	"os"
	"os/signal"
)

var cleanup bool

// tunnelCmd represents the tunnel command
var tunnelCmd = &cobra.Command{
	Use:   "tunnel",
	Short: "tunnel makes services of type LoadBalancer accessible on localhost",
	Long:  `tunnel creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP`,
	PersistentPreRun: func(cmd *cobra.Command, args []string) {
		RootCmd.PersistentPreRun(cmd, args)
	},
	Run: func(cmd *cobra.Command, args []string) {
		flag.Lookup("logtostderr").Value.Set("true")
		manager := tunnel.NewManager()

		if cleanup {
			glog.Info("Checking for tunnels to cleanup...")
			if err := manager.CleanupNotRunningTunnels(); err != nil {
				glog.Errorf("error cleaning up: %s", err)
			}
			return
		}

		glog.Infof("Creating docker machine client...")
		api, e := machine.NewAPIClient()
		if e != nil {
			glog.Fatalf("error creating dockermachine client: %s", e)
		}
		machineName := config.GetMachineName()
Review comment: To maximize readability context, move this variable definition to just before you use it. Or, in this case, just use `config.GetMachineName()` for the one usage it has.
		glog.Infof("Creating k8s client...")
		clientset, e := service.K8s.GetClientset()
		if e != nil {
			glog.Fatalf("error creating K8S clientset: %s", e)
		}
		v1 := clientset.CoreV1()
Review comment: Move this variable to just before it's required, or omit its declaration altogether.
		ctrlC := make(chan os.Signal, 1)
		signal.Notify(ctrlC, os.Interrupt)
		ctx, cancel := context.WithCancel(context.Background())
		go func() {
			<-ctrlC
			cancel()
		}()

		done, e := manager.StartTunnel(ctx, machineName, api, config.Loader, v1)
		if e != nil {
			glog.Fatalf("error starting tunnel: %s", e)
		}
		<-done
	},
}
func init() {
	tunnelCmd.Flags().BoolVarP(&cleanup, "cleanup", "c", false, "call with cleanup=true to remove old tunnels")
Review comment: Do you mind describing how leaving this default to false helps users? Would it otherwise clean up tunnels that are still useful to them?

Reply: I guess we can change it to auto-cleanup and try cleaning up every time, as cleanup does leave the useful tunnels (still running ones) in place. I was just operating under the assumption that

Reply: The cleanup command is supposed to only clean up routes that are exact matches of minikube CIDR -> minikube IP where the process is not running. The chances that a user has created an exact same route that she doesn't want deleted is minimal, so I guess we can safely include this at the start as well. However a standalone cleanup command is still useful to have without having to start a tunnel command -
	RootCmd.AddCommand(tunnelCmd)
}
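Following the cleanup discussion above, a hypothetical sketch of the "always try to clean up stale tunnels first" variant; the interface and helper below are illustrative stand-ins, not part of this diff:

```go
package cmd

import "github.com/golang/glog"

// tunnelManager models only the method discussed in the review thread above.
type tunnelManager interface {
	CleanupNotRunningTunnels() error
}

// runTunnel always tries to remove stale tunnel routes first, and keeps the
// --cleanup flag as a standalone mode that exits after the cleanup pass.
func runTunnel(m tunnelManager, cleanupOnly bool, start func() error) error {
	if err := m.CleanupNotRunningTunnels(); err != nil {
		glog.Errorf("error cleaning up stale tunnels: %s", err)
	}
	if cleanupOnly {
		return nil
	}
	return start()
}
```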
@@ -43,12 +43,13 @@ var updateContextCmd = &cobra.Command{
 		os.Exit(1)
 	}
 	defer api.Close()
-	ip, err := cluster.GetHostDriverIP(api)
+	machineName := config.GetMachineName()
+	ip, err := cluster.GetHostDriverIP(api, machineName)
 	if err != nil {
 		glog.Errorln("Error host driver ip status:", err)
 		cmdUtil.MaybeReportErrorAndExit(err)
 	}
-	kstatus, err := kcfg.UpdateKubeconfigIP(ip, constants.KubeconfigPath, config.GetMachineName())
+	kstatus, err := kcfg.UpdateKubeconfigIP(ip, constants.KubeconfigPath, machineName)
Review comment: This issue was here before, but kstatus is a rather non-idiomatic way to declare a bool, particularly one that returns whether or not the IP was reconfigured. Use "ok". I was expecting "kstatus" to be some sort of rich status message reflecting the state of a Kubernetes cluster.

Reply: ack
 	if err != nil {
 		glog.Errorln("Error kubeconfig status:", err)
 		cmdUtil.MaybeReportErrorAndExit(err)
@@ -10,3 +10,40 @@ To determine the NodePort for your service, you can use a `kubectl` command like
We also have a shortcut for fetching the minikube IP and a service's `NodePort`:

`minikube service --url $SERVICE`

### LoadBalancer emulation (`minikube tunnel`)

Services of type `LoadBalancer` can be exposed via the `minikube tunnel` command.

````shell
$ sudo "PATH=$PATH" "HOME=$HOME" minikube tunnel
[sudo] password for *****:
INFO[0000] Creating docker machine client...
INFO[0000] Creating k8s client...
INFO[0000] Setting up router...
INFO[0000] Setting up tunnel...
INFO[0000] Started minikube tunnel. Tunnel requires root access. Please wait for the password prompt!
INFO[0005] Adding route for CIDR 10.96.0.0/12 to gateway 192.168.39.90
INFO[0005] About to run command: [sudo ip route add 10.96.0.0/12 via 192.168.39.90]
INFO[0005]
TunnelState:
	minikube: Running
	route: 10.96.0.0/12 -> 192.168.39.90
	services: []
INFO[0055] Patched nginx with IP 10.101.207.169
TunnelState:
	minikube: Running
	route: 10.96.0.0/12 -> 192.168.39.90
	services: [nginx]
````

Review comment (on code snippet formatting): Context: https://kubernetes.io/docs/contribute/style/style-guide/#code-snippet-formatting

Review comment: Is `HOME=$HOME` necessary? sudo by default does not modify $HOME, although this could of course be set in /etc/sudoers. Passing PATH in scares me slightly, from a security point of view. Which command makes it necessary to do so?

Review comment: Question: is it possible for minikube to run only the necessary underlying commands that need root access, rather than running the entire thing as root? Running minikube as root seems like a great risk for system stability and security, and could potentially cause issues around file ownership. For example, the root user may not have permission to make modifications to ~/.minikube, or the regular usage of "minikube logs" may not have permissions to the logs created in this sudo mode. Small coding mistakes can be very expensive as root: ValveSoftware/steam-for-linux#3671

Review comment (on the "Started minikube tunnel" log line): I'm not actually sure this output is strictly useful for users. Though it may be more useful if you highlight the specific useful information and how they can use it with a web browser to visit the site.

Review comment: Is this for sudo? sudo won't always prompt for a password.

Reply: true, it doesn't. Also, this is actually old, I have removed this line since. I should remove it from the docs as well. Good catch!

`minikube tunnel` runs as a separate daemon and creates a network route on the host to the service CIDR of the cluster, using the cluster's IP address as a gateway.
Adding a route requires root privileges for the user, and thus there are differences in how to run `minikube tunnel` depending on the OS.

The recommended way to run it on Linux with the KVM2 driver and on macOS with the Hyperkit driver:

`sudo "PATH=$PATH" "HOME=$HOME" minikube tunnel`

When using VirtualBox on Windows, _both_ `minikube start` and `minikube tunnel` need to be started from the same Administrator user session, otherwise [VBoxManage can't recognize the created VM](https://forums.virtualbox.org/viewtopic.php?f=6&t=81551).
@@ -0,0 +1,144 @@
# Minikube Tunnel Design Doc

## Background

Minikube today only exposes a single IP address for all cluster and VM communication.
This effectively requires users to connect to any running Pods, Services or LoadBalancers over NodePorts, which can require modifications to workflows when compared to developing against a production cluster.

A main goal of Minikube is to minimize the differences required in code and configuration between development and production, so this is not ideal.
If all cluster IP addresses and Load Balancers were made available on the minikube host machine, these modifications would not be necessary and users would get the "magic" experience of developing from inside a cluster.

Tools like telepresence.io, sshuttle, and the OpenVPN chart provide similar capabilities already.

Also, Steve Sloka has provided a very detailed guide on how to set up a similar configuration [manually](https://stevesloka.com/2017/06/12/access-minikube-service-from-linux-host/).

Elson Rodriguez has provided a similar guide, including a Minikube [external LB controller](https://github.com/elsonrodriguez/minikube-lb-patch).

## Example usage

```shell
$ minikube tunnel
Starting minikube tunnel process. Press Ctrl+C to exit.
All cluster IPs and load balancers are now available from your host machine.
```

## Overview

We will introduce a new command, `minikube tunnel`, that must be run with root permissions.
This command will:

* Establish networking routes from the host into the VM for all IP ranges used by Kubernetes.
* Enable a cluster controller that allocates external IPs to services of type `LoadBalancer`.
* Clean up routes and IPs when stopped, or when `minikube` stops.

Additionally, we will introduce a Minikube LoadBalancer controller that manages a CIDR of IPs and assigns them to services of type `LoadBalancer`.
These IPs will also be made available on the host machine.

## Network Routes

Minikube drivers usually establish "host-only" IP addresses (192.168.1.1, for example) that route into the running VM
from the host.

The new `minikube tunnel` command will create a static routing table entry that maps the CIDRs used by Pods, Services and LoadBalancers to the host-only IP, obtainable via the `minikube ip` command.

The commands below detail adding routes for the entire `/8` block; we should probably add individual entries for each CIDR we manage instead.

### Linux

Route entries for the entire 10.* block can be added via:

```shell
sudo ip route add 10.0.0.0/8 via $(minikube ip)
```

and deleted via:

```shell
sudo ip route delete 10.0.0.0/8
```

The routing table can be queried with `netstat -nr -f inet`

### OSX

Route entries can be added via:

```shell
sudo route -n add 10.0.0.0/8 $(minikube ip)
```

and deleted via:

```shell
sudo route -n delete 10.0.0.0/8
```

The routing table can be queried with `netstat -nr -f inet`

### Windows

Route entries can be added via:

```shell
route ADD 10.0.0.0 MASK 255.0.0.0 <minikube ip>
```

and deleted via:

```shell
route DELETE 10.0.0.0
```

The routing table can be queried with `route print -4`
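
The tunnel process itself is expected to run such commands (the sample output earlier shows `About to run command: [sudo ip route add ...]`). A hedged sketch of how that per-OS dispatch could look; `addRoute` is a hypothetical helper, not the actual minikube implementation:

```go
package tunnel

import (
	"fmt"
	"os/exec"
	"runtime"
)

// addRoute shells out to the platform routing tool, mirroring the commands above.
func addRoute(cidr, gateway string) error {
	var cmd *exec.Cmd
	switch runtime.GOOS {
	case "linux":
		cmd = exec.Command("sudo", "ip", "route", "add", cidr, "via", gateway)
	case "darwin":
		cmd = exec.Command("sudo", "route", "-n", "add", cidr, gateway)
	default:
		// Windows wants "route ADD <network> MASK <mask> <gateway>", which requires
		// splitting the CIDR into network and mask first; omitted in this sketch.
		return fmt.Errorf("unsupported OS in this sketch: %s", runtime.GOOS)
	}
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("adding route %s via %s failed: %v: %s", cidr, gateway, err, out)
	}
	return nil
}
```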

### Handling unclean shutdowns

An unclean shutdown of the tunnel process can result in a partially executed cleanup, leaving network routes in the routing table.
We will keep track of the routes created by each tunnel in a centralized location in the main minikube config directory.
This list serves as a registry for tunnels and contains, for each tunnel:
- the machine profile
- the process ID
- the route that was created
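
A rough sketch of what one registry entry could look like, assuming the registry is persisted as JSON in the minikube config directory (the file name and field names below are assumptions, not the real minikube types):

```go
package tunnel

import (
	"encoding/json"
	"os"
)

// tunnelEntry is an illustrative registry record for a single tunnel.
type tunnelEntry struct {
	MachineProfile string `json:"machineProfile"` // e.g. "minikube"
	Pid            int    `json:"pid"`            // process ID of the tunnel
	RouteCIDR      string `json:"routeCIDR"`      // e.g. "10.96.0.0/12"
	RouteGateway   string `json:"routeGateway"`   // e.g. "192.168.39.90"
}

// saveRegistry persists all known tunnels, e.g. to ~/.minikube/tunnels.json
// (the path is an assumption).
func saveRegistry(path string, tunnels []tunnelEntry) error {
	data, err := json.MarshalIndent(tunnels, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0644)
}
```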

The cleanup command cleans the routes from both the routing table and the registry for tunnels that are not running:

```shell
minikube tunnel --cleanup
```
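
Since the registry records the process ID, deciding whether a tunnel is "not running" can be a simple liveness probe on that pid. A Unix-only sketch (hypothetical helper, not the actual minikube code):

```go
package tunnel

import (
	"os"
	"syscall"
)

// isRunning reports whether the process with the given pid is still alive.
// Signal 0 performs error checking only and delivers no signal (Unix-only).
func isRunning(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}
```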

Updating the tunnel registry and the routing table is an atomic transaction:

- create route in the routing table + create registry entry if both are successful, otherwise rollback
- delete route in the routing table + remove registry entry if both are successful, otherwise rollback
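
A minimal Go sketch of the "create" half of that transaction; the function arguments are placeholders for the real routing table and registry operations:

```go
package tunnel

import "fmt"

// registerRoute adds the route, then records it in the registry, rolling the
// route back if the registry write fails so that the routing table and the
// registry stay consistent.
func registerRoute(addRoute, addRegistryEntry, removeRoute func() error) error {
	if err := addRoute(); err != nil {
		return err
	}
	if err := addRegistryEntry(); err != nil {
		if rbErr := removeRoute(); rbErr != nil {
			return fmt.Errorf("registry update failed (%v) and route rollback failed (%v)", err, rbErr)
		}
		return err
	}
	return nil
}
```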

*Note*: because we currently don't support a real multi-cluster setup (due to overlapping CIDRs), the handling of running/not-running processes is not strictly required; however, it is forward looking.

### Handling routing table conflicts

A routing table conflict happens when a destination CIDR of the route required by the tunnel overlaps with an existing route.
Minikube tunnel will warn the user if this happens and should not create the rule.
There should not be any automated removal of conflicting routes.
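
Since CIDR blocks are either disjoint or nested, two destination CIDRs conflict exactly when one contains the other's network address; a small sketch of that check using the standard library:

```go
package tunnel

import "net"

// cidrsOverlap reports whether two destination CIDRs conflict, i.e. whether
// either network contains the other's network address.
func cidrsOverlap(a, b string) (bool, error) {
	_, netA, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, netB, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return netA.Contains(netB.IP) || netB.Contains(netA.IP), nil
}
```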

*Note*: If the user removes the minikube config directory, this might leave conflicting rules in the network routing table that will have to be cleaned up manually.

## Load Balancer Controller

In addition to making IPs routable, minikube tunnel will assign an external IP (the ClusterIP) to all services of type `LoadBalancer`.

The logic of this controller will be, roughly:

```python
for service in services:
    if service.type == "LoadBalancer" and len(service.ingress) == 0:
        add_ip_to_service(ClusterIP, service)
sleep
```

Note that the Minikube ClusterIP can change over time (during system reboots) and this loop should also handle reconciliation of those changes.
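
A Go sketch of that reconciliation loop; it uses a small hypothetical interface in place of the real client-go types so the example stays self-contained:

```go
package tunnel

import "time"

// lbService is a minimal stand-in for a Kubernetes Service of type LoadBalancer.
type lbService struct {
	Name       string
	Type       string
	IngressIPs []string
}

// serviceClient abstracts the cluster calls the controller needs; in minikube
// this would be backed by the real Kubernetes client.
type serviceClient interface {
	ListServices() ([]lbService, error)
	PatchIngressIP(name, ip string) error
}

// runLoadBalancerController mirrors the pseudocode above. The IP to assign is
// re-read on every pass via getIP, because it can change across reboots.
func runLoadBalancerController(c serviceClient, getIP func() string, interval time.Duration, stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		case <-time.After(interval):
		}
		services, err := c.ListServices()
		if err != nil {
			continue // transient errors are retried on the next pass
		}
		ip := getIP()
		for _, svc := range services {
			if svc.Type == "LoadBalancer" && len(svc.IngressIPs) == 0 {
				_ = c.PatchIngressIP(svc.Name, ip)
			}
		}
	}
}
```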

## Handling multiple clusters

Multiple clusters are currently not supported due to our inability to specify the ServiceCIDR.
This causes conflicting routes that have the same destination CIDR.

Review comment (on the `logtostderr` override in the tunnel command above): Is this a debugging left-over? It seems odd to me to ignore the flag value for just this one command.

Reply: yeah, I was unsure about logging. If I leave this out, there is not much output. But using `fmt.Print*` felt "hacky" coming from the Java background - System.out.println is a no-no as that cannot be controlled by any form of loglevel setting.

Reply: removed