
kubectl & dashboard under high load: Unable to connect to the server: net/http: TLS handshake timeout #2946

Closed
leapingbytes opened this issue Jun 28, 2018 · 6 comments
Labels
  • area/guest-vm: General configuration issues with the minikube guest VM
  • co/hyperkit: Hyperkit related issues
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • os/macos
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
  • triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@leapingbytes

leapingbytes commented Jun 28, 2018

BUG REPORT

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.28.0

  • OS (e.g. from /etc/os-release): Mac OS 10.13.5 (17F77)
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyperkit
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.28.0.iso
  • Install tools:
  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo ""
echo "OS:"
cat /etc/os-release
echo ""
echo "VM driver:"
grep DriverName ~/.minikube/machines/minikube/config.json
echo ""
echo "ISO version:"
grep -i ISO ~/.minikube/machines/minikube/config.json

What happened:
I have 18 services in my stack, pretty much all of them tomcat:alpine plus my WAR. Everything works without any problems when I run 12 of them, but if I try to run all 18, kubectl stops responding after a very short while, giving Unable to connect to the server: net/http: TLS handshake timeout. minikube dashboard does not work either; I get:

minikube dashboard
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get https://192.168.64.16:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard: net/http: TLS handshake timeout

and the hyperkit process's CPU consumption goes through the roof.
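
One way to watch this from the macOS host (generic macOS tooling; this assumes the VM process is simply named hyperkit):

top -pid $(pgrep hyperkit)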

My minikube cluster was created with:

minikube start --vm-driver hyperkit --cpus 8 --memory 8096  --disk-size 100g

and I run it on a MacBook Pro (Retina, 15-inch, Mid 2015) with a 2.8 GHz Intel Core i7 and 16 GB of 1600 MHz DDR3 RAM.

What you expected to happen:

The minikube cluster keeps going no matter how many pods I have deployed (within the limits of memory and CPU).

How to reproduce it (as minimally and precisely as possible):

I have not tried to minimize it yet, but I am pretty sure it does not really matter what kind of pods you try to deploy (in my case, all the pods are effectively doing nothing at the point when the problem occurs), so deploying about 20 generic pods (for example, tomcat:alpine with some generic WAR) will most likely do the trick; see the sketch below.
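
A minimal sketch of generating that kind of load (the deployment name tomcat-load is arbitrary):

kubectl create deployment tomcat-load --image=tomcat:alpine
kubectl scale deployment tomcat-load --replicas=20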

Output of minikube logs (if applicable):

Anything else we need to know:

This is extremely annoying. I know for a fact that I can run the same stack using docker-compose without any problems.

Any advice would be highly appreciated.

It goes without saying that I would be more than happy and willing to provide any additional information you may require. I really would like to see our app move to k8s, but to convince the boss, I need to have it running in minikube first.

@tstromberg tstromberg changed the title Kubernetes on Mac is stuck very often. Needs restart all the time kubectl & dashboard under high load: Unable to connect to the server: net/http: TLS handshake timeout Sep 19, 2018
@tstromberg
Contributor

It sounds like your Kubernetes environment is running out of resources, though I can't tell if it's CPU or memory from this description. Can you run the following for me?

minikube ssh "vmstat 5 12"
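
For reference, a quick key to that output (standard vmstat semantics):

# vmstat 5 12  -> one sample every 5 seconds, 12 samples total
# si / so      -> memory swapped in from / out to disk per second; sustained nonzero values mean the VM is thrashing
# us / sy / id -> CPU time in user / kernel / idle; id pinned near 0 means CPU saturation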

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. area/guest-vm General configuration issues with the minikube guest VM os/macos co/hyperkit Hyperkit related issues labels Sep 19, 2018
@clounie

clounie commented Dec 11, 2018

I had the same thing happen - and you can probably reproduce my example fairly easily.

I freshly installed Kubernetes on my Mac:

  • 10.14.1 Mojave
  • 2.2 GHz i7
  • 16GB RAM

Exact commands below:

brew update
brew install kubernetes-cli
brew cask install minikube
minikube start
brew install kubernetes-helm
helm init
helm install --name spinnaker-poc stable/spinnaker --timeout 600

My setup (I can make a separate issue if wanted, but I imagine the solution would be the same):
"DriverName": "virtualbox",
minikube-v0.31.0.iso

I used the Spinnaker Helm chart as you can see above. It installed correctly, but couldn't start all the pods successfully because of resource issues.

When running kubectl get pods with not enough resources, I got these stats:

(Defaults were 2GB RAM and 1 CPU, I believe)

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
27  0 797768  16200   6424 1115032  244  618  7213  3843 2511 5106 11 27 61  1  0
20  0 802880  18324   6588 1110792  397 1329  2678  1446 5441 7626 88 12  0  0  0
20  0 804360  19980   7468 1107740  553  802  2414  3562 5105 6717 88 12  0  0  0
28  0 810580  19604   7460 1112364  267 1389  2029  3672 6891 9853 84 16  0  0  0
23  0 815312  16620   7184 1116732  591 1456  3540  6662 7224 10515 82 18  0  0  0
16  0 822692  19928   7480 1113764  342 2837 13759  7193 7007 10888 76 24  0  0  0
20  0 824376  21804   8780 1115544  386  637  3003  3991 6553 9483 82 18  0  0  0
28  0 826348  20100   8928 1114300  327  725  5152   839 6675 9601 83 17  0  0  0
34  0 829092  19372   8872 1111928  230  716  3643  5280 6386 8642 83 17  0  0  0
22  0 825756  27668   8692 1137464  706 1382 10327  8640 7446 10900 77 22  0  0  0
10  0 824220  28900   8772 1142880  226    0  1058  1642 5637 7166 90 10  0  0  0
18  0 822948  19944   8816 1150080  430   58  1471   122 6128 8098 87 13  0  0  0

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
22  0 823224  17536   8496 1144560  244  614  7018  3771 2541 5088 14 27 58  1  0
21  0 824076  20040   8372 1136196  562  682  2720  6921 5371 6752 87 13  0  0  0
14  0 829432  20168   8500 1133416  157 1174  6546  1834 4903 6199 89 11  0  0  0
29  0 833136  20532   8468 1103852  404 1041  1715  2863 5098 6563 89 11  0  0  0
27  0 832956  21752   8636 1105564  214  122  1555   197 4691 5177 92  8  0  0  0
30  0 836980  25064   8796 1094836  645 1366  5540  2614 5490 7596 85 15  0  0  0
 9  4 834576  13052   9012 1088128 1846  945  6293  1899 5426 6646 87 13  0  0  0
21  0 835040  18704   8848 1077944  270  350  1600   366 4592 5256 92  8  0  0  0
28  0 852968  18828  10416 1080900  251 3794 16304 16261 5283 8185 80 19  0  0  0
31  0 869232  44800   8444 1109108  400 3711 27240 19173 5816 9348 76 24  0  0  0
33  0 874972  16896   8620 1127048  425 1518 24133 23670 5302 8755 80 20  0  0  0
36  0 883956  16412   8000 1103224  388 2043 17478 17853 5064 8372 77 23  0  0  0

Then I restarted minikube with 10 GB of RAM and 6 CPUs: I changed the settings in VirtualBox, then ran minikube start --cpus 6 --memory 10240.
Running the same stats while performing kubectl get pods resulted in:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
45  0      0 1162520  56012 2275484    0    0   253   277 1472 2164 39 22 38  0  0
 6  0      0 1156756  56036 2275296    0    0    24    60 8340 15557 39 39 22  0  0
 1  0      0 1141356  56068 2275456    0    0     6    88 6410 11848 26 27 48  0  0
 0  0      0 1140356  56076 2275460    0    0     0   138 11383 23207  3  6 91  0  0
 0  0      0 1140408  56116 2275752    0    0     0   136 10189 20499 10 10 80  0  0
 0  0      0 1137948  56180 2276220    0    0    13  2758 11121 22213  7 10 83  0  0
18  0      0 1141624  56212 2276036    0    0    26    84 10882 21412  7  7 86  0  0
 0  0      0 1139412  56212 2276040    0    0     0    26 11293 23750  3  6 92  0  0
 0  0      0 1137020  56236 2276140    0    0    18    65 10221 21604  3  5 91  0  0
11  0      0 1134060  56236 2276132    0    0     0    26 11110 22619  3  7 89  0  0
 1  0      0 1115972  56272 2276288    0    0     0   163 11448 23051  7  6 87  0  0
 0  0      0 1110500  56312 2276340    0    0    10    58 11494 23899  4  5 91  0  0

I had no issues once I gave it more resources, but perhaps there should be a warning about this on the releases page, which the Kubernetes docs link to.
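
Worth noting: the --cpus and --memory flags only take effect when the VM is first created, which is presumably why editing the VirtualBox settings by hand was needed here. The from-scratch route (this deletes the existing cluster and its state) would be:

minikube delete
minikube start --cpus 6 --memory 10240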

@tstromberg
Contributor

I can see loads of swapping going on. I'm curious if resolving #3012 would make this less problematic.

Regardless, we should be able to set up the apiserver in such a way that it still answers kubectl requests properly. I'm surprised it doesn't by default.
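
One possible mitigation along those lines, sketched under the assumption that minikube's --extra-config passes kubelet flags through cleanly, would be to reserve CPU and memory for system daemons so the apiserver is never fully starved (values are illustrative):

minikube start --extra-config=kubelet.system-reserved=cpu=500m,memory=500Mi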

@tstromberg tstromberg added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Jan 24, 2019
@tstromberg
Contributor

I think this may have been addressed by #3671, which shipped with minikube v0.34.1. Do you mind trying it and reporting back your results?

If it still fails, please include the output of minikube logs, as it now contains more information.

@tstromberg tstromberg added the triage/needs-information Indicates an issue needs more information in order to work on it. label Feb 20, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2019
@tstromberg
Contributor

I believe this issue was resolved in the v1.1.0 release. Please try upgrading to the latest release of minikube, and if the same issue occurs, please re-open this bug. Thank you for opening this bug report, and for your patience!
