Pin docker API version to support minikube #5
Conversation
Minikube currently uses Docker API version 1.23, and OpenWhisk uses a newer client. But OpenWhisk doesn't currently require anything that isn't provided by 1.23, so pinning the API version to 1.23 allows OpenWhisk to be used with minikube.
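For context, a minimal sketch of how the version mismatch can be observed and how a client can be pinned, using only the stock docker CLI; this just illustrates the general pinning idea and is not the mechanism the invoker uses (that change is in this PR's diff):

```sh
# Point the local docker CLI at the Docker engine inside the minikube VM,
# then print the API versions the client and server speak.
eval $(minikube docker-env)
docker version --format 'client API: {{.Client.APIVersion}}  server API: {{.Server.APIVersion}}'

# The stock docker CLI (and docker-py) honor DOCKER_API_VERSION as a way to
# force an older API version on a newer client -- the same idea this PR
# applies to the OpenWhisk invoker's Docker client.
export DOCKER_API_VERSION=1.23
docker version
```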
Thanks @tobias - LGTM
which pins the version using this constant:
@tobias Before accepting the PR (and having never used minikube myself): your requested change to pin the client seems reasonable; however, are you claiming that this is the only change needed to support minikube? If so, I would like to see documentation to that effect, as we currently state minikube is "not supported" due to Docker engine version dependencies. Also, if we go down this road, we will need integration test(s) to verify a minikube installation/deployment.
@animeshsingh apart from "LGTM", can you independently verify this is the only change to get a minikube deployment working?
Hey, sorry for the delay, I was off last week. This seems fine to me, but, as @mrutkows said, it would be great if we could get an update to the docs so that the note about Minikube not being supported is removed, and possibly a link to Minikube as a supported deployment option.
The last time I tried minikube, you still had to "minikube ssh" and set docker0 to promiscuous mode (ip link set docker0 promisc on) to get things working.
Seeing there may be more to this (based upon @bbrowning's comment), can we get an integration test going?
@mrutkows so it works with the "minikube ssh and ip link set docker0 promisc on" mod. We have created a developer journey targeting Bluemix Container Service and minikube, and are in the process of finalizing Travis for both.
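For anyone reproducing that mod, it amounts to enabling promiscuous mode on the docker0 bridge inside the minikube VM; a minimal sketch (note that the setting does not persist across a VM restart, so it has to be re-applied after each minikube start):

```sh
# Enable promiscuous mode on the docker0 bridge inside the minikube VM.
minikube ssh -- sudo ip link set docker0 promisc on

# Verify the flag took effect: PROMISC should appear in the interface flags.
minikube ssh -- ip link show docker0
```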
@animesh it is preferable that we include docs/code/Travis, etc. enablement for Minikube here within this Apache repo. I prefer not enabling something (and pinning an old CLI) without proving it works as part of our integration testing and making it accessible to the community. It should "work" here and be verifiable from this repo's source code through "manual" docs by others (or by automation means). If you are working on a solution that includes Travis, docs, etc., then feel free to submit a WIP pull request with all these changes so they can be accessible relative to the Apache community and this repo. It is clear others are interested in this work and this is the place we should develop it.
@tobias Can we make the invoker.yml optionally "pin" the version only for a Minikube deployment through some build target environment via Ansible, that is, create a "minikube" target? Obviously, pinning the client is specifically for Minikube and not required for general kube deployments; why should we hard-code this restriction?
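As a rough sketch of what such a target could look like from the command line (the inventory path and the docker_api_version variable below are hypothetical names for illustration, not existing pieces of this repo), a minikube deploy would pass the pin as an extra var while other targets leave it unset:

```sh
# Hypothetical: "environments/minikube" and "docker_api_version" are
# illustrative names, not existing files or variables in this repo.
ansible-playbook -i environments/minikube invoker.yml -e docker_api_version=1.23

# Other Kubernetes targets would simply omit the override and keep the
# client's default (newer) API version.
ansible-playbook -i environments/kube invoker.yml
```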
@mrutkows agreed. At this time the conclusion is minikube won't work with Travis - if we figure out a way, we will add it back here as well.
@mrutkows Yes, we could make this optional via Ansible. Another option is to wait for the next release of minikube - it should have a newer Docker in it (kubernetes/minikube#1542). Though I don't know when that release will be out.
@tobias I'm not saying "wait"; so perhaps I was being too harsh... all I want is that IF we want to add Minikube as a supported deployment, we do not simply "pin" the client version (with the rest of the work done elsewhere), but figure out how to submit a PR that works towards a side-by-side deployment, including a way to document this intent to others and how to build/config/install, as well as Travis integration to show it runs. I would never go to a project and say "add this library for X" because I need it for my work elsewhere... that is what this feels like. My hope is that we can somehow "pin" versions of other software so that it is not done for every possible deployment (only done for Minikube deployments in this case). Start with this approach and submit a PR with these things (a separate subdir with Minikube environments, a Minikube readme documenting the caveats, version issues, pre-reqs, etc.).
@animeshsingh thanks Animesh, we know Minikube is very popular and we want it to work here and be supportable by this community.
@mrutkows I only mentioned waiting since it seemed like wasted effort to add special-case ansible configuration to support something that may be a non-issue with the next minikube release. However, since minikube does require setting promiscuous mode as well, it's already a special case, so I agree that having ansible config for it makes sense. I can take a look at adding that, along with a section to the readme about usage. I'll do that as a new PR, and we can close this one. What is the issue with Travis? Is it that they don't support spinning up full VMs? If we can't test minikube via CI, is it still acceptable to add it to the README, but with caveats?
I just noticed IBM/OpenWhisk-on-Kubernetes#22 - @Tomcli - mind if I use that as a basis for the minikube readme here?
@tobias it is desirable (or rather imperative) to have Travis CI confirmation of any deployment (target) that we wish to claim support for. In this way, we can assure future changes made for other deployments do not break that support. It also serves as a means to prove (by running example) that it works, for developers who wish to replicate it. It is simply best practice.
@mrutkows I agree, CI for this is critical, and I'm well aware of the benefits of CI. But according to @animeshsingh's comment, minikube doesn't (yet) work on Travis. So the choice is to add support for minikube now (with caveats about it only being manually tested), or to wait until it is supported by Travis (or add another CI provider to the mix).
@tobias Sure. FYI, IBM/OpenWhisk-on-Kubernetes#22 is just a temporary solution for those who want to try it on Minikube right now. We still need to create a feasible solution in the future.
@tobias our primary target in the IBM/OpenWhisk repo is Bluemix Container Service with Kubernetes, and we are running CI against that. For the minikube work @Tomcli mentions, we will rely on the core repo and redirect users here. So what we have with minikube right now is a temporary way to get going, until formal support is added in core.
Sorry I missed the conversation here for a little bit. @tobias and @mrutkows, the issue @animeshsingh mentioned with running Minikube in Travis CI is that you cannot create a VM with either VirtualBox or KVM in Travis CI (regardless of whether you run Travis CI VMs or containers). I suspect it would take a major change to Travis CI VMs to support this. As a workaround (really more of a give-up-and-redirect), I am basing our Travis testing in IBM/OpenWhisk-on-Kubernetes off the work done in this repo.
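For anyone evaluating other CI providers for this, the underlying constraint is nested virtualization: minikube's VirtualBox and KVM drivers need hardware virtualization exposed to the CI worker. A quick, purely illustrative check on a candidate worker:

```sh
# Count CPU flags advertising hardware virtualization (Intel VT-x / AMD-V);
# a result of 0 means the VirtualBox/KVM minikube drivers cannot create a VM here.
egrep -c '(vmx|svm)' /proc/cpuinfo

# The KVM driver additionally needs the /dev/kvm device node to exist and be accessible.
ls -l /dev/kvm
```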
@mlangbehn Thanks for the update. If we need to support Jenkins (or another CI) we can discuss, but it is sad to see you doing your work in some other IBM repo, as this should be part of the Apache community project.
Please submit a new PR which would have a README, a separate directory structure, and working Travis or Jenkins (or close to working). It should also include a strategy to allow "pinning" (install/config) of other dependencies that are specific to Minikube to be selectively included/changed. AND please mark it "experimental".
I've signed the CLA as "Tobias Oliver Crawley".