Conversation
Signed-off-by: Sugu Sougoumarane <ssougou@gmail.com>
derekperkins
left a comment
This is an awesome step forward and will help people focus on Vitess when they're getting started and not have to mess with the cluster setup in the current docs.
Adding the ability to create multiple jobs is also very interesting. It'll be interesting to see how it gets used in practice. Will it make sense to just leave all those in the helm chart? Do you remove them over time? When do they get removed and how does that impact jobs that are left on the cluster? Everything I've done so far is more ad hoc, so I like the idea of having some record.
I wish that helm were smarter and allowed for more control over ordering, but it doesn't. Out of the scope of this PR, there are a lot of interesting workflows for Vitess that could benefit from ordered pipelines. Kubeflow (Google's way to run Tensorflow on k8s) uses Argo - https://github.com/argoproj/argo to manage those pipelines and workflows. It may be worth a conversation to see if it would make sense to use it natively with Vitess.
    host=$(minikube service vtgate-zone1 --format "{{.IP}}" | tail -n 1)
    port=$(minikube service vtgate-zone1 --format "{{.Port}}" | tail -n 1)

    mysql -h "$host" -P "$port" $*
Personal preference, but I've never really liked these proxy bash scripts. They add another mental layer of complexity when I think we could probably just print that at the end of the helm output.
mysql -h "$host" -P "$port"
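One way to print the connection command at the end of the helm output (a sketch only, not part of this PR; the file contents are an assumption) is a `templates/NOTES.txt` in the chart, which helm renders after every install/upgrade. Note that the minikube `--format` placeholders must be escaped so Helm's template engine emits them literally:

```
Vitess has been installed.

To connect with the mysql client, run:

  host=$(minikube service vtgate-zone1 --format "{{ "{{.IP}}" }}" | tail -n 1)
  port=$(minikube service vtgate-zone1 --format "{{ "{{.Port}}" }}" | tail -n 1)
  mysql -h "$host" -P "$port"
```

This keeps the chart self-documenting without adding a wrapper script to the example directory.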
In the example, I've been deleting older jobs. The assumption is that these yaml changes will go through a source code control system, which acts as a system of record. Keeping terminated jobs in kubernetes could end up with a rather large clutter of very old things that no one cares about any more. But both approaches are possible.
This new example will replace the existing kubernetes example. It also builds a full story starting from an unsharded database through a vertical split, and then a horizontal split. To run this on minikube, you need to request more resources than the default:

    minikube start --cpus=4 --memory=5000

The yaml files are named as follows: the first digit is the phase, the next two digits are the order of execution. So, executing:

    helm install/upgrade ... ../../helm/vitess <file>

will progress the example forward.

PS: I've moved _jobs.tpl to cron_jobs.tpl. I had to add a top-level jobs category to values.yaml.
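As a rough illustration of that change, a top-level jobs category in values.yaml might look something like the following. The job name and fields here are illustrative guesses, not the PR's actual schema:

```yaml
# Hypothetical shape of the new top-level "jobs" section in values.yaml.
# Each entry would render into a Kubernetes Job via cron_jobs.tpl; adding a
# new entry and running `helm upgrade` would launch the next step of the
# example, and removing old entries cleans up terminated jobs.
jobs:
  - name: copy-schema
    vitessTag: latest
    command: "CopySchemaShard commerce/0 customer/0"
```

Since the yaml lives in source control, the deleted entries remain visible in history even after the jobs themselves are removed from the cluster.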