/dibs Hope that it also makes those builds more stable. In the first iteration we can split the prow xbuild job into separate jobs, and after closing issues #2658 and #2189 we can even remove the user-broker-image job. But maybe after that we should consider whether building each image (e.g. service-catalog-image) for 5 different architectures (amd64, arm, arm64, ppc64le, s390x) is really necessary. What is the purpose of doing that? Any thoughts @jberkhahn?
/assign @mszostok Do we still have the #dibs in the docs somewhere? We should be using the kubernetes prow commands now.
@MHBauer tbh I just followed Jonathan's approach from this issue #2189 (comment) :D and about the /assign, I couldn't use it myself since I'm not an org member.
oh, yeah, I forgot that people have to be org members for assignment. Even so, that's the only thing, and it's helpful for us to use the tooling we have. It's easier to check, while browsing, what's been assigned and who to talk to if you want an update on progress or want to take over.
I don't know about arm or arm64, but IBM requires amd64, ppc64le & s390x.
Could you point me to where you are using that? But what about the user-broker-image, test-broker-image, service-catalog-image, and healthcheck-image? In production they are executed as Docker containers inside the Kubernetes cluster, and they are shipped as Docker images with the amd64 arch (check that job: https://travis-ci.org/kubernetes-sigs/service-catalog/jobs/540981158).

**Times**

When we split the prow xbuild job by image and each of those jobs is built against 5 different architectures, then each build will take around 25 minutes.

**Errors**

The bottom line is that the amd64 builds never failed, so leaving only those builds would also increase our build stability.
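For context, a per-image presubmit that still cross-builds every architecture might look roughly like this in prow config. This is a sketch only; the job name, builder image, and make target are assumptions, not the actual kubernetes/test-infra config:

```yaml
presubmits:
  kubernetes-sigs/service-catalog:
  - name: pull-service-catalog-xbuild-user-broker  # hypothetical job name
    always_run: true
    decorate: true
    spec:
      containers:
      - image: golang:1.12                 # assumed builder image
        command: ["make"]
        # assumed target; one job would still build amd64, arm, arm64,
        # ppc64le, and s390x sequentially, hence the ~25 minute runtime
        args: ["user-broker-image"]
```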
Discussed at the SIG meeting. The service-catalog binary is executed directly on the OS instead of via the default Docker image. For now, we want to keep support for all of the architectures.
**Description**

I've created a PR to split the xbuild job. Instead of having one pipeline that builds all the images, I split it by arch into one pipeline per architecture (amd64, arm, arm64, ppc64le, s390x); see the sketch after this comment.
pros:
cons:
I've tested that solution by executing those builds on our test-infra clusters; results:
Sum duration: 43m55s. Based on the pros, IMO it's worth having those additional 5 pipelines. What's more, they are split by arch, so it's a fixed number of pipelines regardless of the number of domains (brokers, apiservers, healthchecks, etc.).

**Outdated solution**

My first approach was to split the build by domain, into one pipeline per image:
pros:
cons:
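A minimal sketch of what one of the per-arch presubmits could look like — the job name, builder image, and ARCH variable are assumptions; for the real definitions see the merged PR below:

```yaml
presubmits:
  kubernetes-sigs/service-catalog:
  - name: pull-service-catalog-xbuild-amd64   # one entry per architecture
    always_run: true
    decorate: true
    spec:
      containers:
      - image: golang:1.12                    # assumed builder image
        command: ["make"]
        args: ["xbuild"]
        env:
        - name: ARCH      # assumed Makefile variable selecting the arch
          value: amd64
```

Repeating this entry for arm, arm64, ppc64le, and s390x keeps the pipeline count fixed at five, independent of how many broker or apiserver images are added later.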
Resolved by: kubernetes/test-infra#13855 (merged & tested).
wow, nice!
Currently our prow xbuild job takes nearly an hour to run. It would be nice if we could split this job up into different jobs that build the images for the different architectures and run separately. We should already have most of the cruft required for this in our Makefile, so it should mostly be a matter of writing the prow job yaml.