Conversation
That is needed to allow use of a local copy of the base image, per Avi
@@ -1,4 +1,4 @@
-FROM velox-adapters-deps:centos9
+FROM ghcr.io/facebookincubator/velox-dev:adapters
This is a direct reversion of the change in #64
@@ -1,13 +0,0 @@
-FROM ghcr.io/facebookincubator/velox-dev:adapters
This is a direct reversion of the change in #64
@@ -1,10 +1,4 @@
 services:
This is a direct reversion of the change in #64
echo "Building Velox dependencies/run-time container image..."

pushd ../../../velox
docker compose -f docker-compose.yml --progress plain build adapters-cpp
This uses Presto's own Docker recipe to build a local version of the base image. It uses our modified CUDA install script, so the resulting local image includes nvjitlink, which the upstream equivalent does not.
We also just use docker compose build instead of up/down.
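A minimal dry-run sketch of the point (the compose file path and service name are taken from the diff above; the `echo` stands in for actually invoking Docker, which may not be available here):

```shell
#!/bin/sh
# Previously the image was produced via `docker compose up` followed by
# `docker compose down`; `build` creates the image without ever starting
# a container. We only echo the command here rather than running it.
compose_cmd="docker compose -f docker-compose.yml --progress plain build adapters-cpp"
echo "would run: ${compose_cmd}"
```

Using `build` directly also avoids leaving a stopped container behind if the script exits early.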
../../scripts/validate_directories_exist.sh "../../../velox"

# Compose docker build command options (default: do not force pull; use local images if present)
DOCKER_BUILD_OPTS=()
This is a direct reversion apart from this line. We still need to remove the --pull option which would otherwise force this build to pull the upstream base image instead of using the local image.
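As a sketch of why the empty options array matters (the array name comes from the diff above; how it is spliced into the final docker invocation is an assumption, and the `echo` is a stand-in for running Docker):

```shell
#!/bin/bash
# Options passed through to the Docker build. Leaving this empty lets Docker
# reuse a locally built base image if one exists; adding --pull here would
# force a fresh pull of the upstream image, defeating the local build.
DOCKER_BUILD_OPTS=()
# DOCKER_BUILD_OPTS=(--pull)   # the behavior this PR removes

# Hypothetical wiring: splice the options into the compose build command.
build_cmd=(docker compose -f docker-compose.yml build "${DOCKER_BUILD_OPTS[@]}" adapters-cpp)
echo "would run: ${build_cmd[*]}"
```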
source ../../scripts/fetch_docker_image_from_s3.sh

-IMAGE_NAME="velox-adapters-deps:centos9"
+IMAGE_NAME="ghcr.io/facebookincubator/velox-dev:adapters"
We just change the image name to that of the original base image instead of the intermediate image added by the now-reverted Dockerfile changes.
x86 and ARM versions of this image are built and ready to copy to S3.
mattgara left a comment:
LGTM; just had a question about one line and whether it will cause a build failure.
ENV VELOX_DEPENDENCY_SOURCE=SYSTEM \
    GTest_SOURCE=BUNDLED \
    cudf_SOURCE=BUNDLED \
    simdjson_SOURCE=BUNDLED \
I'm wondering whether this change was necessary. Does a recent build still work without it?
It worked fine for me without it, both on x86 and ARM.
ARCH=$(uname -m)
BUCKET_SUBDIR="velox-docker-images"
IMAGE_FILE="velox_adapters_deps_image_centos9_${ARCH}.tar.gz"
There is no need to change the filenames here. The existing name is still semantically valid, even though the contents are now an entirely different image than before.
The changes that landed in #64 are more complex than required. This PR simplifies them and brings the mechanism more in line with the one for Presto that landed last week.
Instead of creating an intermediate Docker image, we simply leverage the existing mechanism used to generate the upstream base image (ghcr.io/facebookincubator/velox-dev:adapters) to build that image locally, using the modified Velox build scripts, which now install the nvjitlink package.
This mirrors what we expect Facebook to do themselves once our cuDF 25.10 PR lands in upstream. Once that is done, we will then have the option to revert to using the upstream image.
The only other change required is to remove the --pull option from the Velox Docker build, which would otherwise force the build to pull the upstream version of that image instead of using our modified local version.
The build and fetch scripts are adapted accordingly, and we will generate new x86 and ARM versions of the modified image to put on S3 for the fetch script to use.
If possible, this PR should be reviewed in combination with the #64 changes that already landed, since this one reverts part of them; that may be tricky, though, as I landed #48 in between.
Updated again to resolve a conflict as #52 was also landed in between!