
Conversation


@tmckayus commented Jun 14, 2018

In order to use the spark-on-k8s-operator on a platform where a user
is assigned based on a security context (OpenShift), the /etc/passwd
file needs to be modified in an entrypoint, as it is in the standard
Apache Spark 2.3 image, to support an assigned user. To avoid overriding
that entrypoint, the command in spark-operator.yaml should be left at its
default, with /usr/bin/spark-operator passed as the first argument in args.
The /etc/passwd file is left unchanged for a root user, since root is
already in the file.
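
For context, the relevant preamble in the standard Apache Spark entrypoint looks roughly like the following (a sketch based on the upstream entrypoint.sh; the exact placeholder passwd entry may differ between Spark versions):

myuid=$(id -u)
mygid=$(id -g)
uidentry=$(getent passwd $myuid)

# If the assigned UID has no passwd entry (as under an OpenShift security
# context), append one so tools that look up the current user keep working.
# Root already has an entry, so nothing changes when running as root.
if [ -z "$uidentry" ] ; then
    if [ -w /etc/passwd ] ; then
        echo "$myuid:x:$myuid:$mygid:anonymous uid:$SPARK_HOME:/bin/false" >> /etc/passwd
    else
        echo "Container ENTRYPOINT failed to add passwd entry for anonymous UID"
    fi
fi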

@tmckayus (Author) commented Jun 14, 2018

@liyinan926 please take a look. I've been playing with the spark-on-k8s-operator on OpenShift, and I think this is a transparent change that makes it work there.

@erikerlandson

I'm wondering if there is any clever way we can use the upstream Spark entrypoint, since this image is layered on top of those images.

@tmckayus (Author)

@erikerlandson that's a good point.

It has a FROM "gcr.io/ynli-k8s/spark:v2.3.0", which would include the Spark entrypoint. The only trouble I see is that the Spark entrypoint rejects commands it doesn't recognize. We just need the preamble and then to run the operator.
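
A minimal sketch of the image-specific entrypoint that approach implies (assuming the passwd preamble above is copied in verbatim and the upstream command dispatch is dropped):

#!/bin/bash
# ... passwd preamble copied from the upstream Spark entrypoint (see above) ...

# Unlike the upstream entrypoint, don't reject unrecognized commands
# (driver/executor); just exec whatever the pod spec passes, e.g.
# /usr/bin/spark-operator plus its flags from spark-operator.yaml.
exec "$@"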

@tmckayus (Author)

Instead of hardwiring the command in /entrypoint in the image,
just pass /usr/bin/spark-operator as the initial arg to retain
full flexibility from the yaml file.
@@ -0,0 +1,37 @@
#!/bin/bash
#
Collaborator


nit: missing Copyright 2018 Google LLC in the license header.

Author


Sorry for the delay. This snippet was cut directly from the standard entrypoint.sh in the Apache Spark distribution; the only difference is an exec after the copied portion. There is no copyright header in the original, so can we / should we really add one? https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh

Collaborator


We can ignore this for now.

RUN go generate && go build -o /usr/bin/spark-operator


FROM gcr.io/ynli-k8s/spark:v2.3.0
@liyinan926 (Collaborator) commented Jul 17, 2018


This will also require an image built using the Dockerfile in the latest spark/master to pick up apache/spark#21572, right?


Not the OP, but to get at least the examples working on the newest release of OpenShift (besides this pull request), the entrypoint also needs the following change from spark/master (the newest version already includes it):

set +e
# getent exits non-zero when the UID has no passwd entry, so relax
# errexit around the lookup instead of aborting the entrypoint.
uidentry=$(getent passwd $myuid)
set -e

This, plus granting write access to the Spark working directory, should make it work.
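
For the write-access part, one way (a sketch, assuming the image keeps Spark's usual /opt/spark/work-dir working directory) is to make that directory group-writable when the image is built:

chmod g+w /opt/spark/work-dir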

@skonto (Contributor) commented Nov 28, 2018

We have similar code running fine on OpenShift; we will contribute it here and push this PR forward, as it is useful. @yuchaoran2011 ^^

@liyinan926 (Collaborator)

This is covered by #343.

@liyinan926 closed this Dec 18, 2018