[SPARK-23680] Fix entrypoint.sh to properly support Arbitrary UIDs
## What changes were proposed in this pull request?

As described in SPARK-23680, entrypoint.sh exits with an error code because a command in its pipeline (getent) returns non-zero. That non-zero result is expected in OpenShift environments, where containers run with arbitrary UIDs that have no passwd entry.

## How was this patch tested?

This patch was manually tested by using the docker-image-tool.sh script to generate a Spark driver image and running an example against an OpenShift cluster.

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Ricardo Martinelli de Oliveira <rmartine@rmartine.gru.redhat.com>

Closes #20822 from rimolive/rmartine-spark-23680.
parent 88d8de9260
commit 9945b0227e
```diff
@@ -22,7 +22,10 @@ set -ex
 # Check whether there is a passwd entry for the container UID
 myuid=$(id -u)
 mygid=$(id -g)
+# turn off -e for getent because it will return error code in anonymous uid case
+set +e
 uidentry=$(getent passwd $myuid)
+set -e
 
 # If there is no passwd entry for the container UID, attempt to create one
 if [ -z "$uidentry" ] ; then
```
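The pattern in the hunk above can be sketched as a standalone script: under `set -e`, a non-zero exit from `getent` would abort the whole entrypoint, so errexit is disabled for just that one call and restored immediately after. This is a minimal sketch, not the full entrypoint.sh; the echo messages are illustrative only.

```shell
#!/usr/bin/env bash
set -e

myuid=$(id -u)
mygid=$(id -g)

# getent exits non-zero when the UID has no passwd entry (the
# arbitrary-UID case on OpenShift); toggle errexit off around it
# so the script does not abort.
set +e
uidentry=$(getent passwd "$myuid")
set -e

if [ -z "$uidentry" ]; then
  echo "no passwd entry for UID $myuid (arbitrary-UID container)"
else
  echo "found passwd entry for UID $myuid"
fi
```

An equivalent one-liner some scripts use instead is `uidentry=$(getent passwd "$myuid" || true)`, which suppresses the failing exit status without toggling the shell option.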