[SPARK-29436][K8S] Support selecting the scheduler for executor pods by scheduler name in multi-scheduler Kubernetes clusters

### What changes were proposed in this pull request?

Add support for selecting which Kubernetes scheduler places executor pods, by letting users specify a scheduler name, for clusters that run multiple schedulers.

### Why are the changes needed?

Without this option, executor pods are always placed by the default scheduler, so Spark cannot target an alternative scheduler on Kubernetes clusters that run more than one.

### Does this PR introduce any user-facing change?

Yes, users can set the scheduler name for executor pods through a new configuration property, `spark.kubernetes.executor.scheduler.name`.
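For example, the new property can be passed like any other Spark configuration. In the sketch below, the master URL, container image, and the `volcano` scheduler name are hypothetical placeholders, not part of this PR:

```shell
# Sketch only: the cluster endpoint, image, and "volcano" scheduler name
# are placeholders; substitute values for your own cluster.
spark-submit \
  --master k8s://https://k8s.example.com:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=my-spark-image \
  --conf spark.kubernetes.executor.scheduler.name=volcano \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar
```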

### How was this patch tested?

Manually tested with Spark on a Kubernetes cluster.
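One way a change like this can be checked by hand (the pod name below is hypothetical; this command is not from the PR) is to inspect the `schedulerName` that ended up on a running executor pod, since it is a standard Kubernetes pod spec field:

```shell
# Hypothetical pod name; prints the scheduler assigned to the executor pod.
kubectl get pod spark-pi-exec-1 -o jsonpath='{.spec.schedulerName}'
```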

Closes #26088 from merrily01/SPARK-29436.

Authored-by: maruilei <maruilei@jd.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
maruilei authored 2019-10-17 07:24:13 -07:00, committed by Sean Owen
parent dc0bc7a6eb
commit f800fa3831
2 changed files with 9 additions and 0 deletions


@@ -142,6 +142,12 @@ private[spark] object Config extends Logging {
       .stringConf
       .createOptional
 
+  val KUBERNETES_EXECUTOR_SCHEDULER_NAME =
+    ConfigBuilder("spark.kubernetes.executor.scheduler.name")
+      .doc("Specify the scheduler name for each executor pod")
+      .stringConf
+      .createOptional
+
   val KUBERNETES_EXECUTOR_REQUEST_CORES =
     ConfigBuilder("spark.kubernetes.executor.request.cores")
       .doc("Specify the cpu request for each executor pod")


@@ -216,6 +216,9 @@ private[spark] class BasicExecutorFeatureStep(
         .endSpec()
       .build()
 
+    kubernetesConf.get(KUBERNETES_EXECUTOR_SCHEDULER_NAME)
+      .foreach(executorPod.getSpec.setSchedulerName)
+
     SparkPod(executorPod, containerWithLimitCores)
   }
 }
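The `Option.foreach` idiom in the second hunk runs the setter only when the configuration is present, leaving the pod's default scheduler untouched otherwise. A minimal self-contained sketch of that pattern (the `PodSpec` class below is a hypothetical stand-in for the fabric8 model class, not Spark code):

```scala
// Hypothetical stand-in for the fabric8 PodSpec model class.
class PodSpec {
  private var schedulerName: String = "default-scheduler"
  def setSchedulerName(name: String): Unit = schedulerName = name
  def getSchedulerName: String = schedulerName
}

object SchedulerNameDemo extends App {
  val spec = new PodSpec

  // No scheduler configured: foreach is a no-op, the default stays.
  val unset: Option[String] = None
  unset.foreach(spec.setSchedulerName)
  println(spec.getSchedulerName) // default-scheduler

  // Scheduler configured: the setter runs exactly once.
  val configured: Option[String] = Some("volcano")
  configured.foreach(spec.setSchedulerName)
  println(spec.getSchedulerName) // volcano
}
```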