[SPARK-30942] Fix the warning for requiring cores to be limiting resources

### What changes were proposed in this pull request?

Fix the warning about the limiting resource when the number of executor cores is unknown. The issue is that several places in the Spark code use cores/task cpus to calculate slots, and until the entire stage-level scheduling feature is in, we have to rely on cores being the limiting resource.

Change the check to only warn when custom resources are specified.
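The condition being changed can be sketched as follows. This is a simplified, illustrative model (the names `shouldWarn`, `coresKnown`, and `customExecResources` are hypothetical, not Spark's actual API): before the patch the warning keyed off dynamic allocation being enabled, so local-mode users with no custom resources saw a spurious warning; after the patch it fires only when cores are not a reliable limit and custom executor resources were actually requested.

```scala
// Illustrative sketch of the warning condition, not Spark's real code.
object SlotCheckSketch {
  // coresKnown: whether executor cores can be used to compute slots.
  // customExecResources: custom executor resource requests, e.g. "gpu" -> 2.
  def shouldWarn(coresKnown: Boolean, customExecResources: Map[String, Int]): Boolean = {
    // Old behavior (buggy): warn whenever dynamic allocation was on.
    // New behavior: warn only when cores are unknown AND custom resources exist.
    !coresKnown && customExecResources.nonEmpty
  }

  def main(args: Array[String]): Unit = {
    // Unknown cores, no custom resources: silent (this was the spurious case).
    assert(!shouldWarn(coresKnown = false, Map.empty))
    // Unknown cores plus a gpu request: warn, since gpus may limit slots instead.
    assert(shouldWarn(coresKnown = false, Map("gpu" -> 2)))
    // Cores known: the stricter exception path applies instead of this warning.
    assert(!shouldWarn(coresKnown = true, Map("gpu" -> 2)))
    println("ok")
  }
}
```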

### Why are the changes needed?

Fix the check so that the warning is emitted only when it actually applies.

### Does this PR introduce any user-facing change?

The warning text changes, and the warning is now printed only when custom resources are specified.

### How was this patch tested?

Manually tested spark-shell in standalone, YARN, and local modes.
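For reference, an invocation along these lines (the resource amounts and discovery script path are illustrative, not from this PR) is the kind of setup where a custom resource rather than cores limits the slots per executor, so the warning should still fire after this change:

```shell
# Hypothetical example: request a custom "gpu" executor resource so that
# cores may not be the limiting resource. The discovery script path is a
# placeholder for a script that reports the addresses of available GPUs.
./bin/spark-shell \
  --conf spark.executor.resource.gpu.amount=8 \
  --conf spark.executor.resource.gpu.discoveryScript=/path/to/getGpus.sh \
  --conf spark.task.resource.gpu.amount=1
```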

Closes #27686 from tgravescs/SPARK-30942.

Authored-by: Thomas Graves <tgraves@nvidia.com>
Signed-off-by: Thomas Graves <tgraves@apache.org>
Committed by Thomas Graves, 2020-02-25 10:55:56 -06:00
parent ffc0935e64
commit c46c067f39
2 changed files with 4 additions and 5 deletions

```diff
@@ -2798,7 +2798,7 @@ object SparkContext extends Logging {
         defaultProf.maxTasksPerExecutor(sc.conf) < cpuSlots) {
         throw new IllegalArgumentException("The number of slots on an executor has to be " +
           "limited by the number of cores, otherwise you waste resources and " +
-          "dynamic allocation doesn't work properly. Your configuration has " +
+          "some scheduling doesn't work properly. Your configuration has " +
           s"core/task cpu slots = ${cpuSlots} and " +
           s"${limitingResource} = " +
           s"${defaultProf.maxTasksPerExecutor(sc.conf)}. Please adjust your configuration " +
```

```diff
@@ -168,7 +168,7 @@ class ResourceProfile(
           // limiting resource because the scheduler code uses that for slots
           throw new IllegalArgumentException("The number of slots on an executor has to be " +
             "limited by the number of cores, otherwise you waste resources and " +
-            "dynamic allocation doesn't work properly. Your configuration has " +
+            "some scheduling doesn't work properly. Your configuration has " +
             s"core/task cpu slots = ${taskLimit} and " +
             s"${execReq.resourceName} = ${numTasks}. " +
             "Please adjust your configuration so that all resources require same number " +
@@ -183,12 +183,11 @@ class ResourceProfile(
             "no corresponding task resource request was specified.")
         }
       }
-      if(!shouldCheckExecCores && Utils.isDynamicAllocationEnabled(sparkConf)) {
+      if(!shouldCheckExecCores && execResourceToCheck.nonEmpty) {
         // if we can't rely on the executor cores config throw a warning for user
         logWarning("Please ensure that the number of slots available on your " +
           "executors is limited by the number of cores to task cpus and not another " +
-          "custom resource. If cores is not the limiting resource then dynamic " +
-          "allocation will not work properly!")
+          "custom resource.")
       }
       if (taskResourcesToCheck.nonEmpty) {
         throw new SparkException("No executor resource configs were not specified for the " +
```