### What changes were proposed in this pull request?

Rename `spark.worker.decommission.enabled` to `spark.decommission.enabled` and move it from `org.apache.spark.internal.config.Worker` to `org.apache.spark.internal.config.package`.

### Why are the changes needed?

Decommission is already supported in Standalone and k8s, and may be supported in YARN (https://github.com/apache/spark/pull/27636) in the future. Therefore, the switch configuration should sit at the top level of the hierarchy rather than belong to Standalone's Worker. In other words, it should be independent of the cluster managers.

### Does this PR introduce _any_ user-facing change?

No, as the decommission feature hasn't been released yet.

### How was this patch tested?

Pass existing tests.

Closes #29466 from Ngone51/fix-decom-conf.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
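Below is a minimal sketch of how the moved definition might look in `org.apache.spark.internal.config.package`, following Spark's `ConfigBuilder` pattern; the doc string, default value, and version number here are assumptions, not the exact source of this PR.

```scala
// Hypothetical sketch of the relocated config in
// core/src/main/scala/org/apache/spark/internal/config/package.scala
package org.apache.spark.internal

package object config {

  // Top-level decommission switch, no longer scoped under the
  // standalone Worker's config namespace.
  private[spark] val DECOMMISSION_ENABLED =
    ConfigBuilder("spark.decommission.enabled")
      .doc("Whether to enable decommissioning, independent of the cluster manager.")
      .version("3.1.0")          // assumed target version
      .booleanConf
      .createWithDefault(false)  // assumed default
}
```

Call sites would then read the flag via `conf.get(config.DECOMMISSION_ENABLED)` instead of the Worker-scoped constant, which is what makes the setting usable by any cluster manager.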