b517f991fe
### What changes were proposed in this pull request?

Remove automatic resource coordination support from Standalone.

### Why are the changes needed?

Resource coordination was designed mainly for the scenario where multiple Workers are launched on the same host. That scenario no longer really exists in today's Spark: Spark can now start multiple executors in a single Worker, whereas it allowed only one executor per Worker in the very beginning. Running multiple Workers on the same host therefore no longer benefits users (see the configuration sketch below), so it is not worth keeping an over-complicated implementation with potentially high maintenance cost for such a scenario.

### Does this PR introduce any user-facing change?

No, this is a Spark 3.0 feature.

### How was this patch tested?

Pass Jenkins.
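As a rough illustration of the multiple-executors-per-Worker behavior the message refers to (the master URL, resource sizes, and application name here are hypothetical, not taken from this PR), a Standalone application can obtain several executors from a single Worker simply by capping `spark.executor.cores`:

```bash
# Minimal sketch, assuming one Standalone Worker offering 8 cores (hypothetical).
# Capping each executor at 2 cores / 2g lets that single Worker launch up to
# 4 executors for this application, which is why starting multiple Workers on
# the same host no longer adds anything.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=2g \
  --total-executor-cores 8 \
  --class com.example.MyApp \
  my-app.jar
```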
Closes #27722 from Ngone51/abandon_coordination.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Xingbo Jiang <xingbo.jiang@databricks.com>