File rename
parent 61c4762d45
commit 22b982d2bc
@@ -98,7 +98,7 @@
 <ul class="dropdown-menu">
 <li><a href="configuration.html">Configuration</a></li>
 <li><a href="tuning.html">Tuning Guide</a></li>
-<li><a href="cdh-hdp.html">Running with CDH/HDP</a></li>
+<li><a href="hadoop-third-party-distributions.html">Running with CDH/HDP</a></li>
 <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
 <li><a href="building-with-maven.html">Building Spark with Maven</a></li>
 <li><a href="contributing-to-spark.html">Contributing to Spark</a></li>
@@ -54,9 +54,7 @@ Spark can run in a variety of deployment modes:
 cores dedicated to Spark on each node.
 * Run Spark alongside Hadoop using a cluster resource manager, such as YARN or Mesos.
 
-These options are identical for those using CDH and HDP. Note that if you have a YARN cluster,
-but still prefer to run Spark on a dedicated set of nodes rather than scheduling through YARN,
-use `mr1` versions of HADOOP_HOME when compiling.
+These options are identical for those using CDH and HDP.
 
 # Inheriting Cluster Configuration
 If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that
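The `# Inheriting Cluster Configuration` section touched by the second hunk concerns Spark picking up the cluster's Hadoop settings. A minimal sketch of the usual wiring, assuming the conventional `HADOOP_CONF_DIR` environment variable (which Hadoop tooling reads) and a hypothetical `/etc/hadoop/conf` path for where your distribution keeps its configuration files:

```shell
# Sketch (path is an assumption): point Spark's environment at the directory
# holding the cluster's core-site.xml and hdfs-site.xml so HDFS reads/writes
# use the cluster's settings rather than defaults.
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "Hadoop config dir: $HADOOP_CONF_DIR"
```

This would typically go in the environment of whatever launches Spark (e.g. a shell profile or an env file), so every Spark process inherits the same Hadoop configuration.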