---
layout: global
title: Running with Cloudera and Hortonworks
---

Spark can run against all versions of Cloudera's Distribution Including Apache Hadoop (CDH) and the Hortonworks Data Platform (HDP). There are a few things to keep in mind when using Spark with these distributions:

# Compile-time Hadoop Version

When compiling Spark, you'll need to specify the Hadoop version by defining the `hadoop.version` property. For certain versions, you will need to specify additional profiles. For more detail, see the guide on building with Maven:

    mvn -Dhadoop.version=1.0.4 -DskipTests clean package
    mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package

The tables below list the corresponding `hadoop.version` code for each CDH/HDP release. Note that some Hadoop releases are binary compatible across client versions. This means the pre-built Spark distribution may "just work" without you needing to compile. That said, we recommend compiling with the exact Hadoop version you are running to avoid any compatibility errors.

## CDH Releases

| Release               | Version code       |
| --------------------- | ------------------ |
| CDH 4.X.X (YARN mode) | 2.0.0-cdh4.X.X     |
| CDH 4.X.X             | 2.0.0-mr1-cdh4.X.X |
| CDH 3u6               | 0.20.2-cdh3u6      |
| CDH 3u5               | 0.20.2-cdh3u5      |
| CDH 3u4               | 0.20.2-cdh3u4      |

## HDP Releases

| Release | Version code |
| ------- | ------------ |
| HDP 1.3 | 1.2.0        |
| HDP 1.2 | 1.1.2        |
| HDP 1.1 | 1.0.3        |
| HDP 1.0 | 1.0.3        |
| HDP 2.0 | 2.2.0        |
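
For example, to build against a hypothetical CDH 4.2.0 release in MapReduce v1 mode (substitute the release you are actually running), the version code from the table above plugs directly into the `hadoop.version` property:

    mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 -DskipTests clean package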

In SBT, the equivalent can be achieved by setting the `SPARK_HADOOP_VERSION` environment variable:

    SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly

# Linking Applications to the Hadoop Version

In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that version of `hadoop-client` to any Spark applications you run, so they can also talk to the HDFS version on the cluster. If you are using CDH, you also need to add the Cloudera Maven repository. This looks as follows in SBT:

{% highlight scala %}
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "<version>"

// If using CDH, also add Cloudera repo
resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
{% endhighlight %}

Or in Maven:

{% highlight xml %}
<dependencies>
  ...
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>[version]</version>
  </dependency>
</dependencies>

<!-- If using CDH, also add the Cloudera repository -->
<repositories>
  ...
  <repository>
    <id>Cloudera repository</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>
{% endhighlight %}

# Where to Run Spark

As described in the Hardware Provisioning guide, Spark can run in a variety of deployment modes:

* Using a dedicated set of Spark nodes in your cluster. These nodes should be co-located with your Hadoop installation.
* Running on the same nodes as an existing Hadoop installation, with a fixed amount of memory and cores dedicated to Spark on each node.
* Running Spark alongside Hadoop using a cluster resource manager, such as YARN or Mesos (see the sketch after this list).

These options are identical for those using CDH and HDP.
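
To make the last option concrete, here is a minimal sketch of launching the bundled SparkPi example on YARN through `spark-submit`; the examples jar path and the `yarn-client` master URL are assumptions that vary with your build and deployment:

    # Assumes a built Spark distribution with the examples jar under lib/
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn-client \
      lib/spark-examples-*.jar \
      10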

# Inheriting Cluster Configuration

If you plan to read from and write to HDFS using Spark, there are two Hadoop configuration files that should be included on Spark's classpath:

* `hdfs-site.xml`, which provides default behaviors for the HDFS client.
* `core-site.xml`, which sets the default filesystem name.

The location of these configuration files varies across CDH and HDP versions, but a common location is inside of `/etc/hadoop/conf`. Some tools, such as Cloudera Manager, create configurations on-the-fly, but offer mechanisms to download copies of them.

To make these files visible to Spark, set `HADOOP_CONF_DIR` in `$SPARK_HOME/conf/spark-env.sh` to a location containing the configuration files.
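
For example, if your distribution keeps its configuration in the common `/etc/hadoop/conf` directory mentioned above, a single export line suffices:

    # In $SPARK_HOME/conf/spark-env.sh
    export HADOOP_CONF_DIR=/etc/hadoop/conf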