Doc fixes

Matei Zaharia 2012-09-25 23:59:04 -07:00
parent c5754bb939
commit d51d5e0582
3 changed files with 19 additions and 15 deletions

View file

@@ -6,7 +6,7 @@
 <head>
 <meta charset="utf-8">
 <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
-<title>{{ page.title }}</title>
+<title>{{ page.title }} - Spark 0.6.0 Documentation</title>
 <meta name="description" content="">
 <link rel="stylesheet" href="{{HOME_PATH}}css/bootstrap.min.css">

View file

@@ -55,10 +55,12 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
 # Where to Go from Here
 
 **Programming guides:**
+
 * [Spark Programming Guide]({{HOME_PATH}}scala-programming-guide.html): how to get started using Spark, and details on the Scala API
 * [Java Programming Guide]({{HOME_PATH}}java-programming-guide.html): using Spark from Java
 
 **Deployment guides:**
+
 * [Running Spark on Amazon EC2]({{HOME_PATH}}ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
 * [Standalone Deploy Mode]({{HOME_PATH}}spark-standalone.html): launch a standalone cluster quickly without Mesos
 * [Running Spark on Mesos]({{HOME_PATH}}running-on-mesos.html): deploy a private cluster using
@@ -66,12 +68,14 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
 * [Running Spark on YARN]({{HOME_PATH}}running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
 
 **Other documents:**
+
 * [Configuration]({{HOME_PATH}}configuration.html): customize Spark via its configuration system
 * [API docs (Scaladoc)]({{HOME_PATH}}api/core/index.html)
 * [Bagel]({{HOME_PATH}}bagel-programming-guide.html): an implementation of Google's Pregel on Spark
 * [Contributing to Spark](contributing-to-spark.html)
 
 **External resources:**
+
 * [Spark Homepage](http://www.spark-project.org)
 * [AMP Camp](http://ampcamp.berkeley.edu/): a two-day training camp at UC Berkeley that featured talks and exercises
   about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
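The four lines this file gains are blank lines, one after each bolded group header: they are invisible above, but they account for the hunk bodies growing from 10 to 12 and from 12 to 14 lines, and for the commit's 19 additions against 15 deletions. That is the substance of the fix: Markdown renderers such as Jekyll's generally treat a `*` line as a lazy continuation of the preceding paragraph unless a blank line separates them, so without these blanks the bullets render as run-on text instead of lists.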

View file

@@ -16,23 +16,23 @@ If you want to test out the YARN deployment mode, you can use the current spark
 The command to launch the YARN Client is as follows:
 
-SPARK_JAR=<SPARK_YAR_FILE> ./run spark.deploy.yarn.Client
-  --jar <YOUR_APP_JAR_FILE>
-  --class <APP_MAIN_CLASS>
-  --args <APP_MAIN_ARGUMENTS>
-  --num-workers <NUMBER_OF_WORKER_MACHINES>
-  --worker-memory <MEMORY_PER_WORKER>
-  --worker-cores <CORES_PER_WORKER>
+SPARK_JAR=<SPARK_YAR_FILE> ./run spark.deploy.yarn.Client \
+  --jar <YOUR_APP_JAR_FILE> \
+  --class <APP_MAIN_CLASS> \
+  --args <APP_MAIN_ARGUMENTS> \
+  --num-workers <NUMBER_OF_WORKER_MACHINES> \
+  --worker-memory <MEMORY_PER_WORKER> \
+  --worker-cores <CORES_PER_WORKER>
 For example:
 
-SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client
-  --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar
-  --class spark.examples.SparkPi
-  --args standalone
-  --num-workers 3
-  --worker-memory 2g
-  --worker-cores 2
+SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
+  --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar \
+  --class spark.examples.SparkPi \
+  --args standalone \
+  --num-workers 3 \
+  --worker-memory 2g \
+  --worker-cores 2
 The above starts a YARN Client program that periodically polls the Application Master for status updates and displays them in the console. The client will exit once your application has finished running.
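
A usage note on this change: the trailing backslashes are what make the block paste into a shell as one command. The `<SPARK_YAR_FILE>` placeholder is the Spark core assembly jar, as the concrete example shows. For reference, the example collapses to the following single line (a sketch, assuming the same 0.6.0-SNAPSHOT build artifacts named in the diff exist locally):

    # One-line equivalent of the multi-line example above; only the artifact
    # paths already shown in the diff are assumed.
    SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar --class spark.examples.SparkPi --args standalone --num-workers 3 --worker-memory 2g --worker-cores 2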