When deploying to AWS, additional configuration is required to read S3 files. EMR creates it automatically; there is no reason the Spark EC2 script shouldn't do the same.
This PR requires a corresponding PR to the mesos/spark-ec2 repository to be merged first, since that repository is cloned while setting up the machines: https://github.com/mesos/spark-ec2/pull/58
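For context, the configuration in question boils down to a pair of Hadoop properties that carry the AWS credentials for the S3-native filesystem. A minimal sketch of supplying them by hand from PySpark (the bucket, path, and key values below are placeholders; _jsc is PySpark's handle to the underlying Java context):

    # Minimal sketch (PySpark): hand AWS credentials to the s3n:// filesystem.
    # fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey are Hadoop's standard
    # S3-native keys; the key values and bucket path are placeholders.
    from pyspark import SparkContext

    sc = SparkContext(appName="s3-read-sketch")
    hadoop_conf = sc._jsc.hadoopConfiguration()
    hadoop_conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
    hadoop_conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

    # With the keys in place, S3 paths resolve like any other filesystem.
    lines = sc.textFile("s3n://some-bucket/some/path/*.log")
    print(lines.count())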
Author: Dan Osipov <daniil.osipov@shazam.com>
Closes #1120 from danosipov/s3_credentials and squashes the following commits:
758da8b [Dan Osipov] Modify documentation to include the new parameter
71fab14 [Dan Osipov] Use a parameter --copy-aws-credentials to enable S3 credential deployment
7e0da26 [Dan Osipov] Get AWS credentials out of boto connection instance
39bdf30 [Dan Osipov] Add S3 configuration parameters to the EC2 deploy scripts
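Putting the last three commits together, the flow is roughly: parse the new flag, reuse the credentials that boto has already resolved on the connection, and push them out to the cluster. A rough sketch, assuming boto 2 (whose connection objects expose aws_access_key_id and aws_secret_access_key as properties); deploy_credentials() is a hypothetical stand-in for the actual deployment step:

    # Sketch only: the flag name matches the PR; deploy_credentials() is
    # hypothetical, standing in for however spark_ec2 ships the values out.
    from optparse import OptionParser
    import boto.ec2

    def deploy_credentials(access_key, secret_key):
        # Hypothetical stand-in: in the real script the credentials end up
        # in the cluster's Hadoop/S3 configuration on the instances.
        print("Deploying credentials for access key %s..." % access_key[:4])

    parser = OptionParser()
    parser.add_option("--copy-aws-credentials", action="store_true",
                      default=False, dest="copy_aws_credentials")
    (opts, args) = parser.parse_args()

    conn = boto.ec2.connect_to_region("us-east-1")
    if opts.copy_aws_credentials:
        # boto already resolved credentials (env vars, boto config, ...) when
        # building the connection, so read them back off the connection
        # instance rather than re-reading the environment.
        deploy_credentials(conn.aws_access_key_id, conn.aws_secret_access_key)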
- Uses the "Name" tag to identify machines in a cluster (sketched below).
- Allows overriding the security group name so it doesn't need to coincide with the cluster name.
- Outputs the request IDs of up to 10 pending spot instance requests.
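As an illustration of the first point, a boto 2 sketch that locates a cluster's machines by the EC2 "Name" tag rather than by security group (the region and tag pattern are placeholders):

    # Sketch (boto 2): identify a cluster's machines via the "Name" tag.
    # Region and tag pattern are placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")
    reservations = conn.get_all_instances(filters={
        "tag:Name": "my-cluster-*",           # EC2 filters allow * wildcards
        "instance-state-name": "running",
    })
    for res in reservations:
        for inst in res.instances:
            print("%s %s" % (inst.id, inst.tags.get("Name")))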
Author: Vida Ha <vida@databricks.com>
Closes #1899 from vidaha/vida/ec2-reuse-security-group and squashes the following commits:
c80d5c3 [Vida Ha] wrap retries in a try catch block
b2989d5 [Vida Ha] SPARK-2333: spark_ec2 script should allow option for existing security group
Adds template variables that can be used throughout the docs: SPARK_VERSION, SCALA_VERSION, and MESOS_VERSION.
To use them, write e.g. {{site.SPARK_VERSION}}.
Also removes uses of {{HOME_PATH}}, which were being resolved to ""
by the templating system anyway.
- Reworked/expanded the nav bar to cover more of the docs site
- Removed the parts of the EC2 and Mesos docs that differentiate between running 0.5 and earlier versions
- Merged the subheadings from running-on-amazon-ec2.html that are still relevant (i.e., "Using a newer version of Spark" and "Accessing Data in S3") into ec2-scripts.html, and deleted running-on-amazon-ec2.html
- Added some TODO comments to a few docs
- Updated the blurb about AMP Camp
- Renamed programming-guide to spark-programming-guide
- Fixed typos/etc. in the Standalone Spark doc