Commit graph

132 commits

alexdebrie 794f3aec24 [SPARK-4745] Fix get_existing_cluster() function with multiple security groups
The current get_existing_cluster() function would only find an instance belonging to a cluster if the instance's security groups == cluster_name + "-master" (or "-slaves"). This fix allows for multiple security groups by checking whether the cluster_name + "-master" security group is in the list of groups for a particular instance.
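
A rough sketch of that membership check (boto 2; the `<cluster>-master` / `<cluster>-slaves` names follow the existing spark-ec2 convention), not the literal patch:

```
def get_existing_cluster(conn, cluster_name):
    master_nodes, slave_nodes = [], []
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            if instance.state in ('shutting-down', 'terminated'):
                continue
            # Check group *membership* instead of requiring an exact match
            # of the instance's whole security group list.
            group_names = [g.name for g in instance.groups]
            if cluster_name + "-master" in group_names:
                master_nodes.append(instance)
            elif cluster_name + "-slaves" in group_names:
                slave_nodes.append(instance)
    return master_nodes, slave_nodes
```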

Author: alexdebrie <alexdebrie1@gmail.com>

Closes #3596 from alexdebrie/master and squashes the following commits:

9d51232 [alexdebrie] Fix get_existing_cluster() function with multiple security groups
2014-12-04 14:14:39 -08:00
Nicholas Chammas 317e114e11 [SPARK-3398] [SPARK-4325] [EC2] Use EC2 status checks.
This PR re-introduces [0e648bc](0e648bc2be) from PR #2339, which somehow never made it into the codebase.

Additionally, it removes a now-unnecessary linear backoff on the SSH checks since we are blocking on EC2 status checks before testing SSH.
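
In boto 2 terms, blocking on the EC2 status checks amounts to something like the sketch below (helper name and polling interval are assumptions, not the merged code):

```
import time

def wait_for_status_checks(conn, instance_ids, interval=10):
    """Block until every instance passes both the system and instance status checks."""
    while True:
        statuses = conn.get_all_instance_status(instance_ids=instance_ids)
        if len(statuses) == len(instance_ids) and all(
                s.system_status.status == 'ok' and s.instance_status.status == 'ok'
                for s in statuses):
            return
        time.sleep(interval)
```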

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #3195 from nchammas/remove-ec2-ssh-backoff and squashes the following commits:

efb29e1 [Nicholas Chammas] Revert "Remove linear backoff."
ef3ca99 [Nicholas Chammas] reuse conn
adb4eaa [Nicholas Chammas] Remove linear backoff.
55caa24 [Nicholas Chammas] Check EC2 status checks before SSH.
2014-11-29 00:31:06 -08:00
Sean Owen 48223d8815 SPARK-1450 [EC2] Specify the default zone in the EC2 script help
This looks like a one-liner, so I took a shot at it. There can be no fixed default availability zone since the names are different per region. But the default behavior can be documented:

```
    if opts.zone == "":
        opts.zone = random.choice(conn.get_all_zones()).name
```

Author: Sean Owen <sowen@cloudera.com>

Closes #3454 from srowen/SPARK-1450 and squashes the following commits:

9193cf3 [Sean Owen] Document that --zone defaults to a single random zone
2014-11-28 17:43:38 -05:00
Xiangrui Meng 7eba0fbe45 [Spark-4509] Revert EC2 tag-based cluster membership patch
This PR reverts changes related to tag-based cluster membership. As discussed in SPARK-3332, we didn't figure out a safe strategy to use tags to determine cluster membership, because tagging is not atomic. The following changes are reverted:

SPARK-2333: 94053a7b76
SPARK-3213: 7faf755ae4
SPARK-3608: 78d4220fa0.

I tested launch, login, and destroy. It is easy to check the diff by comparing it to Josh's patch for branch-1.1:

https://github.com/apache/spark/pull/2225/files

JoshRosen I sent the PR to master. It might be easier for us to keep master and branch-1.2 the same at this time. We can always re-apply the patch once we figure out a stable solution.

Author: Xiangrui Meng <meng@databricks.com>

Closes #3453 from mengxr/SPARK-4509 and squashes the following commits:

f0b708b [Xiangrui Meng] revert 94053a7b76
4298ea5 [Xiangrui Meng] revert 7faf755ae4
35963a1 [Xiangrui Meng] Revert "SPARK-3608 Break if the instance tag naming succeeds"
2014-11-25 16:07:09 -08:00
Nicholas Chammas db45f5ad03 [SPARK-4137] [EC2] Don't change working dir on user
This issue was uncovered after [this discussion](https://issues.apache.org/jira/browse/SPARK-3398?focusedCommentId=14187471&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14187471).

Don't change the working directory on the user. This breaks relative paths the user may pass in, e.g., for the SSH identity file.

```
./ec2/spark-ec2 -i ../my.pem
```

This patch will preserve the user's current working directory and allow calls like the one above to work.
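
One way to achieve this is to resolve the script's own files relative to its location instead of calling `os.chdir()`; a minimal sketch, assuming only the `deploy.generic` path needed to become absolute:

```
import os.path

# Resolve paths relative to the script itself rather than chdir-ing,
# so user-supplied relative paths (e.g. -i ../my.pem) keep working.
SPARK_EC2_DIR = os.path.dirname(os.path.realpath(__file__))
deploy_generic_dir = os.path.join(SPARK_EC2_DIR, "deploy.generic")
```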

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #2988 from nchammas/spark-ec2-cwd and squashes the following commits:

f3850b5 [Nicholas Chammas] pep8 fix
fbc20c7 [Nicholas Chammas] revert to old commenting style
752f958 [Nicholas Chammas] specify deploy.generic path absolutely
bcdf6a5 [Nicholas Chammas] fix typo
77871a2 [Nicholas Chammas] add clarifying comment
ce071fc [Nicholas Chammas] don't change working dir
2014-11-05 20:45:35 -08:00
Nicholas Chammas 2aca97c7cf [EC2] Factor out Mesos spark-ec2 branch
We reference a specific branch in two places. This patch makes it one place.

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #3008 from nchammas/mesos-spark-ec2-branch and squashes the following commits:

10a6089 [Nicholas Chammas] factor out mess spark-ec2 branch
2014-11-03 09:02:35 -08:00
Josh Rosen f706823b71 Fetch from branch v4 in Spark EC2 script. 2014-10-08 22:25:15 -07:00
Nicholas Chammas 5912ca6714 [SPARK-3398] [EC2] Have spark-ec2 intelligently wait for specific cluster states
Instead of waiting arbitrary amounts of time for the cluster to reach a specific state, this patch lets `spark-ec2` explicitly wait for a cluster to reach a desired state.

This is useful in a couple of situations:
* The cluster is launching and you want to wait until SSH is available before installing stuff.
* The cluster is being terminated and you want to wait until all the instances are terminated before trying to delete security groups.

This patch removes the need for the `--wait` option and removes some of the time-based retry logic that was being used.
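
A rough illustration of polling for a desired EC2 state with boto 2 (names and the interval are illustrative, not the merged code):

```
import sys
import time

def wait_for_cluster_state(cluster_instances, desired_state, interval=5):
    """Poll EC2 until every instance reports the desired state, e.g. 'running' or 'terminated'."""
    while True:
        for instance in cluster_instances:
            instance.update()  # refresh instance.state from EC2
        if all(i.state == desired_state for i in cluster_instances):
            return
        sys.stdout.write(".")
        sys.stdout.flush()
        time.sleep(interval)
```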

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #2339 from nchammas/spark-ec2-wait-properly and squashes the following commits:

43a69f0 [Nicholas Chammas] short-circuit SSH check; linear backoff
9a9e035 [Nicholas Chammas] remove extraneous comment
26c5ed0 [Nicholas Chammas] replace print with write()
bb67c06 [Nicholas Chammas] deprecate wait option; remove dead code
7969265 [Nicholas Chammas] fix long line (PEP 8)
126e4cf [Nicholas Chammas] wait for specific cluster states
2014-10-07 16:54:32 -07:00
Nicholas Chammas aedd251c54 [EC2] Sort long, manually-inputted dictionaries
Similar to the work done in #2571, this PR just sorts the remaining manually-inputted dicts in the EC2 script so they are easier to maintain.

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #2578 from nchammas/ec2-dict-sort and squashes the following commits:

f55c692 [Nicholas Chammas] sort long dictionaries
2014-09-29 10:45:08 -07:00
Nicholas Chammas 1651cc117d [EC2] Cleanup Python parens and disk dict
Minor fixes:
* Remove unnecessary parens (Python style)
* Sort `disks_by_instance` dict and remove duplicate `t1.micro` key

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #2571 from nchammas/ec2-polish and squashes the following commits:

9d203d5 [Nicholas Chammas] paren and dict cleanup
2014-09-28 21:55:09 -07:00
Shivaram Venkataraman 50f8633653 [SPARK-3659] Set EC2 version to 1.1.0 and update version map
This brings the master branch in sync with branch-1.1

Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>

Closes #2510 from shivaram/spark-ec2-version and squashes the following commits:

bb0dd16 [Shivaram Venkataraman] Set EC2 version to 1.1.0 and update version map
2014-09-24 11:34:39 -07:00
Vida Ha 78d4220fa0 SPARK-3608 Break if the instance tag naming succeeds
Author: Vida Ha <vida@databricks.com>

Closes #2466 from vidaha/vida/spark-3608 and squashes the following commits:

9509776 [Vida Ha] Break if the instance tag naming succeeds
2014-09-20 01:24:49 -07:00
Dan Osipov b20171267d [SPARK-787] Add S3 configuration parameters to the EC2 deploy scripts
When deploying to AWS, additional configuration is required to read S3 files. EMR creates it automatically; there is no reason the Spark EC2 script shouldn't do the same.

This PR requires a corresponding PR to mesos/spark-ec2 to be merged, as that repo gets cloned in the process of setting up machines: https://github.com/mesos/spark-ec2/pull/58
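
A hedged sketch of how the credentials might be pulled from the boto connection and handed to the cluster templates when `--copy-aws-credentials` is set (the template variable names and helper are illustrative):

```
def add_aws_credentials(conn, template_vars, copy_aws_credentials):
    """Expose the connection's credentials to the cluster templates only when asked."""
    if copy_aws_credentials:
        # A boto 2 connection carries the credentials it was created with (assumed attributes).
        template_vars['aws_access_key_id'] = conn.aws_access_key_id
        template_vars['aws_secret_access_key'] = conn.aws_secret_access_key
    else:
        template_vars['aws_access_key_id'] = ""
        template_vars['aws_secret_access_key'] = ""
```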

Author: Dan Osipov <daniil.osipov@shazam.com>

Closes #1120 from danosipov/s3_credentials and squashes the following commits:

758da8b [Dan Osipov] Modify documentation to include the new parameter
71fab14 [Dan Osipov] Use a parameter --copy-aws-credentials to enable S3 credential deployment
7e0da26 [Dan Osipov] Get AWS credentials out of boto connection instance
39bdf30 [Dan Osipov] Add S3 configuration parameters to the EC2 deploy scripts
2014-09-16 13:40:16 -07:00
Reynold Xin d428ac6a22 [SPARK-3540] Add reboot-slaves functionality to the ec2 script
Tested on a real cluster.

Author: Reynold Xin <rxin@apache.org>

Closes #2404 from rxin/ec2-reboot-slaves and squashes the following commits:

00a2dbd [Reynold Xin] Allow rebooting slaves.
2014-09-15 21:09:58 -07:00
Nicholas Chammas 0c681dd6b2 [EC2] don't duplicate default values
This PR makes two minor changes to the `spark-ec2` script:

1. The script's input parameter default values are duplicated into the help text. This is unnecessary. This PR replaces the duplicated info with the appropriate `optparse` placeholder.
2. The default Spark version currently needs to be updated by hand during each release, which is an error-prone process. This PR places that default value in an easy-to-spot place.
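
`optparse` already supports a `%default` placeholder in help strings, which is presumably what the patch leans on; a small sketch (the version constant shown is illustrative):

```
from optparse import OptionParser

DEFAULT_SPARK_VERSION = "1.1.0"  # single, easy-to-spot definition

parser = OptionParser(usage="spark-ec2 [options] <action> <cluster_name>")
parser.add_option(
    "-v", "--spark-version", default=DEFAULT_SPARK_VERSION,
    help="Version of Spark to use: 'X.Y.Z' or a specific git hash (default: %default)")
(opts, args) = parser.parse_args()
```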

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #2290 from nchammas/spark-ec2-default-version and squashes the following commits:

0c6d3bb [Nicholas Chammas] don't duplicate default values
2014-09-06 14:39:29 -07:00
Nicholas Chammas 9422c4ee0e [SPARK-3361] Expand PEP 8 checks to include EC2 script and Python examples
This PR resolves [SPARK-3361](https://issues.apache.org/jira/browse/SPARK-3361) by expanding the PEP 8 checks to cover the remaining Python code base:
* The EC2 script
* All Python / PySpark examples

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #2297 from nchammas/pep8-rulez and squashes the following commits:

1e5ac9a [Nicholas Chammas] PEP 8 fixes to Python examples
c3dbeff [Nicholas Chammas] PEP 8 fixes to EC2 script
65ef6e8 [Nicholas Chammas] expand PEP 8 checks
2014-09-05 23:08:54 -07:00
Reynold Xin 1725a1a5d1 [SPARK-3391][EC2] Support attaching up to 8 EBS volumes.
Please merge this at the same time as https://github.com/mesos/spark-ec2/pull/66
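
For context, attaching several EBS volumes with boto 2 roughly means building a block device mapping like the sketch below (device letters, the helper, and the default volume type are assumptions, not the merged code):

```
from boto.ec2.blockdevicemapping import BlockDeviceMapping, EBSBlockDeviceType

def build_ebs_mapping(num_volumes, size_gb, volume_type="standard"):
    """Map up to 8 EBS volumes onto /dev/sds .. /dev/sdz (sketch only)."""
    assert 0 <= num_volumes <= 8
    mapping = BlockDeviceMapping()
    for i in range(num_volumes):
        device = EBSBlockDeviceType()
        device.size = size_gb
        device.volume_type = volume_type
        device.delete_on_termination = True
        mapping["/dev/sd" + chr(ord('s') + i)] = device
    return mapping
```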

Author: Reynold Xin <rxin@apache.org>

Closes #2260 from rxin/ec2-ebs-vol and squashes the following commits:

b9527d9 [Reynold Xin] Removed io1 ebs type.
bf9c403 [Reynold Xin] Made EBS volume type configurable.
c8e25ea [Reynold Xin] Support up to 8 EBS volumes.
adf4f2e [Reynold Xin] Revert git repo change.
020c542 [Reynold Xin] [SPARK-3391] Support attaching more than 1 EBS volumes.
2014-09-04 23:34:58 -07:00
Patrick Wendell c64cc435e2 SPARK-3358: [EC2] Switch back to HVM instances for m3.X.
During regression tests of Spark 1.1 we discovered performance issues with
PVM instances when running PySpark. This reverts a change added in #1156,
which changed the default type for m3 instances to PVM.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #2244 from pwendell/ec2-hvm and squashes the following commits:

1342d7e [Patrick Wendell] SPARK-3358: [EC2] Switch back to HVM instances for m3.X.
2014-09-02 21:30:09 -07:00
Daniel Darabos 44d3a6a752 [SPARK-3342] Add SSDs to block device mapping
On `m3.2xlarge` instances the 2x80GB SSDs are inaccessible if not added to the block device mapping when the instance is created. They work when added with this patch. I have not tested this with other instance types, and I do not know much about this script and EC2 deployment in general. Maybe this code needs to depend on the instance type.

The requirement for this mapping is described in the AWS docs at:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStore_UsageScenarios

"For M3 instances, you must specify instance store volumes in the block
device mapping for the instance. When you launch an M3 instance, we
ignore any instance store volumes specified in the block device mapping
for the AMI."
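
Following the quoted AWS requirement, the ephemeral mapping roughly looks like this in boto 2 (device names follow the usual /dev/sdb, /dev/sdc, ... convention and are not necessarily the patch's exact choice):

```
from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

def build_ephemeral_mapping(num_disks):
    """Explicitly map instance-store (ephemeral) volumes, as required for M3 instances."""
    mapping = BlockDeviceMapping()
    for i in range(num_disks):
        dev = BlockDeviceType()
        dev.ephemeral_name = 'ephemeral%d' % i
        mapping['/dev/sd' + chr(ord('b') + i)] = dev
    return mapping
```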

Author: Daniel Darabos <darabos.daniel@gmail.com>

Closes #2081 from darabos/patch-1 and squashes the following commits:

1ceb2c8 [Daniel Darabos] Use %d string interpolation instead of {}.
a1854d7 [Daniel Darabos] Only specify ephemeral device mapping for M3.
e0d9e37 [Daniel Darabos] Create ephemeral device mapping based on get_num_disks().
6b116a6 [Daniel Darabos] Add SSDs to block device mapping
2014-09-01 22:16:42 -07:00
Vida Ha 7faf755ae4 Spark-3213 Fixes issue with spark-ec2 not detecting slaves created with "Launch More like this"
... copy the spark_cluster_tag from spot instance requests over to the instances.
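
Conceptually the fix copies the cluster tag from each fulfilled spot request onto its instance; a boto 2 sketch (the helper name is illustrative, and this tag-based scheme was later reverted):

```
def copy_spot_request_tags(conn, tag_key='spark_cluster_tag'):
    """Copy cluster-identifying tags from fulfilled spot requests onto their instances."""
    for req in conn.get_all_spot_instance_requests():
        if req.instance_id and tag_key in req.tags:
            conn.create_tags([req.instance_id], {tag_key: req.tags[tag_key]})
```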

Author: Vida Ha <vida@databricks.com>

Closes #2163 from vidaha/vida/spark-3213 and squashes the following commits:

5070a70 [Vida Ha] Spark-3214 Fix issue with spark-ec2 not detecting slaves created with 'Launch More Like This' and using Spot Requests
2014-08-27 14:26:06 -07:00
Allan Douglas R. de Oliveira 5ac4093c9f SPARK-3259 - User data should be given to the master
Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #2162 from douglaz/user_data_master and squashes the following commits:

10d15f6 [Allan Douglas R. de Oliveira] Give user data also to the master
2014-08-27 12:43:22 -07:00
Allan Douglas R. de Oliveira cc40a709c0 SPARK-3180 - Better control of security groups
Adds the --authorized-address and --additional-security-group options as explained in the issue.
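
Roughly what the two options look like when wired into the script's `optparse` parser (the defaults and help text below are assumptions):

```
from optparse import OptionParser

parser = OptionParser()
parser.add_option(
    "--authorized-address", type="string", default="0.0.0.0/0",
    help="Address to authorize on created security groups (default: %default)")
parser.add_option(
    "--additional-security-group", type="string", default="",
    help="Additional existing security group to place the machines in")
```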

Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #2088 from douglaz/configurable_sg and squashes the following commits:

e3e48ca [Allan Douglas R. de Oliveira] Adds the option to specify the address authorized to access the SG and another option to provide an additional existing SG
2014-08-25 13:55:04 -07:00
Vida Ha 94053a7b76 SPARK-2333 - spark_ec2 script should allow option for existing security group
- Uses the name tag to identify machines in a cluster.
- Allows overriding the security group name so it doesn't need to coincide with the cluster name.
- Outputs the request IDs of up to 10 pending spot instance requests.

Author: Vida Ha <vida@databricks.com>

Closes #1899 from vidaha/vida/ec2-reuse-security-group and squashes the following commits:

c80d5c3 [Vida Ha] wrap retries in a try catch block
b2989d5 [Vida Ha] SPARK-2333: spark_ec2 script should allow option for existing security group
2014-08-19 13:35:05 -07:00
Allan Douglas R. de Oliveira a0bcbc159e SPARK-2246: Add user-data option to EC2 scripts
Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #1186 from douglaz/spark_ec2_user_data and squashes the following commits:

94a36f9 [Allan Douglas R. de Oliveira] Added user data option to EC2 script
2014-08-03 10:27:58 -07:00
Basit Mustafa 7f87ab9813 Added t2 instance types
New t2 instance types require HVM AMIs; the script's assumption of PVM
causes failures when using t2 instance types.

Author: Basit Mustafa <basitmustafa@computes-things-for-basit.local>

Closes #1446 from 24601/master and squashes the following commits:

01fe128 [Basit Mustafa] Makin' it pretty
392a95e [Basit Mustafa] Added t2 instance types
2014-07-18 12:23:47 -07:00
Nicholas Chammas 369aa84e8f name ec2 instances and security groups consistently
Security groups created by `spark-ec2` do not prepend “spark-“ to the
name.

Since naming the instances themselves is new to `spark-ec2`, it’s better
to change that pattern to match the existing naming pattern for the
security groups, rather than the other way around.

Author: Nicholas Chammas <nicholas.chammas@gmail.com>
Author: nchammas <nicholas.chammas@gmail.com>

Closes #1344 from nchammas/master and squashes the following commits:

f7e4581 [Nicholas Chammas] unrelated pep8 fix
a36eed0 [Nicholas Chammas] name ec2 instances and security groups consistently
de7292a [nchammas] Merge pull request #4 from apache/master
2e4fe00 [nchammas] Merge pull request #3 from apache/master
89fde08 [nchammas] Merge pull request #2 from apache/master
69f6e22 [Nicholas Chammas] PEP8 fixes
2627247 [Nicholas Chammas] broke up lines before they hit 100 chars
6544b7e [Nicholas Chammas] [SPARK-2065] give launched instances names
69da6cf [nchammas] Merge pull request #1 from apache/master
2014-07-10 12:56:00 -07:00
Andrew Or 56e009d4f0 [EC2] Add default history server port to ec2 script
Right now I have to open it manually
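
Opening that port via the master security group looks roughly like this in boto 2 (18080 is Spark's default history server port; the helper name and CIDR handling are illustrative):

```
def open_history_server_port(master_group, authorized_address='0.0.0.0/0'):
    """Authorize Spark's default history server port (18080) on the master security group."""
    master_group.authorize(ip_protocol='tcp', from_port=18080, to_port=18080,
                           cidr_ip=authorized_address)
```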

Author: Andrew Or <andrewor14@gmail.com>

Closes #1296 from andrewor14/hist-serv-port and squashes the following commits:

8895a1f [Andrew Or] Add default history server port to ec2 script
2014-07-08 16:49:31 +09:00
Zichuan Ye 62d4a0fa99 Fixing AWS instance type information based upon current EC2 data
Fixed a problem in the previous file in which some information regarding AWS instance types was wrong. That information was updated based upon current AWS EC2 data.

Author: Zichuan Ye <jerry@tangentds.com>

Closes #1156 from jerry86/master and squashes the following commits:

ff36e95 [Zichuan Ye] Fixing AWS instance type information based upon current EC2 data
2014-06-26 15:21:29 -07:00
Jean-Martin Archer 9cb64b2c54 SPARK-2166 - Listing of instances to be terminated before the prompt
Will list the EC2 instances before destroying the cluster.
This was added because it can be scary to destroy EC2
instances without knowing which ones will be impacted.
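
The idea reduces to printing the matched instances before the confirmation prompt; a sketch that reuses the get_existing_cluster() helper sketched earlier (Python 2, names illustrative):

```
import sys

def print_termination_plan(conn, cluster_name):
    """Show which instances will be destroyed before asking for confirmation."""
    master_nodes, slave_nodes = get_existing_cluster(conn, cluster_name)
    sys.stdout.write("The following instances will be terminated:\n")
    for inst in master_nodes + slave_nodes:
        sys.stdout.write("> %s (%s)\n" % (inst.public_dns_name, inst.id))
    return raw_input("Destroy cluster %s? (y/N) " % cluster_name)
```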

Author: Jean-Martin Archer <jeanmartin.archer@pulseenergy.com>

This patch had conflicts when merged, resolved by
Committer: Patrick Wendell <pwendell@gmail.com>

Closes #270 from j-martin/master and squashes the following commits:

826455f [Jean-Martin Archer] [SPARK-2611] Implementing recommendations
27b0a36 [Jean-Martin Archer] Listing of instances to be terminated before the prompt Will list the EC2 instances before detroying the cluster. This was added because it can be scary to destroy EC2 instances without knowing which one will be impacted.
2014-06-22 20:54:42 -07:00
Patrick Wendell b2ebf429e2 HOTFIX: bug caused by #941
This patch should have qualified the use of PIPE. This needs to be backported into 0.9 and 1.0.
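
The hotfix is about referencing `PIPE` through the module rather than assuming a bare-name import; a minimal illustration (the command shown is arbitrary):

```
import subprocess

# Qualify PIPE via the module instead of relying on "from subprocess import PIPE".
proc = subprocess.Popen(['ssh', 'example-host', 'echo hello'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
```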

Author: Patrick Wendell <pwendell@gmail.com>

Closes #1108 from pwendell/hotfix and squashes the following commits:

711c58d [Patrick Wendell] HOTFIX: bug caused by #941
2014-06-17 15:09:24 -07:00
Anant 8cd04c3eec SPARK-1990: added compatibility for python 2.6 for ssh_read command
https://issues.apache.org/jira/browse/SPARK-1990

There were some posts on the lists that spark-ec2 does not work with Python 2.6. In addition, we should check the Python version at the top of the script and exit if it's too old.
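
Since `subprocess.check_output` only arrived in Python 2.7, the 2.6-compatible path presumably falls back to `Popen`; a hedged sketch of such a helper:

```
import subprocess

def _check_output(*args, **kwargs):
    """check_output replacement that also works on Python 2.6 (sketch)."""
    if hasattr(subprocess, 'check_output'):
        return subprocess.check_output(*args, **kwargs)
    proc = subprocess.Popen(stdout=subprocess.PIPE, *args, **kwargs)
    output, _ = proc.communicate()
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, args)
    return output
```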

Author: Anant <anant.asty@gmail.com>

Closes #941 from anantasty/SPARK-1990 and squashes the following commits:

4ca441d [Anant] Implmented check_optput withinthe module to work with python 2.6
c6ed85c [Anant] added compatibility for python 2.6 for ssh_read command
2014-06-16 23:43:01 -07:00
Nicholas Chammas a2052a44f3 [SPARK-2065] give launched instances names
This update resolves [SPARK-2065](https://issues.apache.org/jira/browse/SPARK-2065). It gives launched EC2 instances descriptive names by using instance tags. Launched instances now show up in the EC2 console with these names.

I used `format()` with named parameters, which I believe is the recommended practice for string formatting in Python, but which doesn’t seem to be used elsewhere in the script.
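
Tag-based naming with named `format()` parameters might look like this in boto 2 (the exact name pattern and helper are assumptions):

```
def name_instances(cluster_name, master_nodes, slave_nodes):
    """Give each instance a descriptive Name tag so it shows up in the EC2 console."""
    for master in master_nodes:
        master.add_tag(key='Name',
                       value='{cn}-master-{iid}'.format(cn=cluster_name, iid=master.id))
    for slave in slave_nodes:
        slave.add_tag(key='Name',
                      value='{cn}-slave-{iid}'.format(cn=cluster_name, iid=slave.id))
```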

Author: Nicholas Chammas <nicholas.chammas@gmail.com>
Author: nchammas <nicholas.chammas@gmail.com>

Closes #1043 from nchammas/master and squashes the following commits:

69f6e22 [Nicholas Chammas] PEP8 fixes
2627247 [Nicholas Chammas] broke up lines before they hit 100 chars
6544b7e [Nicholas Chammas] [SPARK-2065] give launched instances names
69da6cf [nchammas] Merge pull request #1 from apache/master
2014-06-10 21:49:08 -07:00
Varakhedi Sujeet 11ded3f66f SPARK-1790: Update EC2 scripts to support r3 instance types
Author: Varakhedi Sujeet <svarakhedi@gopivotal.com>

Closes #960 from sujeetv/ec2-r3 and squashes the following commits:

3cb9fd5 [Varakhedi Sujeet] SPARK-1790: Update EC2 scripts to support r3 instance
2014-06-04 16:02:23 -07:00
Aaron Davidson ab7c62d573 Update spark-ec2 scripts for 1.0.0 on master
The change was previously committed only to branch-1.0 as part of a34e6fda1d

Author: Aaron Davidson <aaron@databricks.com>

This patch had conflicts when merged, resolved by
Committer: Patrick Wendell <pwendell@gmail.com>

Closes #938 from aarondav/sparkec2 and squashes the following commits:

067cc31 [Aaron Davidson] Update spark-ec2 scripts for 1.0.0 on master
2014-06-03 22:33:04 -07:00
Reynold Xin eea3aab4f2 Made spark_ec2.py PEP8 compliant.
The change set is actually pretty small -- mostly whitespace changes. Admittedly this is a scary change due to the lack of tests to cover the ec2 scripts, and also because indentation actually impacts control flow in Python ...

Look at changes without whitespace diff here: https://github.com/apache/spark/pull/891/files?w=1

Author: Reynold Xin <rxin@apache.org>

Closes #891 from rxin/spark-ec2-pep8 and squashes the following commits:

ac1bf11 [Reynold Xin] Made spark_ec2.py PEP8 compliant.
2014-06-01 15:39:04 -07:00
Patrick Wendell c0ab85d732 Version bump of spark-ec2 scripts
This will allow us to change things in spark-ec2 related to the 1.0 release.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #809 from pwendell/spark-ec2 and squashes the following commits:

59117fb [Patrick Wendell] Version bump of spark-ec2 scripts
2014-05-16 21:42:14 -07:00
msiddalingaiah bb2bb0cf6e Address SPARK-1717
I tested the change locally with Spark 0.9.1, but I can't test with 1.0.0 because there was no AMI for it at the time. It's a trivial fix, so it shouldn't cause any problems.

Author: msiddalingaiah <madhu@madhu.com>

Closes #641 from msiddalingaiah/master and squashes the following commits:

a4f7404 [msiddalingaiah] Address SPARK-1717
2014-05-04 21:59:10 -07:00
Allan Douglas R. de Oliveira bcb9b7fd4a EC2 script should exit with non-zero code on UsageError
This is especially important because some ssh errors are raised as UsageError, preventing automated usage of the script from detecting the failure.
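
The fix amounts to turning a caught `UsageError` into a non-zero exit status; a sketch of the entry point (the `real_main()` / `UsageError` names are assumed from the script and the description above):

```
import sys

def main():
    try:
        real_main()                    # the script's actual entry point (assumed name)
    except UsageError as e:            # exception class named in the description above
        sys.stderr.write("\nError: %s\n" % e)
        sys.exit(1)                    # non-zero so callers and automation see the failure
```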

Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #638 from douglaz/ec2_exit_code_fix and squashes the following commits:

5915e6d [Allan Douglas R. de Oliveira] EC2 script should exit with non-zero code on UsageError
2014-05-04 20:36:51 -07:00
Allan Douglas R. de Oliveira 4669a84ab1 EC2 configurable workers
Added an option to configure the number of worker instances and to set SPARK_MASTER_OPTS

Depends on: https://github.com/mesos/spark-ec2/pull/46

Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #612 from douglaz/ec2_configurable_workers and squashes the following commits:

d6c5d65 [Allan Douglas R. de Oliveira] Added master opts parameter
6c34671 [Allan Douglas R. de Oliveira] Use number of worker instances as string on template
ba528b9 [Allan Douglas R. de Oliveira] Added SPARK_WORKER_INSTANCES parameter
2014-05-03 16:52:19 -07:00
Harvey Feng 7b4203ab4c Add Spark v0.9.1 to ec2 launch script and use it as the default
Mainly ported from branch-0.9.

Author: Harvey Feng <hyfeng224@gmail.com>

Closes #385 from harveyfeng/0.9.1-ec2 and squashes the following commits:

769ac2f [Harvey Feng] Add Spark v0.9.1 to ec2 launch script and use it as the default
2014-04-10 18:25:54 -07:00
CodingCat 3eb009f362 SPARK-1156: allow user to login into a cluster without slaves
Reported in https://spark-project.atlassian.net/browse/SPARK-1156

The current spark-ec2 script doesn't allow the user to log in to a cluster without slaves. One of the issues caused by this behaviour is that when all the workers have died, the user cannot even log in to the cluster for debugging, etc.

Author: CodingCat <zhunansjtu@gmail.com>

Closes #58 from CodingCat/SPARK-1156 and squashes the following commits:

104af07 [CodingCat] output ERROR to stderr
9a71769 [CodingCat] do not allow user to start 0-slave cluster
24a7c79 [CodingCat] allow user to login into a cluster without slaves
2014-03-05 21:47:34 -08:00
Patrick Wendell 1fd2bfd3dd Remove remaining references to incubation
This removes some loose ends not caught by the other (incubating -> tlp) patches. @markhamstra this updates the version as you mentioned earlier.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #51 from pwendell/tlp and squashes the following commits:

d553b1b [Patrick Wendell] Remove remaining references to incubation
2014-03-02 01:00:16 -08:00
Xiangrui Meng b61435c7ff SPARK-1106: check key name and identity file before launch a cluster
I launched an EC2 cluster without providing a key name and an identity file. The error showed up after two minutes. It would be good to check those options before launch, given that EC2 billing rounds up to hours.

JIRA: https://spark-project.atlassian.net/browse/SPARK-1106
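
A minimal sketch of failing fast on missing launch options (following the script's usual `-k/--key-pair` and `-i/--identity-file` flags; helper name illustrative):

```
import sys

def validate_launch_opts(opts):
    """Fail fast on missing launch options instead of waiting for EC2 to reject the request."""
    if opts.key_pair is None:
        sys.stderr.write("ERROR: Must provide a key pair name (-k) to use on instances.\n")
        sys.exit(1)
    if opts.identity_file is None:
        sys.stderr.write("ERROR: Must provide an identity file (-i) for ssh connections.\n")
        sys.exit(1)
```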

Author: Xiangrui Meng <meng@databricks.com>

Closes #617 from mengxr/ec2 and squashes the following commits:

2dfb316 [Xiangrui Meng] check key name and identity file before launch a cluster
2014-02-18 18:30:02 -08:00
Shivaram Venkataraman 2414ed310e Merge pull request #598 from shivaram/master.
Update spark_ec2 to use 0.9.0 by default

Backports change from branch-0.9

Author: Shivaram Venkataraman <shivaram@eecs.berkeley.edu>

Closes #598 and squashes the following commits:

f6d3ed0 [Shivaram Venkataraman] Update spark_ec2 to use 0.9.0 by default Backports change from branch-0.9
2014-02-13 14:26:06 -08:00
Christian Lundgren 5fa53c02fc Add c3 instance types to Spark EC2
The number of disks for the c3 instance types taken from here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes

Author: Christian Lundgren <christian.lundgren@gameanalytics.com>

Closes #595 from chrisavl/branch-0.9 and squashes the following commits:

c8af5f9 [Christian Lundgren] Add c3 instance types to Spark EC2
(cherry picked from commit 19b4bb2b44)

Signed-off-by: Patrick Wendell <pwendell@gmail.com>
2014-02-13 12:46:47 -08:00
Shivaram Venkataraman 7c4e6e1bf1 Add i2 instance types to Spark EC2. 2014-01-10 12:44:55 -08:00
Prashant Sharma 59e8009b8d a few left over document change 2014-01-02 21:48:44 +05:30
Ewen Cheslack-Postava d17c142615 Force pseudo-tty allocation in spark-ec2 script.
ssh commands need the -t argument repeated twice if there is no local
tty, e.g. if the process running spark-ec2 uses nohup and the parent
process exits.
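
The relevant change is simply passing `-t` twice to ssh; a sketch of a command built that way (host, key path, and helper name are placeholders):

```
import subprocess

def ssh_command(identity_file, host, command):
    # '-t' twice forces pseudo-tty allocation even when spark-ec2 itself has no local tty,
    # e.g. when launched under nohup and the parent process has exited.
    return ['ssh', '-t', '-t', '-i', identity_file, 'root@%s' % host, command]

# Example usage (placeholder host/key):
#   subprocess.check_call(ssh_command('../my.pem', 'ec2-198-51-100-1.compute-1.amazonaws.com', 'true'))
```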
2013-12-16 08:09:37 -08:00
Ankur Dave bc9f7eacb9 Enable stopping and starting a spot cluster 2013-11-11 17:50:31 -08:00
Haoyuan Li 6f455553c9 expose UI port only 2013-11-10 16:00:09 -08:00