[SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py

The spark_ec2.py script uses public_dns_name everywhere except when testing SSH availability, which is done against the instances' public IP addresses. This breaks the script for users who deploy the cluster with a private-network-only security group. The fix is to use public_dns_name in the remaining place as well.
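The shape of the fix can be sketched as follows. This is an illustrative reconstruction, not the script's exact code: the `ssh_command` helper, the `identity_file`/`user` parameters, and the SSH options shown are assumptions standing in for spark_ec2.py's real plumbing. The one load-bearing detail is that the host passed to the availability check comes from `public_dns_name` rather than `ip_address`:

```python
import subprocess

def ssh_command(host, identity_file, user):
    # Hypothetical helper mirroring a typical spark_ec2.py-style SSH
    # invocation; the exact flags in the real script may differ.
    return [
        "ssh",
        "-o", "StrictHostKeyChecking=no",
        "-o", "ConnectTimeout=3",
        "-i", identity_file,
        "%s@%s" % (user, host),
    ]

def is_ssh_available(host, identity_file, user="ec2-user"):
    """Return True if a trivial SSH command on `host` succeeds."""
    ret = subprocess.call(
        ssh_command(host, identity_file, user) + ["true"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return ret == 0

def is_cluster_ssh_available(cluster_instances, identity_file):
    # The point of the patch: probe public_dns_name (not ip_address),
    # so clusters reachable only via DNS resolution on the private
    # network are still handled correctly.
    return all(
        is_ssh_available(host=i.public_dns_name, identity_file=identity_file)
        for i in cluster_instances
    )
```

Because the DNS name resolves to the private IP from inside the VPC and to the public IP from outside, using it consistently serves both deployment styles.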

Author: Matt Aasted <aasted@twitch.tv>

Closes #5302 from aasted/master and squashes the following commits:

60cf6ee [Matt Aasted] [SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py
Authored by Matt Aasted on 2015-04-06 23:50:48 -07:00; committed by Josh Rosen
parent a0846c4b63
commit 6f0d55d76f


@@ -809,7 +809,7 @@ def is_cluster_ssh_available(cluster_instances, opts):
     Check if SSH is available on all the instances in a cluster.
     """
     for i in cluster_instances:
-        if not is_ssh_available(host=i.ip_address, opts=opts):
+        if not is_ssh_available(host=i.public_dns_name, opts=opts):
             return False
     else:
         return True