spark-instrumented-optimizer/docs/storage-openstack-swift.md
Steve Loughran 2cf83c4783 [SPARK-7481][BUILD] Add spark-hadoop-cloud module to pull in object store access.
## What changes were proposed in this pull request?

Add a new `spark-hadoop-cloud` module and maven profile to pull in object store support from `hadoop-openstack`, `hadoop-aws` and `hadoop-azure` (Hadoop 2.7+) JARs, along with their dependencies, fixing up the dependencies so that everything works, in particular Jackson.

It restores `s3n://` access to S3, adds its `s3a://` replacement, OpenStack `swift://`, and Azure `wasb://`.

There's a documentation page, `cloud_integration.md`, which covers the basic details of using Spark with object stores, referring the reader to the supplier's own documentation, with specific warnings on security and the possible mismatch between a store's behavior and that of a filesystem. In particular, users are advised to be very cautious when trying to use an object store as the destination of data, and to consult the documentation of the storage supplier and the connector.

(this is the successor to #12004; I can't re-open it)

## How was this patch tested?

Downstream tests exist in [https://github.com/steveloughran/spark-cloud-examples/tree/master/cloud-examples](https://github.com/steveloughran/spark-cloud-examples/tree/master/cloud-examples)

Those verify that the dependencies are sufficient to allow downstream applications to work with s3a, azure wasb and swift storage connectors, and perform basic IO & dataframe operations thereon. All seems well.

Manual clean build & verification that the assembly contains the relevant aws-* and hadoop-* artifacts on Hadoop 2.6; the azure artifacts on a hadoop-2.7 profile.

SBT build: `build/sbt -Phadoop-cloud -Phadoop-2.7 package`
Maven build: `mvn install -Phadoop-cloud -Phadoop-2.7`

This PR *does not* update `dev/deps/spark-deps-hadoop-2.7` or `dev/deps/spark-deps-hadoop-2.6`, because unless the hadoop-cloud profile is enabled, no extra JARs show up in the dependency list. The dependency check in Jenkins isn't setting the property, so the new JARs aren't visible.

Author: Steve Loughran <stevel@apache.org>
Author: Steve Loughran <stevel@hortonworks.com>

Closes #17834 from steveloughran/cloud/SPARK-7481-current.
2017-05-07 10:15:31 +01:00


---
layout: global
title: Accessing OpenStack Swift from Spark
---

Spark's support for Hadoop InputFormat allows it to process data in OpenStack Swift using the same URI formats as in Hadoop. You can specify a path in Swift as input through a URI of the form `swift://container.PROVIDER/path`. You will also need to set your Swift security credentials, through `core-site.xml` or via `SparkContext.hadoopConfiguration`. The current Swift driver requires Swift to use the Keystone authentication method, or its Rackspace-specific predecessor.
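For example, once the credentials described below have been configured, a Swift path can be read like any other Hadoop-compatible path. A minimal `spark-shell` sketch (the container name `logs` and provider name `SparkTest` are hypothetical placeholders):

{% highlight scala %}
// In spark-shell, `sc` is the pre-built SparkContext.
// Credentials are picked up from core-site.xml or sc.hadoopConfiguration.
val logs = sc.textFile("swift://logs.SparkTest/2017/05/07/access.log")
logs.take(5).foreach(println)
{% endhighlight %}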

# Configuring Swift for Better Data Locality

Although not mandatory, it is recommended to configure the proxy server of Swift with `list_endpoints` to have better data locality. More information is available in the OpenStack Swift documentation for the `list_endpoints` middleware.
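As a rough illustration only (the pipeline contents vary by deployment and should be taken from your existing configuration; consult the Swift documentation), enabling the middleware typically amounts to adding `list_endpoints` to the proxy server pipeline in `proxy-server.conf`:

{% highlight ini %}
[pipeline:main]
# illustrative pipeline; add list_endpoints to whatever pipeline the deployment already uses
pipeline = catch_errors healthcheck cache list_endpoints authtoken keystoneauth proxy-server

[filter:list_endpoints]
use = egg:swift#list_endpoints
{% endhighlight %}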

# Dependencies

The Spark application should include the `hadoop-openstack` dependency, which can be done by including the `hadoop-cloud` module for the specific version of Spark used. For example, for Maven support, add the following to the `pom.xml` file:

{% highlight xml %}
...
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>hadoop-cloud_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>
...
{% endhighlight %}
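For SBT builds, a rough equivalent in `build.sbt` would be the following sketch, assuming the same artifact coordinates as the Maven example above and a placeholder Spark version:

{% highlight scala %}
// build.sbt sketch: coordinates mirror the Maven example above; the version is a placeholder
val sparkVersion = "2.2.0"
libraryDependencies += "org.apache.spark" % "hadoop-cloud_2.11" % sparkVersion
{% endhighlight %}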

# Configuration Parameters

Create `core-site.xml` and place it inside Spark's `conf` directory. The main parameters that need to be configured are the authentication parameters required by Keystone.

The following table contains a list of the Keystone parameters. `PROVIDER` can be any (alphanumeric) name.

| Property Name | Meaning | Required |
| --- | --- | --- |
| `fs.swift.service.PROVIDER.auth.url` | Keystone Authentication URL | Mandatory |
| `fs.swift.service.PROVIDER.auth.endpoint.prefix` | Keystone endpoints prefix | Optional |
| `fs.swift.service.PROVIDER.tenant` | Tenant | Mandatory |
| `fs.swift.service.PROVIDER.username` | Username | Mandatory |
| `fs.swift.service.PROVIDER.password` | Password | Mandatory |
| `fs.swift.service.PROVIDER.http.port` | HTTP port | Mandatory |
| `fs.swift.service.PROVIDER.region` | Keystone region | Mandatory |
| `fs.swift.service.PROVIDER.public` | Indicates whether to use the public (off cloud) or private (in cloud; no transfer fees) endpoints | Mandatory |

For example, assume `PROVIDER=SparkTest` and Keystone contains user `tester` with password `testing` defined for tenant `test`. Then `core-site.xml` should include:

{% highlight xml %}
<configuration>
  <property>
    <name>fs.swift.service.SparkTest.auth.url</name>
    <value>http://127.0.0.1:5000/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.endpoint.prefix</name>
    <value>endpoints</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.http.port</name>
    <value>8080</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.region</name>
    <value>RegionOne</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.public</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.tenant</name>
    <value>test</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.username</name>
    <value>tester</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.password</name>
    <value>testing</value>
  </property>
</configuration>
{% endhighlight %}

Notice that `fs.swift.service.PROVIDER.tenant`, `fs.swift.service.PROVIDER.username`, and `fs.swift.service.PROVIDER.password` contain sensitive information, so keeping them in `core-site.xml` is not always a good approach. We suggest keeping those parameters in `core-site.xml` for testing purposes when running Spark via `spark-shell`. For job submissions they should be provided via `sparkContext.hadoopConfiguration`.
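For example, a minimal sketch of setting the same `SparkTest` parameters programmatically before reading from Swift (all values, and the container name `logs`, are placeholders):

{% highlight scala %}
val hc = sc.hadoopConfiguration
hc.set("fs.swift.service.SparkTest.auth.url", "http://127.0.0.1:5000/v2.0/tokens")
hc.set("fs.swift.service.SparkTest.auth.endpoint.prefix", "endpoints")
hc.set("fs.swift.service.SparkTest.http.port", "8080")
hc.set("fs.swift.service.SparkTest.region", "RegionOne")
hc.set("fs.swift.service.SparkTest.public", "true")
hc.set("fs.swift.service.SparkTest.tenant", "test")
hc.set("fs.swift.service.SparkTest.username", "tester")
hc.set("fs.swift.service.SparkTest.password", "testing")

// With the configuration set above, no core-site.xml entry is needed for this read.
val rdd = sc.textFile("swift://logs.SparkTest/path/to/data")
{% endhighlight %}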