Welcome to the Spark documentation!
This README will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., Markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.
## Generating the Documentation HTML
We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.
In this directory you will find text files formatted using Markdown, with an `.md` suffix. You can read those text files directly if you want; start with `index.md`.
The Markdown code can be compiled to HTML using the Jekyll tool. To use the `jekyll` command, you will need to have Jekyll installed; the easiest way to do this is via a Ruby gem (see the Jekyll installation instructions). If kramdown is not already installed, install it with `sudo gem install kramdown`.
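A typical install, assuming a system Ruby with RubyGems (depending on your setup, `sudo` may not be needed):

```
# Install Jekyll (pulls in its dependencies) and the kramdown Markdown processor
$ sudo gem install jekyll
$ sudo gem install kramdown
```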
Execute `jekyll build` from the `docs/` directory. Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.
You can modify the default Jekyll build as follows:

```
# Skip generating API docs (which takes a while)
$ SKIP_SCALADOC=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
```
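With `jekyll serve --watch`, the generated site is served at http://localhost:4000 and is rebuilt automatically whenever you edit the Markdown sources.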
## Pygments
We also use pygments (http://pygments.org) for syntax highlighting in documentation Markdown pages, so you will also need to install it (it requires Python) by running `sudo easy_install Pygments`.
To mark a block of code in your Markdown to be syntax highlighted by Jekyll during the compile phase, use the following syntax:
{% highlight scala %}
// Your scala code goes here, you can replace scala with many other
// supported languages too.
{% endhighlight %}
## API Docs (Scaladoc and Epydoc)
You can build just the Spark Scaladoc by running `sbt/sbt doc` from the `SPARK_PROJECT_ROOT` directory. Similarly, you can build just the PySpark Epydoc by running `epydoc --config epydoc.conf` from the `SPARK_PROJECT_ROOT/pyspark` directory. Documentation is only generated for classes that are listed as public in `__init__.py`.
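For reference, the two builds back to back, assuming you start in `SPARK_PROJECT_ROOT`:

```
# Generate the Scaladoc for the Spark subprojects
$ sbt/sbt doc

# Generate the PySpark API docs with Epydoc
$ cd pyspark
$ epydoc --config epydoc.conf
```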
When you run `jekyll` in the `docs` directory, it will also copy over the Scaladoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a Jekyll plugin to run `sbt/sbt doc` before building the site, so if you haven't run it (recently) it may take some time as it generates all of the Scaladoc. The Jekyll plugin also generates the PySpark docs using Epydoc.
NOTE: To skip the step of building and copying over the Scala and Python API docs, run `SKIP_API=1 jekyll`.