547df97715
This PR simply points to the IntelliJ wiki page instead of also including IntelliJ notes in the docs. The intent, however, is to also update the wiki page with updated tips. This is the text I propose for the IntelliJ section on the wiki. I realize it omits some of the existing instructions on the wiki, about enabling Hive, but I think those are actually optional.

------

IntelliJ supports both Maven- and SBT-based projects. It is recommended, however, to import Spark as a Maven project. Choose "Import Project..." from the File menu, and select the `pom.xml` file in the Spark root directory.

It is fine to leave all settings at their default values in the Maven import wizard, with two caveats. First, it is usually useful to enable "Import Maven projects automatically", since changes to the project structure will then automatically update the IntelliJ project. Second, note the step that prompts you to choose active Maven build profiles. As documented above, some build configurations require specific profiles to be enabled. The same profiles that are enabled with `-P[profile name]` above may be enabled on this screen. For example, if developing for Hadoop 2.4 with YARN support, enable the profiles `yarn` and `hadoop-2.4`. These selections can be changed later by accessing the "Maven Projects" tool window from the View menu and expanding the Profiles section.

"Rebuild Project" can fail the first time the project is compiled, because generated source files are not created automatically. Try clicking the "Generate Sources and Update Folders For All Projects" button in the "Maven Projects" tool window to manually generate these sources.

Compilation may fail with an error like "scalac: bad option: -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar". If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the "Additional compiler options" field. Compilation will then work, although the option will return when the project is reimported.

Author: Sean Owen <sowen@cloudera.com>

Closes #3952 from srowen/SPARK-5136 and squashes the following commits:

f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including IntelliJ notes in the docs
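For reference, the profiles named in the wizard map directly onto command-line Maven flags. A minimal sketch of the equivalent command-line build; the hadoop.version value shown is illustrative:

# Build Spark for Hadoop 2.4 with YARN support, enabling the same
# profiles chosen in the IntelliJ import wizard
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package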
Welcome to the Spark documentation!
This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.
Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.
Generating the Documentation HTML
We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.
In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those text files directly if you want. Start with index.md.
The markdown code can be compiled to HTML using the Jekyll tool. Jekyll and a few dependencies must be installed for this to work. We recommend installing via the Ruby Gem dependency manager. Since the exact HTML output varies between versions of Jekyll and its dependencies, we list specific versions here in some cases:
$ sudo gem install jekyll
$ sudo gem install jekyll-redirect-from
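Since the output varies with gem versions, you may want to pin them explicitly. A hedged sketch; the version numbers here are purely illustrative, not prescribed:

$ sudo gem install jekyll -v 2.5.3
$ sudo gem install jekyll-redirect-from -v 0.8.0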
Execute jekyll from the docs/ directory. Compiling the site with Jekyll will create a directory called _site containing index.html as well as the rest of the compiled files.
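Concretely, a minimal build-and-inspect sequence from the Spark source root might look like this (the ls step just confirms the output):

$ cd docs
$ jekyll build
# The compiled site, starting at index.html, now lives in _site/
$ ls _site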
You can modify the default Jekyll build as follows:
# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build
# Serve content locally on port 4000
$ jekyll serve --watch
# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
Pygments
We also use pygments (http://pygments.org) for syntax highlighting in documentation markdown pages, so you will also need to install that (it requires Python) by running sudo pip install Pygments.

To mark a block of code in your markdown to be syntax highlighted by jekyll during the compile phase, use the following syntax:
{% highlight scala %}
// Your scala code goes here, you can replace scala with many other
// supported languages too.
{% endhighlight %}
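For example, the same construct with another supported language name, assuming your Pygments installation includes its lexer:

{% highlight python %}
# Python code here is highlighted with the Pygments python lexer
{% endhighlight %}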
Sphinx
We use Sphinx to generate Python API docs, so you will need to install it by running sudo pip install sphinx.
API Docs (Scaladoc and Sphinx)
You can build just the Spark scaladoc by running build/sbt doc from the SPARK_PROJECT_ROOT directory.

Similarly, you can build just the PySpark docs by running make html from the SPARK_PROJECT_ROOT/python/docs directory. Documentation is only generated for classes that are listed as public in __init__.py.
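Taken together, a hedged sketch of building both API doc sets by hand (paths relative to the Spark source root; Sphinx's default Makefile writes its output to _build/html):

# Scaladoc for the Scala/Java API
$ build/sbt doc
# Sphinx HTML docs for the PySpark API
$ cd python/docs
$ make html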
When you run jekyll in the docs directory, it will also copy over the scaladoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a jekyll plugin to run build/sbt doc before building the site, so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using Sphinx.
NOTE: To skip the step of building and copying over the Scala and Python API docs, run SKIP_API=1 jekyll.
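For example, to iterate on prose changes locally without regenerating the API docs each time, the environment variable combines with the serve mode shown earlier:

$ SKIP_API=1 jekyll serve --watch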