spark-instrumented-optimizer/R
Felix Cheung a3626ca333 [SPARK-19387][SPARKR] Tests do not run with SparkR source package in CRAN check
## What changes were proposed in this pull request?

- This is caused by the changes in SPARK-18444 and SPARK-18643: we no longer install Spark when `master = ""` (the default). It is also related to SPARK-18449, since the real `master` value is not known at the time the R code in `sparkR.session` is run (`master` cannot default to "local" since it could be overridden by the spark-submit command line or Spark config).
- As a result, running SparkR as a package in an IDE works fine, but the CRAN check does not, since it launches SparkR via a non-interactive script.
- The fix is to add a check to the beginning of each test and of the vignettes. The same could be achieved by changing `sparkR.session()` to `sparkR.session(master = "local")` in the tests, but I think being more explicit is better.

## How was this patch tested?

Tested this by reverting the version to 2.1, since the check needs to download the release jar with a matching version. However, because there are changes in 2.2 (specifically around SparkR ML) that are incompatible with 2.1, some tests fail in this configuration. This will need to be ported to branch-2.1 and retested with the 2.1 release jar.

Tested manually as follows:
```
# modify DESCRIPTION to revert version to 2.1.0
SPARK_HOME=/usr/spark R CMD build pkg
# run cran check without SPARK_HOME
R CMD check --as-cran SparkR_2.1.0.tar.gz
```

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #16720 from felixcheung/rcranchecktest.
2017-02-14 13:51:27 -08:00
pkg [SPARK-19387][SPARKR] Tests do not run with SparkR source package in CRAN check 2017-02-14 13:51:27 -08:00
.gitignore [MINOR][R] add SparkR.Rcheck/ and SparkR_*.tar.gz to R/.gitignore 2016-08-21 10:31:25 -07:00
check-cran.sh [SPARK-18828][SPARKR] Refactor scripts for R 2017-01-16 13:49:12 -08:00
CRAN_RELEASE.md [SPARK-18590][SPARKR] build R source package when making distribution 2016-12-08 11:29:31 -08:00
create-docs.sh [SPARK-18828][SPARKR] Refactor scripts for R 2017-01-16 13:49:12 -08:00
create-rd.sh [SPARK-18828][SPARKR] Refactor scripts for R 2017-01-16 13:49:12 -08:00
DOCUMENTATION.md [MINOR][R][DOC] Fix R documentation generation instruction. 2016-06-05 13:03:02 -07:00
find-r.sh [SPARK-18828][SPARKR] Refactor scripts for R 2017-01-16 13:49:12 -08:00
install-dev.bat [SPARK-10500][SPARKR] sparkr.zip cannot be created if /R/lib is unwritable 2015-11-15 19:29:09 -08:00
install-dev.sh [SPARK-18828][SPARKR] Refactor scripts for R 2017-01-16 13:49:12 -08:00
install-source-package.sh [SPARK-18828][SPARKR] Refactor scripts for R 2017-01-16 13:49:12 -08:00
log4j.properties [SPARK-8350] [R] Log R unit test output to "unit-tests.log" 2015-06-15 08:16:22 -07:00
README.md [SPARK-18073][DOCS][WIP] Migrate wiki to spark.apache.org web site 2016-11-23 11:25:47 +00:00
run-tests.sh [SPARK-17674][SPARKR] check for warning in test output 2016-10-21 12:34:14 -07:00
WINDOWS.md [MINOR][SPARKR] Verbose build comment in WINDOWS.md rather than promoting default build without Hive 2016-08-31 09:06:23 -07:00

R on Spark

SparkR is an R package that provides a light-weight frontend to use Spark from R.

Installing SparkR

The SparkR libraries need to be created in $SPARK_HOME/R/lib. This can be done by running the script $SPARK_HOME/R/install-dev.sh. By default the script uses the system-wide installation of R. However, this can be changed to any user-installed location of R by setting the environment variable R_HOME to the full path of the base directory where R is installed, before running the install-dev.sh script. Example:

# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and Rscript
export R_HOME=/home/username/R
./install-dev.sh

SparkR development

Build Spark

Build Spark with Maven and include the -Psparkr profile to build the R package. For example, to use the default Hadoop versions you can run

build/mvn -DskipTests -Psparkr package

Running sparkR

You can start using SparkR by launching the SparkR shell with

./bin/sparkR

The sparkR script automatically creates a SparkContext with Spark, running in local mode by default. To specify the Spark master of a cluster for the automatically created SparkContext, you can run

./bin/sparkR --master "local[2]"

To set other options such as driver memory and executor memory, you can pass spark-submit arguments to ./bin/sparkR.
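For example, the following starts the shell with spark-submit options for the master and the driver memory (the values shown here are only illustrative):

# any spark-submit arguments after ./bin/sparkR are forwarded to spark-submit
./bin/sparkR --master "local[2]" --driver-memory 2g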

Using SparkR from RStudio

If you wish to use SparkR from RStudio or other R frontends, you will need to set some environment variables which point SparkR to your Spark installation. For example

# Set this to where Spark is installed
Sys.setenv(SPARK_HOME="/Users/username/spark")
# This line loads SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sparkR.session()

Making changes to SparkR

The instructions for making contributions to Spark also apply to SparkR. If you only make R file changes (i.e. no Scala changes), then you can just re-install the R package using R/install-dev.sh and test your changes. Once you have made your changes, please include unit tests for them and run the existing unit tests using the R/run-tests.sh script as described below.
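A typical R-only development loop, run from the Spark root directory, might look like this (a sketch built from the scripts mentioned above):

# re-build and re-install the SparkR package after changing R sources
./R/install-dev.sh
# run the SparkR unit tests against the re-installed package
./R/run-tests.sh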

Generating documentation

The SparkR documentation (Rd files and HTML files) is not part of the source repository. To generate it, you can run the script R/create-docs.sh. This script uses devtools and knitr to generate the docs, so these packages need to be installed on the machine before using the script. You may also need to install other prerequisites; see R/DOCUMENTATION.md.
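As a rough sketch, assuming the packages are installed from the CRAN mirror shown (any mirror works), the prerequisite installation and the doc build can be done with:

# install the R packages used by create-docs.sh
R -e 'install.packages(c("devtools", "knitr"), repos="https://cran.r-project.org")'
# generate the Rd and HTML documentation
./R/create-docs.sh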

Examples, Unit tests

SparkR comes with several sample programs in the examples/src/main/r directory. To run one of them, use ./bin/spark-submit <filename> <args>. For example:

./bin/spark-submit examples/src/main/r/dataframe.R

You can also run the unit tests for SparkR by running the following. Note that you need to install the testthat package first:

R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
./R/run-tests.sh

Running on YARN

The ./bin/spark-submit script can also be used to submit jobs to YARN clusters. You will need to set the YARN configuration directory (YARN_CONF_DIR) before doing so. For example, on CDH you can run

export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R