Sun Rui 45e3be5c13 [SPARK-10049] [SPARKR] Support collecting data of ArrayType in DataFrame.
This PR:
1.  Enhances reflection in RBackend, automatically matching a Java array to a Scala Seq when looking up methods. Utility functions such as seq() and listToSeq() on the R side can be removed, as they would conflict with the SerDe logic that transfers a Scala Seq to the R side.

2.  Enhances the SerDe to support transferring a Scala Seq to the R side. Data of ArrayType in a DataFrame is observed to be of Scala Seq type after collection.

3.  Supports ArrayType in createDataFrame() (see the sketch after this list).
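A minimal sketch of what this enables from the R side, assuming a running SparkR session in which sqlContext has already been created (the column and variable names are illustrative):

# Build a DataFrame that has an ArrayType column using a SQL array() expression
df <- createDataFrame(sqlContext, faithful)
withArray <- selectExpr(df, "array(eruptions, waiting) as ew")

# After this change, collecting returns the array column as an R list per row
local <- collect(withArray)
local$ew[[1]]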

Author: Sun Rui <rui.sun@intel.com>

Closes #8458 from sun-rui/SPARK-10049.
2015-09-10 12:21:13 -07:00

R on Spark

SparkR is an R package that provides a light-weight frontend to use Spark from R.

SparkR development

Build Spark

Build Spark with Maven and include the -Psparkr profile to build the R package. For example, to use the default Hadoop versions you can run:

  build/mvn -DskipTests -Psparkr package

Running sparkR

You can start using SparkR by launching the SparkR shell with

./bin/sparkR

The sparkR script automatically creates a SparkContext, using Spark in local mode by default. To specify a different Spark master for the automatically created SparkContext, you can run:

./bin/sparkR --master "local[2]"

To set other options, such as driver memory or executor memory, you can pass spark-submit arguments to ./bin/sparkR.
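
For instance (--driver-memory and --executor-memory are standard spark-submit options; the 2g values are purely illustrative):

./bin/sparkR --driver-memory 2g --executor-memory 2g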

Using SparkR from RStudio

If you wish to use SparkR from RStudio or other R frontends, you will need to set some environment variables that point SparkR to your Spark installation. For example:

# Set this to where Spark is installed
Sys.setenv(SPARK_HOME="/Users/shivaram/spark")
# This line loads SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sc <- sparkR.init(master="local")
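
From there you can, for example, create a SQLContext and build a DataFrame from a local R data frame (a brief illustration using the faithful dataset that ships with R):

# Create a SQLContext from the SparkContext
sqlContext <- sparkRSQL.init(sc)
# Convert a local R data frame into a Spark DataFrame and inspect the first rows
df <- createDataFrame(sqlContext, faithful)
head(df)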

Making changes to SparkR

The instructions for making contributions to Spark also apply to SparkR. If you only make changes to R files (i.e. no Scala changes), you can just re-install the R package using R/install-dev.sh and test your changes. Please include unit tests for any changes you make, and run the existing unit tests using the run-tests.sh script as described below.
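
A typical edit-test cycle for R-only changes might look like this (run from the top-level Spark directory):

# Re-install the SparkR package after editing R files
R/install-dev.sh
# Run the SparkR unit tests
R/run-tests.sh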

Generating documentation

The SparkR documentation (Rd files and HTML files) is not part of the source repository. To generate it, you can run the script R/create-docs.sh. This script uses devtools and knitr to generate the docs, and these packages need to be installed on the machine before using the script.
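
For example, to install the prerequisites and then build the docs (the CRAN mirror URL is just an example, matching the one used for testthat below):

R -e 'install.packages(c("devtools", "knitr"), repos="http://cran.us.r-project.org")'
R/create-docs.sh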

Examples, Unit tests

SparkR comes with several sample programs in the examples/src/main/r directory. To run one of them, use ./bin/sparkR <filename> <args>. For example:

./bin/sparkR examples/src/main/r/dataframe.R

You can also run the unit tests for SparkR by running the following (you need to install the testthat package first):

R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
./R/run-tests.sh

Running on YARN

The ./bin/spark-submit and ./bin/sparkR scripts can also be used to submit jobs to YARN clusters. You will need to set the YARN_CONF_DIR environment variable before doing so. For example, on CDH you can run:

export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R