# [SPARK-36631][R] Ask users if they want to download and install SparkR in non Spark scripts

Hyukjin Kwon · commit `e983ba8fce` · spark-instrumented-optimizer/R
### What changes were proposed in this pull request?

This PR proposes to ask users whether they want to download and install the Spark package when they use SparkR installed from CRAN.

A `SPARKR_ASK_INSTALLATION` environment variable was also added in case other notebook projects are affected.
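
The intended use is presumably to suppress the new prompt in automated or embedded environments. A minimal sketch, assuming the variable is checked against the string `"FALSE"` (the exact accepted values and fallback behavior are not spelled out here):

```
# Hypothetical usage: disable the interactive prompt before creating the session.
# The value "FALSE" is an assumption; check the SparkR sources for the exact check.
Sys.setenv(SPARKR_ASK_INSTALLATION = "FALSE")

library(SparkR)
sparkR.session(master = "local")
```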

### Why are the changes needed?

This is required for CRAN. SparkR is currently removed from CRAN: https://cran.r-project.org/web/packages/SparkR/index.html.
See also https://lists.apache.org/thread.html/r02b9046273a518e347dfe85f864d23d63d3502c6c1edd33df17a3b86%40%3Cdev.spark.apache.org%3E

### Does this PR introduce _any_ user-facing change?

Yes, `sparkR.session(...)` will ask whether users want to download and install the Spark package when they are in a plain R shell or `Rscript`.

### How was this patch tested?

**R shell**

Valid input (`n`):

```
> sparkR.session(master="local")
Spark not found in SPARK_HOME:
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n): n
```
```
Error in sparkCheckInstall(sparkHome, master, deployMode) :
  Please make sure Spark package is installed in this machine.
- If there is one, set the path in sparkHome parameter or environment variable SPARK_HOME.
- If not, you may run install.spark function to do the job.
```

Invalid input:

```
> sparkR.session(master="local")
Spark not found in SPARK_HOME:
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n): abc
```
```
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n):
```

Valid input (`y`):

```
> sparkR.session(master="local")
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n): y
Spark not found in the cache directory. Installation will start.
MirrorUrl not provided.
Looking for preferred site from apache website...
Preferred mirror site found: https://ftp.riken.jp/net/apache/spark
Downloading spark-3.3.0 for Hadoop 2.7 from:
- https://ftp.riken.jp/net/apache/spark/spark-3.3.0/spark-3.3.0-bin-hadoop2.7.tgz
trying URL 'https://ftp.riken.jp/net/apache/spark/spark-3.3.0/spark-3.3.0-bin-hadoop2.7.tgz'
...
```

**Rscript**

```
cat tmp.R
```
```
library(SparkR, lib.loc = c(file.path(".", "R", "lib")))
sparkR.session(master="local")
```

```
Rscript tmp.R
```

Valid input (`n`):

```
Spark not found in SPARK_HOME:
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n): n
```
```
Error in sparkCheckInstall(sparkHome, master, deployMode) :
  Please make sure Spark package is installed in this machine.
- If there is one, set the path in sparkHome parameter or environment variable SPARK_HOME.
- If not, you may run install.spark function to do the job.
Calls: sparkR.session -> sparkCheckInstall
```

Invalid input:

```
Spark not found in SPARK_HOME:
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n): abc
```
```
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n):
```

Valid input (`y`):

```
...
Spark not found in SPARK_HOME:
Will you download and install (or reuse if it exists) Spark package under the cache [/.../Caches/spark]? (y/n): y
Spark not found in the cache directory. Installation will start.
MirrorUrl not provided.
Looking for preferred site from apache website...
Preferred mirror site found: https://ftp.riken.jp/net/apache/spark
Downloading spark-3.3.0 for Hadoop 2.7 from:
...
```

`bin/sparkR` and `bin/spark-submit *.R` were also tested manually and are not affected.
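
For reference, the error message shown in the transcripts above points to two remedies when the download is declined. A minimal sketch of both, with a placeholder `/path/to/spark` location:

```
library(SparkR)

# Remedy 1: point SparkR at an existing Spark installation
# (equivalently, set the SPARK_HOME environment variable).
sparkR.session(master = "local", sparkHome = "/path/to/spark")

# Remedy 2: install Spark into the local cache first, then start the session.
install.spark()
sparkR.session(master = "local")
```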

Closes #33887 from HyukjinKwon/SPARK-36631.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-09-02 13:27:43 +09:00
| File | Last commit | Date |
|---|---|---|
| pkg | [SPARK-36631][R] Ask users if they want to download and install SparkR in non Spark scripts | 2021-09-02 13:27:43 +09:00 |
| .gitignore | [MINOR][R] add SparkR.Rcheck/ and SparkR_*.tar.gz to R/.gitignore | 2016-08-21 10:31:25 -07:00 |
| check-cran.sh | [SPARK-29339][R] Support Arrow 0.14 in vectoried dapply and gapply (test it in AppVeyor build) | 2019-10-04 08:56:45 +09:00 |
| CRAN_RELEASE.md | Spelling r common dev mlib external project streaming resource managers python | 2020-11-27 10:22:45 -06:00 |
| create-docs.sh | [MINOR][R] small tidying of sh scripts for R | 2020-04-30 16:58:05 -07:00 |
| create-rd.sh | [MINOR][R] small tidying of sh scripts for R | 2020-04-30 16:58:05 -07:00 |
| DOCUMENTATION.md | [SPARK-34643][R][DOCS] Use CRAN URL in canonical form | 2021-03-05 10:08:11 -08:00 |
| find-r.sh | [SPARK-18828][SPARKR] Refactor scripts for R | 2017-01-16 13:49:12 -08:00 |
| install-dev.bat | Spelling r common dev mlib external project streaming resource managers python | 2020-11-27 10:22:45 -06:00 |
| install-dev.sh | [SPARK-22167][R][BUILD] sparkr packaging issue allow zinc | 2017-10-02 11:46:51 -07:00 |
| install-source-package.sh | [SPARK-20123][BUILD] SPARK_HOME variable might have spaces in it(e.g. $SPARK… | 2017-04-02 15:31:13 +01:00 |
| log4j.properties | [SPARK-8350] [R] Log R unit test output to "unit-tests.log" | 2015-06-15 08:16:22 -07:00 |
| README.md | [SPARK-35180][BUILD] Allow to build SparkR with SBT | 2021-04-22 20:56:33 +09:00 |
| run-tests.sh | [SPARK-33304][R][SQL] Add from_avro and to_avro functions to SparkR | 2020-11-19 09:52:29 +09:00 |
| WINDOWS.md | [SPARK-32073][R] Drop R < 3.5 support | 2020-06-24 11:05:27 +09:00 |

# R on Spark

SparkR is an R package that provides a light-weight frontend to use Spark from R.

## Installing sparkR

Libraries of SparkR need to be created in `$SPARK_HOME/R/lib`. This can be done by running the script `$SPARK_HOME/R/install-dev.sh`. By default the above script uses the system-wide installation of R. However, this can be changed to any user-installed location of R by setting the environment variable `R_HOME` to the full path of the base directory where R is installed, before running the `install-dev.sh` script. Example:

```
# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and Rscript
export R_HOME=/home/username/R
./install-dev.sh
```

## SparkR development

### Build Spark

Build Spark with Maven or SBT, and include the `-Psparkr` profile to build the R package. For example, to use the default Hadoop versions you can run:

```
# Maven
./build/mvn -DskipTests -Psparkr package

# SBT
./build/sbt -Psparkr package
```

### Running sparkR

You can start using SparkR by launching the SparkR shell with

```
./bin/sparkR
```

The sparkR script automatically creates a SparkContext with Spark in local mode by default. To specify the Spark master of a cluster for the automatically created SparkContext, you can run

```
./bin/sparkR --master "local[2]"
```

To set other options such as driver memory or executor memory, you can pass the spark-submit arguments to `./bin/sparkR`; a sketch of an in-R alternative follows.
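
As a rough illustration of the in-R route, `sparkR.session()` also accepts a `sparkConfig` list. The property value below is illustrative; which properties can be set this way is a detail to confirm in the SparkR documentation:

```
library(SparkR)

# Illustrative configuration; driver-side properties such as this one generally
# only take effect when the session is created directly from the R shell or RStudio.
sparkR.session(master = "local[2]",
               sparkConfig = list(spark.driver.memory = "2g"))
```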

### Using SparkR from RStudio

If you wish to use SparkR from RStudio, please refer to the SparkR documentation.
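
As a rough sketch of what that setup typically looks like (the `SPARK_HOME` path below is a placeholder; see the SparkR documentation for the authoritative steps):

```
# Point R at the Spark installation (placeholder path) and load SparkR from it.
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
  Sys.setenv(SPARK_HOME = "/path/to/spark")
}
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
sparkR.session(master = "local[*]", sparkHome = Sys.getenv("SPARK_HOME"))
```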

### Making changes to SparkR

The instructions for making contributions to Spark also apply to SparkR. If you only make R file changes (i.e. no Scala changes), then you can just re-install the R package using `R/install-dev.sh` and test your changes. Once you have made your changes, please include unit tests for them and run the existing unit tests using the `R/run-tests.sh` script as described below.

### Generating documentation

The SparkR documentation (Rd files and HTML files) is not a part of the source repository. To generate it, run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs, so these packages need to be installed on the machine before using the script. You may also need to install additional prerequisites; see `R/DOCUMENTATION.md`.
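
A minimal sketch of installing the two packages named above (additional prerequisites may apply; `R/DOCUMENTATION.md` has the details):

```
# devtools and knitr are used by create-docs.sh; other packages may also be needed.
install.packages(c("devtools", "knitr"))
```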

## Examples, Unit tests

SparkR comes with several sample programs in the `examples/src/main/r` directory. To run one of them, use `./bin/spark-submit <filename> <args>`. For example:

```
./bin/spark-submit examples/src/main/r/dataframe.R
```
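
For a sense of what such a program contains, here is a minimal SparkR script in the same spirit (this is a sketch, not the contents of `dataframe.R`):

```
library(SparkR)
sparkR.session(appName = "MinimalDataFrameExample")

# Convert a built-in R data.frame into a SparkDataFrame and inspect it.
df <- createDataFrame(faithful)
printSchema(df)
head(df)

sparkR.session.stop()
```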

You can run R unit tests by following the instructions under Running R Tests.

## Running on YARN

The `./bin/spark-submit` script can also be used to submit jobs to YARN clusters. You will need to set the YARN conf dir before doing so. For example, on CDH you can run:

```
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```