spark-instrumented-optimizer/R
hyukjinkwon 08e0d033b4 [SPARK-21093][R] Terminate R's worker processes in the parent of R's daemon to prevent a leak
## What changes were proposed in this pull request?

This is a retry of #18320, which was reverted due to unexpected test failures with a -10 error code.

I was unable to reproduce the failures on MacOS, CentOS and Ubuntu; they happened only on Jenkins. So, tests were run to verify this, and the previous attempt was reverted here - https://github.com/apache/spark/pull/18456

This new approach was tested in https://github.com/apache/spark/pull/18463.

**Test results**:

- With the suspicious part of the change from the previous attempt (466325d3fd)

  Tests ran 4 times: 2 passed and 2 failed.

- Without the suspicious part of the change from the previous attempt (466325d3fd)

  Tests ran 5 times and they all passed.

- With this new approach (0a7589c09f)

  Tests ran 5 times and they all passed.

It looks like the cause is as below (see 466325d3fd):

```diff
+ exitCode <- 1
...
+   data <- parallel:::readChild(child)
+   if (is.raw(data)) {
+     if (unserialize(data) == exitCode) {
      ...
+     }
+   }

...

- parallel:::mcexit(0L)
+ parallel:::mcexit(0L, send = exitCode)
```

I think there are two possibilities:

 - `parallel:::mcexit(.. , send = exitCode)`

   https://stat.ethz.ch/R-manual/R-devel/library/parallel/html/mcfork.html

   > It sends send to the master (unless NULL) and then shuts down the child process.

   However, it looks possible that the parent attempts to terminate the child right after receiving our custom exit code, so the child gets terminated between "send" and "shuts down" and fails to exit properly.

 - A bug between `parallel:::mcexit(..., send = ...)` and `parallel:::readChild`.

**Proposal**:

To resolve this, I decided to avoid both possibilities with the new approach here (9ff89a7859). To support this idea, I quote the relevant documentation below:

https://stat.ethz.ch/R-manual/R-devel/library/parallel/html/mcfork.html

> `readChild` and `readChildren` return a raw vector with a "pid" attribute if data were available, an integer vector of length one with the process ID if a child terminated or `NULL` if the child no longer exists (no children at all for `readChildren`).

`readChild` returns "an integer vector of length one with the process ID if a child terminated", so we can check whether the returned value is an `integer` equal to the selected child's process ID. I believe this makes sure that the children have exited.

In case children happen to send any data manually to the parent (which is why the suspicious part of the change (466325d3fd) was introduced), the data will be raw bytes and will simply be discarded; the loop then reads again and checks whether the next value is an `integer`.
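
To illustrate the idea, here is a rough sketch (not the exact code in the patch; the function name, the use of `parallel:::selectChildren` and the `SIGUSR1` signal are assumptions for illustration):

```r
# Sketch only: the daemon (parent) reaps exited worker children itself instead of
# relying on each child to fully terminate on its own.
terminateExitedChildren <- function() {
  # PIDs of forked children that have something to report; timeout = 0 so we never block.
  children <- parallel:::selectChildren(timeout = 0)
  if (is.integer(children)) {
    lapply(children, function(child) {
      # readChild() returns raw bytes if the child sent data manually (discard and
      # check again on the next loop), or an integer PID if the child terminated.
      data <- parallel:::readChild(child)
      if (is.integer(data) && child == data) {
        # The child has exited; terminate it properly from the parent.
        tools::pskill(child, tools::SIGUSR1)
      }
    })
  }
  invisible(NULL)
}
```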

## How was this patch tested?

Manual tests and Jenkins tests.

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #18465 from HyukjinKwon/SPARK-21093-retry-1.
2017-07-08 14:24:37 -07:00
| Name | Latest commit | Date |
|------|---------------|------|
| pkg | [SPARK-21093][R] Terminate R's worker processes in the parent of R's daemon to prevent a leak | 2017-07-08 14:24:37 -07:00 |
| .gitignore | [MINOR][R] add SparkR.Rcheck/ and SparkR_*.tar.gz to R/.gitignore | 2016-08-21 10:31:25 -07:00 |
| check-cran.sh | [SPARK-20123][BUILD] SPARK_HOME variable might have spaces in it(e.g. $SPARK… | 2017-04-02 15:31:13 +01:00 |
| CRAN_RELEASE.md | [SPARK-18590][SPARKR] build R source package when making distribution | 2016-12-08 11:29:31 -08:00 |
| create-docs.sh | [SPARK-20123][BUILD] SPARK_HOME variable might have spaces in it(e.g. $SPARK… | 2017-04-02 15:31:13 +01:00 |
| create-rd.sh | [SPARK-20123][BUILD] SPARK_HOME variable might have spaces in it(e.g. $SPARK… | 2017-04-02 15:31:13 +01:00 |
| DOCUMENTATION.md | [MINOR][R][DOC] Fix R documentation generation instruction. | 2016-06-05 13:03:02 -07:00 |
| find-r.sh | [SPARK-18828][SPARKR] Refactor scripts for R | 2017-01-16 13:49:12 -08:00 |
| install-dev.bat | [SPARK-10500][SPARKR] sparkr.zip cannot be created if /R/lib is unwritable | 2015-11-15 19:29:09 -08:00 |
| install-dev.sh | [SPARK-20123][BUILD] SPARK_HOME variable might have spaces in it(e.g. $SPARK… | 2017-04-02 15:31:13 +01:00 |
| install-source-package.sh | [SPARK-20123][BUILD] SPARK_HOME variable might have spaces in it(e.g. $SPARK… | 2017-04-02 15:31:13 +01:00 |
| log4j.properties | [SPARK-8350] [R] Log R unit test output to "unit-tests.log" | 2015-06-15 08:16:22 -07:00 |
| README.md | [MINOR][DOCS] Improve Running R Tests docs | 2017-06-16 11:03:54 +01:00 |
| run-tests.sh | [SPARK-20543][SPARKR] skip tests when running on CRAN | 2017-05-03 21:40:18 -07:00 |
| WINDOWS.md | [MINOR][DOCS] Improve Running R Tests docs | 2017-06-16 11:03:54 +01:00 |

# R on Spark

SparkR is an R package that provides a light-weight frontend to use Spark from R.

## Installing sparkR

Libraries of sparkR need to be created in $SPARK_HOME/R/lib. This can be done by running the script $SPARK_HOME/R/install-dev.sh. By default the above script uses the system-wide installation of R. However, this can be changed to any user-installed location of R by setting the environment variable R_HOME to the full path of the base directory where R is installed, before running the install-dev.sh script. Example:

```sh
# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and Rscript
export R_HOME=/home/username/R
./install-dev.sh
```

## SparkR development

### Build Spark

Build Spark with Maven and include the -Psparkr profile to build the R package. For example, to use the default Hadoop versions you can run

```sh
build/mvn -DskipTests -Psparkr package
```

### Running sparkR

You can start using SparkR by launching the SparkR shell with

```sh
./bin/sparkR
```

The sparkR script automatically creates a SparkContext with Spark by default in local mode. To specify the Spark master of a cluster for the automatically created SparkContext, you can run

```sh
./bin/sparkR --master "local[2]"
```

To set other options like driver memory and executor memory, you can pass the spark-submit arguments to ./bin/sparkR.
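
For instance (the memory sizes below are arbitrary placeholders), the extra options are passed on the same command line:

```sh
./bin/sparkR --driver-memory 2g --executor-memory 2g
```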

### Using SparkR from RStudio

If you wish to use SparkR from RStudio or other R frontends, you will need to set some environment variables that point SparkR to your Spark installation. For example:

```r
# Set this to where Spark is installed
Sys.setenv(SPARK_HOME="/Users/username/spark")
# This line loads SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sparkR.session()
```

### Making changes to SparkR

The instructions for making contributions to Spark also apply to SparkR. If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using R/install-dev.sh and test your changes. Once you have made your changes, please include unit tests for them and run existing unit tests using the R/run-tests.sh script as described below.
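
For example, a typical edit-and-test cycle for R-only changes might look like this when run from the Spark root directory:

```sh
# Re-build and install the SparkR package after editing the R sources
R/install-dev.sh
# Run the SparkR unit tests
R/run-tests.sh
```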

### Generating documentation

The SparkR documentation (Rd files and HTML files) is not part of the source repository. To generate it you can run the script R/create-docs.sh. This script uses devtools and knitr to generate the docs, and these packages need to be installed on the machine before using the script. Also, you may need to install additional prerequisites; see R/DOCUMENTATION.md.
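
For example, the R packages that the script relies on could be installed from an R session first (the CRAN mirror below is only an example):

```r
# Install the packages used by R/create-docs.sh
install.packages(c("devtools", "knitr"), repos = "https://cloud.r-project.org")
```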

## Examples, Unit tests

SparkR comes with several sample programs in the examples/src/main/r directory. To run one of them, use ./bin/spark-submit <filename> <args>. For example:

```sh
./bin/spark-submit examples/src/main/r/dataframe.R
```

You can run R unit tests by following the instructions under Running R Tests.

## Running on YARN

The ./bin/spark-submit script can also be used to submit jobs to YARN clusters. You will need to set the YARN conf dir before doing so. For example, on CDH you can run

```sh
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```