hyukjinkwon 08e0d033b4 [SPARK-21093][R] Terminate R's worker processes in the parent of R's daemon to prevent a leak
## What changes were proposed in this pull request?

This is a retry of #18320, which was reverted due to unexpected test failures with a -10 error code.

I was unable to reproduce the failures on macOS, CentOS or Ubuntu; they appeared only on Jenkins. So, the tests there were used to verify this, and the past try was reverted in https://github.com/apache/spark/pull/18456.

This new approach was tested in https://github.com/apache/spark/pull/18463.

**Test results**:

- With the suspicious part of the change from the past try (466325d3fd)

  The tests ran 4 times: 2 passed and 2 failed.

- Without the suspicious part of the change from the past try (466325d3fd)

  The tests ran 5 times and all passed.

- With this new approach (0a7589c09f)

  The tests ran 5 times and all passed.

It looks like the cause is the change below (see 466325d3fd):

```diff
+ exitCode <- 1
...
+   data <- parallel:::readChild(child)
+   if (is.raw(data)) {
+     if (unserialize(data) == exitCode) {
      ...
+     }
+   }

...

- parallel:::mcexit(0L)
+ parallel:::mcexit(0L, send = exitCode)
```

I think there are two possibilities:

 - `parallel:::mcexit(..., send = exitCode)`

   https://stat.ethz.ch/R-manual/R-devel/library/parallel/html/mcfork.html

   > It sends send to the master (unless NULL) and then shuts down the child process.

   However, it looks possible that the parent attempts to terminate the child right after receiving our custom exit code, so the child gets terminated between "send" and "shuts down" and fails to exit properly (see the sketch after this list).

 - A bug in the interaction between `parallel:::mcexit(..., send = ...)` and `parallel:::readChild`.
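
For illustration, here is a minimal sketch of the past-try handshake and the suspected race. The names `exitCode` and `child` mirror the diff above but are placeholders, not the actual SparkR daemon code; `parallel:::mcfork`, `parallel:::mcexit` and `parallel:::readChild` are internal, Unix-only APIs.

```r
# A hedged sketch of the past-try pattern (466325d3fd), not the actual
# SparkR daemon code. Unix-only: parallel:::mcfork() does not work on
# Windows.
exitCode <- 1L
child <- parallel:::mcfork()

if (inherits(child, "masterProcess")) {
  # Child process: do the work, then send the custom exit code to the
  # parent and shut down.
  parallel:::mcexit(0L, send = exitCode)
}

# Parent process: wait until the custom exit code arrives.
repeat {
  data <- parallel:::readChild(child)
  if (is.raw(data) && identical(unserialize(data), exitCode)) {
    # The suspected race: if the parent now terminates the child, the
    # signal can land between the child's "send" and its "shut down".
    break
  }
  if (is.null(data)) break  # the child no longer exists
}
```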

**Proposal**:

To resolve this, I simply decided to avoid both possibilities with this new approach here (9ff89a7859). To support this idea, I quote the documentation below:

https://stat.ethz.ch/R-manual/R-devel/library/parallel/html/mcfork.html

> `readChild` and `readChildren` return a raw vector with a "pid" attribute if data were available, an integer vector of length one with the process ID if a child terminated or `NULL` if the child no longer exists (no children at all for `readChildren`).

`readChild` returns "an integer vector of length one with the process ID if a child terminated", so we can check whether the returned value is an `integer` equal to the selected process ID. I believe this makes sure that the children have exited.

In case a child happens to send any data to the parent manually (which is why we introduced the suspicious part of the change (466325d3fd) in the first place), that data will be raw bytes and will be discarded; the loop then reads the next value and checks whether it is an `integer`.
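
As a rough illustration of that check (again, `child` and `pid` are placeholder names under the same assumptions as the sketch above, not the exact daemon code), the parent-side wait could look like this:

```r
# A minimal sketch of the new approach: the parent treats an integer
# result from readChild() as proof that the selected child terminated.
# 'child' is assumed to be the process object from parallel:::mcfork()
# and 'pid' its process ID.
pid <- child$pid
repeat {
  data <- parallel:::readChild(child)
  if (is.null(data)) {
    break  # the child no longer exists at all
  }
  if (is.integer(data) && identical(data[[1L]], pid)) {
    break  # readChild returned the PID: the child has terminated
  }
  # Otherwise 'data' is a raw vector the child sent manually; discard
  # it and read again in the next iteration.
}
```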

## How was this patch tested?

Manual tests and Jenkins tests.

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #18465 from HyukjinKwon/SPARK-21093-retry-1.
2017-07-08 14:24:37 -07:00