[SPARK-20192][SPARKR][DOC] SparkR migration guide to 2.2.0
## What changes were proposed in this pull request?

Updating R Programming Guide

## How was this patch tested?

manually

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #17816 from felixcheung/r22relnote.
This commit is contained in:
parent 943a684b98
commit d20a976e89
@@ -644,3 +644,11 @@ You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-ma
## Upgrading to SparkR 2.1.0
- `join` no longer performs Cartesian Product by default, use `crossJoin` instead.
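A minimal sketch of the change, assuming a local Spark installation and a running SparkSession (the data frames below are illustrative):

```r
library(SparkR)
sparkR.session()  # assumes Spark is available locally

df1 <- createDataFrame(data.frame(x = 1:3))
df2 <- createDataFrame(data.frame(y = c("a", "b")))

# Prior to 2.1.0, join() without a join expression silently produced a
# Cartesian product. It now must be requested explicitly:
cart <- crossJoin(df1, df2)
count(cart)  # 3 x 2 = 6 rows
```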
## Upgrading to SparkR 2.2.0
- A `numPartitions` parameter has been added to `createDataFrame` and `as.DataFrame`. When splitting the data, the partition positions are now calculated to match the Scala implementation.
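A brief sketch of the new parameter, assuming a running SparkSession:

```r
library(SparkR)
sparkR.session()

# Request an explicit number of partitions when converting a local
# data.frame; the split positions now match the Scala side.
df <- createDataFrame(data.frame(a = 1:100), numPartitions = 4)
getNumPartitions(df)
```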
- The method `createExternalTable` has been deprecated in favor of `createTable`. Either method can be called to create an external or managed table. Additional catalog methods have also been added.
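An illustrative sketch of the replacement, assuming a running SparkSession (the table name, path, and source below are hypothetical):

```r
library(SparkR)
sparkR.session()

# Deprecated:
# createExternalTable("people", path = "/tmp/people.json", source = "json")

# Replacement: with a path this creates an external table;
# without one, a managed table.
createTable("people", path = "/tmp/people.json", source = "json")

# Two of the newly added catalog methods:
listDatabases()
listTables()
```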
- By default, `derby.log` is now saved to `tempdir()`. It is created when the SparkSession is instantiated with `enableHiveSupport` set to `TRUE`.
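A sketch, assuming Hive support is available locally (the exact file location within the temporary directory is illustrative):

```r
library(SparkR)

# derby.log now defaults to the R session's temporary directory:
sparkR.session(enableHiveSupport = TRUE)
file.exists(file.path(tempdir(), "derby.log"))
```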
- `spark.lda` was not setting the optimizer correctly. It has been corrected.
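For reference, the optimizer can also be set explicitly (a sketch assuming a running SparkSession and a hypothetical SparkDataFrame `corpus` of featurized documents):

```r
# `corpus` is a hypothetical SparkDataFrame; optimizer may be
# "online" or "em".
model <- spark.lda(corpus, k = 4, optimizer = "em")
```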
- Several model summary outputs have been updated to report `coefficients` as a `matrix`; this includes `spark.logit`, `spark.kmeans`, and `spark.glm`. The model summary output for `spark.gaussianMixture` now includes the log-likelihood as `loglik`.
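A sketch of the updated summary shape, assuming a running SparkSession:

```r
library(SparkR)
sparkR.session()

# SparkR renames dotted column names (e.g. Sepal.Length -> Sepal_Length).
df <- createDataFrame(iris)
model <- spark.glm(df, Sepal_Length ~ Sepal_Width, family = "gaussian")
s <- summary(model)

# coefficients is now returned as a matrix rather than a data.frame:
is.matrix(s$coefficients)
```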