## What changes were proposed in this pull request?
Fixes for R doc
## How was this patch tested?
N/A
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#15589 from felixcheung/rdocmergefix.
(cherry picked from commit 0e0d83a597)
Signed-off-by: Felix Cheung <felixcheung@apache.org>
## What changes were proposed in this pull request?
NA date values are serialized as "NA" and NA time values are serialized as NaN from R. The backend did not have proper logic to deal with them; as a result we got an IllegalArgumentException for Date and a wrong value for time. This PR adds support for deserializing NA as Date and Time.
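A minimal sketch of the case this fix addresses (not from the original PR; the round trip through `createDataFrame`/`collect` is assumed to exercise the serialization path):
```r
# NA values in Date and POSIXct columns should survive the trip to the JVM and back.
df <- createDataFrame(data.frame(
  d = as.Date(c("2016-10-01", NA)),
  t = as.POSIXct(c("2016-10-01 12:00:00", NA))
))
collect(df)  # previously: IllegalArgumentException for the Date column, wrong value for time
```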
## How was this patch tested?
* [x] TODO
Author: Hossein <hossein@databricks.com>
Closes#15421 from falaki/SPARK-17811.
## What changes were proposed in this pull request?
Add `crossJoin` and do not default to a cross join if `joinExpr` is left out.
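A minimal usage sketch of the new API (example data is hypothetical):
```r
df1 <- createDataFrame(data.frame(x = 1:2))
df2 <- createDataFrame(data.frame(y = c("a", "b"), stringsAsFactors = FALSE))
crossed <- crossJoin(df1, df2)  # explicit Cartesian product: 4 rows
# join(df1, df2) without joinExpr no longer silently produces a cross join
```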
## How was this patch tested?
unit test
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#15559 from felixcheung/rcrossjoin.
## What changes were proposed in this pull request?
Fix for a bunch of test warnings that were added recently.
We need to investigate why warnings are not turning into errors.
```
Warnings -----------------------------------------------------------------------
1. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Sepal_Length instead of Sepal.Length as column name
2. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Sepal_Width instead of Sepal.Width as column name
3. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Petal_Length instead of Petal.Length as column name
4. createDataFrame uses files for large objects (test_sparkSQL.R#215) - Use Petal_Width instead of Petal.Width as column name
Consider adding
importFrom("utils", "object.size")
to your NAMESPACE file.
```
## How was this patch tested?
unit tests
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#15560 from felixcheung/rwarnings.
## What changes were proposed in this pull request?
If the R data structure that is being parallelized is larger than `INT_MAX` we use files to transfer data to the JVM. The serialization protocol mimics Python pickling. This allows us to simply call `PythonRDD.readRDDFromFile` to create the RDD.
I tested this on my MacBook. The following code works with this patch:
```R
intMax <- .Machine$integer.max
largeVec <- 1:intMax
rdd <- SparkR:::parallelize(sc, largeVec, 2)
```
## How was this patch tested?
* [x] Unit tests
Author: Hossein <hossein@databricks.com>
Closes#15375 from falaki/SPARK-17790.
## What changes were proposed in this pull request?
SQLConf is session-scoped and mutable. However, we do have the requirement for a static SQL conf, which is global and immutable, e.g. the `schemaStringThreshold` in `HiveExternalCatalog`, the flag to enable/disable hive support, the global temp view database in https://github.com/apache/spark/pull/14897.
Actually we've already implemented static SQL conf implicitly via `SparkConf`; this PR just makes it explicit and exposes it to users, so that they can see the config value via a SQL command or `SparkSession.conf`, and forbids users from setting/unsetting static SQL confs.
## How was this patch tested?
new tests in SQLConfSuite
Author: Wenchen Fan <wenchen@databricks.com>
Closes#15295 from cloud-fan/global-conf.
## What changes were proposed in this pull request?
This PR includes the changes below:
- Support `mode`/`options` in `read.parquet`, `write.parquet`, `read.orc`, `write.orc`, `read.text`, `write.text`, `read.json` and `write.json` APIs
- Support other types (logical, numeric and string) as options for `write.df`, `read.df`, `read.parquet`, `write.parquet`, `read.orc`, `write.orc`, `read.text`, `write.text`, `read.json` and `write.json`
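A hedged sketch of the extended API (file paths and option names are illustrative):
```r
# mode/options on the format-specific writers, and non-character option
# values (logical/numeric) now accepted as options.
write.parquet(df, "/tmp/people_parquet", mode = "overwrite")
csvDf <- read.df("/tmp/people.csv", source = "csv", header = TRUE, inferSchema = TRUE)
```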
## How was this patch tested?
Unit tests in `test_sparkSQL.R`/ `utils.R`.
Author: hyukjinkwon <gurwls223@gmail.com>
Closes#15239 from HyukjinKwon/SPARK-17665.
## What changes were proposed in this pull request?
The `write.df`/`read.df` APIs require a path, which is not actually always necessary in Spark. Currently this only affects data sources implementing `CreatableRelationProvider`. Spark does not yet have internal data sources implementing this, but it would affect other external data sources.
In addition we'd be able to use this way in Spark's JDBC datasource after https://github.com/apache/spark/pull/12601 is merged.
**Before**
- `read.df`
```r
> read.df(source = "json")
Error in dispatchFunc("read.df(path = NULL, source = NULL, schema = NULL, ...)", :
argument "x" is missing with no default
```
```r
> read.df(path = c(1, 2))
Error in dispatchFunc("read.df(path = NULL, source = NULL, schema = NULL, ...)", :
argument "x" is missing with no default
```
```r
> read.df(c(1, 2))
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.String
at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:300)
at
...
In if (is.na(object)) { :
...
```
- `write.df`
```r
> write.df(df, source = "json")
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘write.df’ for signature ‘"function", "missing"’
```
```r
> write.df(df, source = c(1, 2))
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘write.df’ for signature ‘"SparkDataFrame", "missing"’
```
```r
> write.df(df, mode = TRUE)
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘write.df’ for signature ‘"SparkDataFrame", "missing"’
```
**After**
- `read.df`
```r
> read.df(source = "json")
Error in loadDF : analysis error - Unable to infer schema for JSON at . It must be specified manually;
```
```r
> read.df(path = c(1, 2))
Error in f(x, ...) : path should be charactor, null or omitted.
```
```r
> read.df(c(1, 2))
Error in f(x, ...) : path should be charactor, null or omitted.
```
- `write.df`
```r
> write.df(df, source = "json")
Error in save : illegal argument - 'path' is not specified
```
```r
> write.df(df, source = c(1, 2))
Error in .local(df, path, ...) :
source should be charactor, null or omitted. It is 'parquet' by default.
```
```r
> write.df(df, mode = TRUE)
Error in .local(df, path, ...) :
mode should be charactor or omitted. It is 'error' by default.
```
## How was this patch tested?
Unit tests in `test_sparkSQL.R`
Author: hyukjinkwon <gurwls223@gmail.com>
Closes#15231 from HyukjinKwon/write-default-r.
## What changes were proposed in this pull request?
Additional options were not passed down in `write.df`.
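A minimal sketch of the fixed behavior (path and option name are illustrative):
```r
# source-specific options such as "header" should now reach the underlying writer
write.df(df, path = "/tmp/people_csv", source = "csv", header = "true")
```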
## How was this patch tested?
unit tests
falaki shivaram
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#15010 from felixcheung/testreadoptions.
## What changes were proposed in this pull request?
Fixed bug in `dapplyCollect` by changing the `compute` function of `worker.R` to explicitly handle raw (binary) vectors.
cc shivaram
## How was this patch tested?
Unit tests
Author: Clark Fitzgerald <clarkfitzg@gmail.com>
Closes#14783 from clarkfitzg/SPARK-16785.
## What changes were proposed in this pull request?
Add sparkR.version() API.
```
> sparkR.version()
[1] "2.1.0-SNAPSHOT"
```
## How was this patch tested?
manual, unit tests
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#14935 from felixcheung/rsparksessionversion.
## What changes were proposed in this pull request?
```r
registerTempTable(createDataFrame(iris), "iris")
str(collect(sql("select cast('1' as double) as x, cast('2' as decimal) as y from iris limit 5")))
'data.frame': 5 obs. of 2 variables:
 $ x: num 1 1 1 1 1
 $ y:List of 5
  ..$ : num 2
  ..$ : num 2
  ..$ : num 2
  ..$ : num 2
  ..$ : num 2
```
The problem is that Spark returns the column type `decimal(10, 0)` instead of `decimal`, so `decimal(10, 0)` is not handled correctly. It should be handled as "double".
As discussed in JIRA thread, we can have two potential fixes:
1) A Scala-side fix: add a new case when writing the object back. However, I can't use `spark.sql.types._` in Spark core due to dependency issues, and I haven't found a way to do the type case match.
2) A SparkR-side fix: add a helper function to check special types like `"decimal(10, 0)"` and replace them with `double`, which is a PRIMITIVE type. This helper is generic, so handling for new types can be added in the future.
I open this PR to discuss pros and cons of both approaches. If we want to do Scala side fix, we need to find a way to match the case of DecimalType and StructType in Spark Core.
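A minimal sketch of approach 2) (the helper name and regex are hypothetical, for illustration only):
```r
# Map composite numeric types such as "decimal(10, 0)" to a primitive R type.
normalizeColType <- function(type) {
  if (grepl("^decimal\\(.*\\)$", type)) "double" else type
}
normalizeColType("decimal(10, 0)")  # "double"
```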
## How was this patch tested?
Manual test:
```r
> str(collect(sql("select cast('1' as double) as x, cast('2' as decimal) as y from iris limit 5")))
'data.frame': 5 obs. of 2 variables:
 $ x: num 1 1 1 1 1
 $ y: num 2 2 2 2 2
```
R Unit tests
Author: wm624@hotmail.com <wm624@hotmail.com>
Closes#14613 from wangmiao1981/type.
## What changes were proposed in this pull request?
Currently, `HiveContext` in SparkR is not being tested and is always skipped.
This is because the initialization of `TestHiveContext` fails while trying to load non-existent data paths (test tables).
This was introduced in https://github.com/apache/spark/pull/14005
This PR enables those tests with SparkR.
## How was this patch tested?
Manually,
**Before** (on Mac OS)
```
...
Skipped ------------------------------------------------------------------------
1. create DataFrame from RDD (test_sparkSQL.R#200) - Hive is not build with SparkSQL, skipped
2. test HiveContext (test_sparkSQL.R#1041) - Hive is not build with SparkSQL, skipped
3. read/write ORC files (test_sparkSQL.R#1748) - Hive is not build with SparkSQL, skipped
4. enableHiveSupport on SparkSession (test_sparkSQL.R#2480) - Hive is not build with SparkSQL, skipped
5. sparkJars tag in SparkContext (test_Windows.R#21) - This test is only for Windows, skipped
...
```
**After** (on Mac OS)
```
...
Skipped ------------------------------------------------------------------------
1. sparkJars tag in SparkContext (test_Windows.R#21) - This test is only for Windows, skipped
...
```
Please refer to the tests below (on Windows):
- Before: https://ci.appveyor.com/project/HyukjinKwon/spark/build/45-test123
- After: https://ci.appveyor.com/project/HyukjinKwon/spark/build/46-test123
Author: hyukjinkwon <gurwls223@gmail.com>
Closes#14889 from HyukjinKwon/SPARK-17326.
## What changes were proposed in this pull request?
Rename RDD functions for now to avoid CRAN check warnings.
Some RDD functions are sharing generics with DataFrame functions (hence the problem) so after the renames we need to add new generics, for now.
## How was this patch tested?
unit tests
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#14626 from felixcheung/rrddfunctions.
## What changes were proposed in this pull request?
Add an `install.spark` function to the SparkR package. Users can run `install.spark()` to install Spark to a local directory within R.
Updates:
Several changes have been made:
- `install.spark()`
- check existence of tar file in the cache folder, and download only if not found
- trial priority of mirror_url look-up: user-provided -> preferred mirror site from apache website -> hardcoded backup option
- use 2.0.0
- `sparkR.session()`
- can install spark when not found in `SPARK_HOME`
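A hedged usage sketch (parameter names follow the released `install.spark`, which may differ slightly from this PR):
```r
install.spark(hadoopVersion = "2.7", overwrite = FALSE)  # downloads only if not already cached
sparkR.session()  # per the PR, installs Spark when it is not found via SPARK_HOME
```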
## How was this patch tested?
Manual tests, running the check-cran.sh script added in #14173.
Author: Junyang Qian <junyangq@databricks.com>
Closes#14258 from junyangq/SPARK-16579.
## What changes were proposed in this pull request?
Fix R SparkSession init/stop, and warnings of reusing existing Spark Context
## How was this patch tested?
unit tests
shivaram
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#14177 from felixcheung/rsessiontest.
## What changes were proposed in this pull request?
More tests
I don't think this is critical for Spark 2.0.0 RC, maybe Spark 2.0.1 or 2.1.0.
## How was this patch tested?
unit tests
shivaram dongjoon-hyun
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#14206 from felixcheung/rroutetests.
## What changes were proposed in this pull request?
Fix function routing to work both with and without the namespace operator, e.g. `SparkR::createDataFrame`.
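A minimal illustration of the two call styles that should now dispatch identically:
```r
df1 <- createDataFrame(iris)          # plain call
df2 <- SparkR::createDataFrame(iris)  # with the namespace operator
```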
## How was this patch tested?
manual, unit tests
shivaram
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#14195 from felixcheung/rroutedefault.
## What changes were proposed in this pull request?
Rename window.partitionBy and window.orderBy to windowPartitionBy and windowOrderBy to pass CRAN package check.
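A minimal usage sketch of the renamed functions (column names are hypothetical):
```r
ws <- orderBy(windowPartitionBy("dept"), "salary")
ranked <- select(df, over(rank(), ws))
```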
## How was this patch tested?
SparkR unit tests.
Author: Sun Rui <sunrui2016@gmail.com>
Closes#14192 from sun-rui/SPARK-16509.
## What changes were proposed in this pull request?
Minor documentation updates for a code example, code style, and a missed reference to `sparkR.init`.
## How was this patch tested?
manual
shivaram
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#14178 from felixcheung/rcsvprogrammingguide.
## What changes were proposed in this pull request?
This PR prevents ERRORs when `summary(df)` is called on a `SparkDataFrame` with non-numeric columns. This failure happens only in SparkR.
**Before**
```r
> df <- createDataFrame(faithful)
> df <- withColumn(df, "boolean", df$waiting==79)
> summary(df)
16/07/07 14:15:16 ERROR RBackendHandler: describe on 34 failed
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :
org.apache.spark.sql.AnalysisException: cannot resolve 'avg(`boolean`)' due to data type mismatch: function average requires numeric types, not BooleanType;
```
**After**
```r
> df <- createDataFrame(faithful)
> df <- withColumn(df, "boolean", df$waiting==79)
> summary(df)
SparkDataFrame[summary:string, eruptions:string, waiting:string]
```
## How was this patch tested?
Pass the Jenkins tests with an updated test case.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#14096 from dongjoon-hyun/SPARK-16425.
## What changes were proposed in this pull request?
Apply "NA" as the default null string for R, like the `na.strings` parameter of R's `read.csv`:
https://stat.ethz.ch/R-manual/R-devel/library/utils/html/read.table.html
na.strings = "NA"
A user passing a CSV file with NA values should get the same behavior with SparkR `read.df(..., source = "csv")`.
(couldn't open JIRA, will do that later)
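A minimal sketch of the intended behavior (the parameter name follows this PR's description; the path is illustrative):
```r
# "NA" cells in the CSV come back as NA values, mirroring utils::read.csv defaults
df <- read.df("/tmp/people.csv", source = "csv", na.strings = "NA")
```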
## How was this patch tested?
unit tests
shivaram
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#13984 from felixcheung/rcsvnastring.
## What changes were proposed in this pull request?
ORC test should be enabled only when HiveContext is available.
## How was this patch tested?
Manual.
```
$ R/run-tests.sh
...
1. create DataFrame from RDD (test_sparkSQL.R#200) - Hive is not build with SparkSQL, skipped
2. test HiveContext (test_sparkSQL.R#1021) - Hive is not build with SparkSQL, skipped
3. read/write ORC files (test_sparkSQL.R#1728) - Hive is not build with SparkSQL, skipped
4. enableHiveSupport on SparkSession (test_sparkSQL.R#2448) - Hive is not build with SparkSQL, skipped
5. sparkJars tag in SparkContext (test_Windows.R#21) - This test is only for Windows, skipped
DONE ===========================================================================
Tests passed.
```
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#14019 from dongjoon-hyun/SPARK-16233.
## What changes were proposed in this pull request?
gapplyCollect() does gapply() on a SparkDataFrame and collects the result back to R. Compared to gapply() + collect(), gapplyCollect() offers a performance optimization as well as programming convenience, since no schema needs to be provided.
This is similar to dapplyCollect().
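A minimal usage sketch (using the `faithful` dataset; the key is assumed to arrive as a list, as with gapply()):
```r
df <- createDataFrame(faithful)
res <- gapplyCollect(df, "waiting", function(key, x) {
  data.frame(waiting = key[[1]], avg_eruptions = mean(x$eruptions))
})
```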
## How was this patch tested?
Added test cases for gapplyCollect similar to dapplyCollect
Author: Narine Kokhlikyan <narine@slice.com>
Closes#13760 from NarineK/gapplyCollect.
## What changes were proposed in this pull request?
This PR implements the `posexplode` table-generating function. Currently, the master branch raises the following exception for a `map` argument, which differs from Hive.
**Before**
```scala
scala> sql("select posexplode(map('a', 1, 'b', 2))").show
org.apache.spark.sql.AnalysisException: No handler for Hive UDF ... posexplode() takes an array as a parameter; line 1 pos 7
```
**After**
```scala
scala> sql("select posexplode(map('a', 1, 'b', 2))").show
+---+---+-----+
|pos|key|value|
+---+---+-----+
| 0| a| 1|
| 1| b| 2|
+---+---+-----+
```
For an `array` argument, the behavior after this change is the same as before.
```
scala> sql("select posexplode(array(1, 2, 3))").show
+---+---+
|pos|col|
+---+---+
| 0| 1|
| 1| 2|
| 2| 3|
+---+---+
```
## How was this patch tested?
Pass the Jenkins tests with newly added testcases.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13971 from dongjoon-hyun/SPARK-16289.
## What changes were proposed in this pull request?
Add unit tests for csv data for SPARKR
## How was this patch tested?
unit tests
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#13904 from felixcheung/rcsv.
## What changes were proposed in this pull request?
Allowing truncation to a specific number of characters is convenient at times, especially while operating from the REPL. Sometimes those last few characters make all the difference, and showing everything brings in a whole lot of noise.
## How was this patch tested?
Existing tests. + 1 new test in DataFrameSuite.
For SparkR and pyspark, existing tests and manual testing.
Author: Prashant Sharma <prashsh1@in.ibm.com>
Author: Prashant Sharma <prashant@apache.org>
Closes#13839 from ScrapCodes/add_truncateTo_DF.show.
## What changes were proposed in this pull request?
Add `conf` method to get Runtime Config from SparkSession
## How was this patch tested?
unit tests, manual tests
This is how it works in sparkR shell:
```
SparkSession available as 'spark'.
> conf()
$hive.metastore.warehouse.dir
[1] "file:/opt/spark-2.0.0-bin-hadoop2.6/R/spark-warehouse"
$spark.app.id
[1] "local-1466749575523"
$spark.app.name
[1] "SparkR"
$spark.driver.host
[1] "10.0.2.1"
$spark.driver.port
[1] "45629"
$spark.executorEnv.LD_LIBRARY_PATH
[1] "$LD_LIBRARY_PATH:/usr/lib/R/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/default-java/jre/lib/amd64/server"
$spark.executor.id
[1] "driver"
$spark.home
[1] "/opt/spark-2.0.0-bin-hadoop2.6"
$spark.master
[1] "local[*]"
$spark.sql.catalogImplementation
[1] "hive"
$spark.submit.deployMode
[1] "client"
> conf("spark.master")
$spark.master
[1] "local[*]"
```
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#13885 from felixcheung/rconf.
## What changes were proposed in this pull request?
Add `union` and deprecate `unionAll`; separate the roxygen2 doc for `rbind` (since their usage and parameter lists are quite different).
`explode` is also deprecated - but it seems its replacement is a combination of calls; not sure if we should deprecate it in SparkR yet.
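A minimal sketch of the renamed API (input frames are assumed to have matching schemas):
```r
combined <- union(df1, df2)    # new name
stacked <- rbind(df1, df2)     # rbind keeps its own documentation and signature
# unionAll(df1, df2)           # still works, with a deprecation warning
```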
## How was this patch tested?
unit tests, manual checks for r doc
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#13805 from felixcheung/runion.
## What changes were proposed in this pull request?
This PR adds a `pivot` function to SparkR for API parity. Since this PR is based on https://github.com/apache/spark/pull/13295, mhnatiuk should be credited for the work he did.
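A hedged usage sketch (column names are hypothetical; `sum` on grouped data is assumed available as in SparkR's group methods):
```r
gdf <- groupBy(df, "year")
collect(sum(pivot(gdf, "course", c("dotNET", "Java")), "earnings"))
```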
## How was this patch tested?
Pass the Jenkins tests (including new testcase.)
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13786 from dongjoon-hyun/SPARK-15294.
## What changes were proposed in this pull request?
This PR adds `spark_partition_id` virtual column function in SparkR for API parity.
The following is just an example to illustrate a SparkR usage on a partitioned parquet table created by `spark.range(10).write.mode("overwrite").parquet("/tmp/t1")`.
```r
> collect(select(read.parquet('/tmp/t1'), c('id', spark_partition_id())))
id SPARK_PARTITION_ID()
1 3 0
2 4 0
3 8 1
4 9 1
5 0 2
6 1 3
7 2 4
8 5 5
9 6 6
10 7 7
```
## How was this patch tested?
Pass the Jenkins tests (including new testcase).
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13768 from dongjoon-hyun/SPARK-16053.
## What changes were proposed in this pull request?
This issue adds `read.orc/write.orc` to SparkR for API parity.
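A minimal usage sketch (ORC support requires a Hive-enabled build; the path is illustrative):
```r
write.orc(df, "/tmp/people_orc")
orcDf <- read.orc("/tmp/people_orc")
```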
## How was this patch tested?
Pass the Jenkins tests (with new testcases).
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13763 from dongjoon-hyun/SPARK-16051.
## What changes were proposed in this pull request?
Add dropTempView and deprecate dropTempTable
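A minimal sketch of the renamed API:
```r
df <- createDataFrame(faithful)
createOrReplaceTempView(df, "faithful_view")
dropTempView("faithful_view")     # new API
# dropTempTable("faithful_view")  # still works, with a deprecation warning
```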
## How was this patch tested?
unit tests
shivaram liancheng
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#13753 from felixcheung/rdroptempview.
## What changes were proposed in this pull request?
This PR adds `monotonically_increasing_id` column function in SparkR for API parity.
After this PR, SparkR supports the following.
```r
> df <- read.json("examples/src/main/resources/people.json")
> collect(select(df, monotonically_increasing_id(), df$name, df$age))
monotonically_increasing_id() name age
1 0 Michael NA
2 1 Andy 30
3 2 Justin 19
```
## How was this patch tested?
Pass the Jenkins tests (with added testcase).
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13774 from dongjoon-hyun/SPARK-16059.
## What changes were proposed in this pull request?
This PR introduces the new SparkSession API for SparkR.
`sparkR.session.getOrCreate()` and `sparkR.session.stop()`
"getOrCreate" is a bit unusual in R but it's important to name this clearly.
SparkR implementation should
- SparkSession is the main entrypoint (vs SparkContext; due to limited functionality supported with SparkContext in SparkR)
- SparkSession replaces SQLContext and HiveContext (both are wrappers around SparkSession, and because of API changes, supporting all 3 would be a lot more work)
- Changes to SparkSession are mostly transparent to users due to SPARK-10903
- Full backward compatibility is expected - users should be able to initialize everything just as in Spark 1.6.1 (`sparkR.init()`), but with a deprecation warning
- Mostly cosmetic changes to parameter list - users should be able to move to `sparkR.session.getOrCreate()` easily
- An advanced syntax with named parameters (aka varargs aka "...") is supported; that should be closer to the Builder syntax in Scala/Python (which unfortunately does not work in R because it would look like this: `enableHiveSupport(config(config(master(appName(builder(), "foo"), "local"), "first", "value"), "next", "value"))`)
- Updating config on an existing SparkSession is supported, the behavior is the same as Python, in which config is applied to both SparkContext and SparkSession
- Some SparkSession changes are not matched in SparkR, mostly because they would be breaking API changes: `catalog` object, `createOrReplaceTempView`
- Other SQLContext workarounds are replicated in SparkR, eg. `tables`, `tableNames`
- `sparkR` shell is updated to use the SparkSession entrypoint (`sqlContext` is removed, just like with Scala/Python)
- All tests are updated to use the SparkSession entrypoint
- A bug in `read.jdbc` is fixed
TODO
- [x] Add more tests
- [ ] Separate PR - update all roxygen2 doc coding example
- [ ] Separate PR - update SparkR programming guide
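A minimal usage sketch of the entry point as named in this PR (the released API ended up slightly different, so the exact names and parameters here are assumptions):
```r
sparkR.session.getOrCreate(master = "local[*]", appName = "example",
                           enableHiveSupport = TRUE)
df <- createDataFrame(iris)
sparkR.session.stop()
```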
## How was this patch tested?
unit tests, manual tests
shivaram sun-rui rxin
Author: Felix Cheung <felixcheung_m@hotmail.com>
Author: felixcheung <felixcheung_m@hotmail.com>
Closes#13635 from felixcheung/rsparksession.
## What changes were proposed in this pull request?
This PR adds `randomSplit` to SparkR for API parity.
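A minimal usage sketch:
```r
df <- createDataFrame(data.frame(id = 1:100))
parts <- randomSplit(df, c(0.7, 0.3), seed = 42)  # list of two SparkDataFrames
nrow(parts[[1]]) + nrow(parts[[2]])               # 100
```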
## How was this patch tested?
Pass the Jenkins tests (with new testcase.)
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13721 from dongjoon-hyun/SPARK-16005.
## What changes were proposed in this pull request?
Add `registerTempTable` to DataFrame, with a deprecation warning.
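A minimal sketch of the deprecated-but-available call:
```r
registerTempTable(df, "people")  # works, but emits a deprecation warning
head(sql("SELECT * FROM people"))
```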
## How was this patch tested?
unit tests
shivaram liancheng
Author: Felix Cheung <felixcheung_m@hotmail.com>
Closes#13722 from felixcheung/rregistertemptable.
## What changes were proposed in this pull request?
This PR adds a varargs-type `dropDuplicates` function to SparkR for API parity.
Refer to https://issues.apache.org/jira/browse/SPARK-15807, too.
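A minimal sketch of the varargs form (column names are hypothetical):
```r
dedup <- dropDuplicates(df, "name", "age")
```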
## How was this patch tested?
Pass the Jenkins tests with new testcases.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes#13684 from dongjoon-hyun/SPARK-15908.
## What changes were proposed in this pull request?
gapply() applies an R function to groups defined by one or more columns of a DataFrame, and returns a DataFrame. It is like GroupedDataSet.flatMapGroups() in the Dataset API.
Please let me know what you think and if you have any ideas to improve it.
Thank you!
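A minimal usage sketch of gapply() as described above (using the `faithful` dataset; the key is assumed to arrive as a list):
```r
df <- createDataFrame(faithful)
schema <- structType(structField("waiting", "double"),
                     structField("avg_eruptions", "double"))
result <- gapply(df, "waiting", function(key, x) {
  data.frame(waiting = key[[1]], avg_eruptions = mean(x$eruptions))
}, schema)
```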
## How was this patch tested?
Unit tests.
1. Primitive test with different column types
2. Add a boolean column
3. Compute average by a group
Author: Narine Kokhlikyan <narine.kokhlikyan@gmail.com>
Author: NarineK <narine.kokhlikyan@us.ibm.com>
Closes#12836 from NarineK/gapply2.
## What changes were proposed in this pull request?
This PR replaces `registerTempTable` with `createOrReplaceTempView` as a follow-up task of #12945.
## How was this patch tested?
Existing SparkR tests.
Author: Cheng Lian <lian@databricks.com>
Closes#13644 from liancheng/spark-15925-temp-view-for-r.
## What changes were proposed in this pull request?
In R 3.3.0, startsWith and endsWith were added. In this PR, I make the two work in SparkR.
1. Remove signature in generic.R
2. Add setMethod in column.R
3. Add unit tests
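A minimal usage sketch of the Column methods:
```r
df <- createDataFrame(data.frame(name = c("Andy", "Michael"), stringsAsFactors = FALSE))
filtered <- filter(df, startsWith(df$name, "An"))
```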
## How was this patch tested?
Manually test it through SparkR shell for both column data and string data, which are added into the unit test file.
Author: wm624@hotmail.com <wm624@hotmail.com>
Closes#13476 from wangmiao1981/start.
Eliminate the need to pass sqlContext to methods since it is a singleton - and we don't want to support multiple contexts in an R session.
Changes are done in a backward-compatible way with a deprecation warning added. Method signatures for S3 methods are added in a concise, clean approach such that in the next release the deprecated signatures can be taken out easily/cleanly (just delete a few lines per method).
Custom method dispatch is implemented to allow for multiple JVM reference types that are all 'jobj' in R and to avoid having to add 30 new exports.
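A minimal sketch of the old and new call styles described above:
```r
df <- createDataFrame(iris)              # new style, no sqlContext argument
df2 <- createDataFrame(sqlContext, iris) # old style, still works with a deprecation warning
```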
Author: felixcheung <felixcheung_m@hotmail.com>
Closes#9192 from felixcheung/rsqlcontext.
## What changes were proposed in this pull request?
In Hive, `locate("aa", "aaa", 0)` would yield 0, `locate("aa", "aaa", 1)` would yield 1 and `locate("aa", "aaa", 2)` would yield 2, while in Spark, `locate("aa", "aaa", 0)` would yield 1, `locate("aa", "aaa", 1)` would yield 2 and `locate("aa", "aaa", 2)` would yield 0. This results from a different understanding of the third parameter of the `locate` UDF: it is the starting index and starts from 1, so when we use 0, the return should always be 0.
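A minimal check of the corrected semantics via SparkR's SQL interface (the expected values follow the Hive behavior quoted above):
```r
collect(sql("SELECT locate('aa', 'aaa', 0) AS a, locate('aa', 'aaa', 1) AS b, locate('aa', 'aaa', 2) AS c"))
# expected after this change: a = 0, b = 1, c = 2
```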
## How was this patch tested?
tested with modified `StringExpressionsSuite` and `StringFunctionsSuite`
Author: Daoyuan Wang <daoyuan.wang@intel.com>
Closes#13186 from adrian-wang/locate.
## What changes were proposed in this pull request?
dapplyCollect() applies an R function on each partition of a SparkDataFrame and collects the result back to R as a data.frame.
```
dapplyCollect(df, function(ldf) {...})
```
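A concrete sketch of the signature above (using the `faithful` dataset):
```r
df <- createDataFrame(faithful)
localDf <- dapplyCollect(df, function(x) {
  cbind(x, waiting_secs = x$waiting * 60)
})
```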
## How was this patch tested?
SparkR unit tests.
Author: Sun Rui <sunrui2016@gmail.com>
Closes#12989 from sun-rui/SPARK-15202.
This PR:
1. Implement WindowSpec S4 class.
2. Implement Window.partitionBy() and Window.orderBy() as utility functions to create WindowSpec objects.
3. Implement over() of Column class.
Author: Sun Rui <rui.sun@intel.com>
Author: Sun Rui <sunrui2016@gmail.com>
Closes#10094 from sun-rui/SPARK-11395.
## What changes were proposed in this pull request?
Implement repartitionByColumn on DataFrame.
This will allow us to run R functions on each partition identified by column groups with dapply() method.
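A minimal sketch of the intended flow (the grouping column is hypothetical; the exact partition-to-group mapping is an assumption):
```r
df <- createDataFrame(mtcars)
df2 <- repartition(df, col = df$cyl)
counts <- dapplyCollect(df2, function(x) { data.frame(n = nrow(x)) })
```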
## How was this patch tested?
Unit tests
Author: NarineK <narine.kokhlikyan@us.ibm.com>
Closes#12887 from NarineK/repartitionByColumns.
## What changes were proposed in this pull request?
Fix warnings and a failure in SparkR test cases with testthat version 1.0.1
## How was this patch tested?
SparkR unit test cases.
Author: Sun Rui <sunrui2016@gmail.com>
Closes#12867 from sun-rui/SPARK-15091.
## What changes were proposed in this pull request?
dapply() applies an R function on each partition of a DataFrame and returns a new DataFrame.
The function signature is:
dapply(df, function(localDF) {}, schema = NULL)
R function input: local data.frame from the partition on local node
R function output: local data.frame
Schema specifies the Row format of the resulting DataFrame. It must match the R function's output.
If schema is not specified, each partition of the resulting DataFrame will be serialized in R into a single byte array. Such a DataFrame can be processed by successive calls to dapply().
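A concrete sketch of the signature above (using the `faithful` dataset; the schema must describe the function's output):
```r
df <- createDataFrame(faithful)
schema <- structType(structField("eruptions", "double"),
                     structField("waiting", "double"),
                     structField("waiting_secs", "double"))
df2 <- dapply(df, function(x) { cbind(x, x$waiting * 60) }, schema)
```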
## How was this patch tested?
SparkR unit tests.
Author: Sun Rui <rui.sun@intel.com>
Author: Sun Rui <sunrui2016@gmail.com>
Closes#12493 from sun-rui/SPARK-12919.
Make the behavior of mutate more consistent with that in dplyr, in addition to supporting replacement of existing columns.
1. Throw an error message when there are duplicated column names in the DataFrame being mutated.
2. When there are duplicated column names among the columns specified in the arguments, the last column of the same name takes effect.
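A minimal illustration of rule 2 (data set and expressions are hypothetical):
```r
df <- createDataFrame(faithful)
df2 <- mutate(df, waiting = df$waiting * 2, waiting = df$waiting + 10)
# the resulting 'waiting' column is df$waiting + 10 (the last definition wins)
```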
Author: Sun Rui <rui.sun@intel.com>
Closes#10220 from sun-rui/SPARK-12235.