# R on Spark
SparkR is an R package that provides a light-weight frontend to use Spark from R.
### Installing sparkR
Libraries of sparkR need to be created in `$SPARK_HOME/R/lib`. This can be done by running the script `$SPARK_HOME/R/install-dev.sh`.

By default the above script uses the system-wide installation of R. However, this can be changed to any user-installed location of R by setting the environment variable `R_HOME` to the full path of the base directory where R is installed, before running the `install-dev.sh` script.
Example:

```bash
# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and Rscript
export R_HOME=/home/username/R
./install-dev.sh
```
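To sanity-check the install, you can try loading the package from the freshly built lib directory. This is a minimal sketch, assuming `SPARK_HOME` is set to the root of your Spark checkout:

```r
# Load SparkR from the library directory created by install-dev.sh
library(SparkR, lib.loc = file.path(Sys.getenv("SPARK_HOME"), "R", "lib"))
```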
### SparkR development
#### Build Spark
Build Spark with Maven and include the `-Psparkr` profile to build the R package. For example, to use the default Hadoop versions you can run

```bash
./build/mvn -DskipTests -Psparkr package
```
#### Running sparkR
You can start using SparkR by launching the SparkR shell with

```bash
./bin/sparkR
```

The `sparkR` script automatically creates a SparkContext with Spark by default in local mode. To specify the Spark master of a cluster for the automatically created SparkContext, you can run

```bash
./bin/sparkR --master "local[2]"
```
To set other options, such as driver memory or executor memory, you can pass the spark-submit arguments to `./bin/sparkR`.
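Once the shell is up, SparkR is attached and a context has been created for you, so you can experiment directly. A minimal sketch (assuming a recent Spark, where the shell initializes a session) using R's built-in `faithful` dataset; any local data frame works:

```r
# Convert a local R data frame into a distributed SparkDataFrame
df <- as.DataFrame(faithful)

# Inspect the first few rows and count the total
head(df)
count(df)
```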
#### Using SparkR from RStudio
If you wish to use SparkR from RStudio, please refer to the SparkR documentation.
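As a rough sketch of what that setup typically looks like (the `SPARK_HOME` path below is a placeholder for your own installation):

```r
# Point R at the Spark installation if SPARK_HOME is not already set
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
  Sys.setenv(SPARK_HOME = "/home/username/spark")  # placeholder path
}

# Load SparkR from the lib directory built by install-dev.sh
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))

# Start a local SparkR session
sparkR.session(master = "local[*]")
```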
#### Making changes to SparkR
The instructions for making contributions to Spark also apply to SparkR. If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes. Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.
#### Generating documentation
The SparkR documentation (Rd files and HTML files) is not a part of the source repository. To generate it you can run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs, and these packages need to be installed on the machine before using the script. Also, you may need to install these prerequisites. See also `R/DOCUMENTATION.md`.
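One way to install the two R packages from an R prompt (the CRAN mirror URL here is just an example):

```r
# Install the documentation toolchain from CRAN
install.packages(c("devtools", "knitr"), repos = "https://cloud.r-project.org")
```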
### Examples, Unit tests
SparkR comes with several sample programs in the `examples/src/main/r` directory. To run one of them, use `./bin/spark-submit <filename> <args>`. For example:

```bash
./bin/spark-submit examples/src/main/r/dataframe.R
```
You can run R unit tests by following the instructions under Running R Tests.
### Running on YARN
The `./bin/spark-submit` script can also be used to submit jobs to YARN clusters. You will need to set the YARN conf dir before doing so. For example, on CDH you can run

```bash
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```