# R on Spark

SparkR is an R package that provides a light-weight frontend to use Spark from R.
### Installing sparkR

Libraries of sparkR need to be created in `$SPARK_HOME/R/lib`. This can be done by running the script `$SPARK_HOME/R/install-dev.sh`.

By default the above script uses the system-wide installation of R. However, this can be changed to any user-installed location of R by setting the environment variable `R_HOME` to the full path of the base directory where R is installed, before running the `install-dev.sh` script.
Example:
```bash
# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and Rscript
export R_HOME=/home/username/R
./install-dev.sh
```
### SparkR development

#### Build Spark

Build Spark with Maven and include the `-Psparkr` profile to build the R package. For example, to use the default Hadoop versions you can run:

```bash
build/mvn -DskipTests -Psparkr package
```
#### Running sparkR

You can start using SparkR by launching the SparkR shell with:

```bash
./bin/sparkR
```

The `sparkR` script automatically creates a SparkContext, in local mode by default. To specify the Spark master for the automatically created SparkContext, you can run:

```bash
./bin/sparkR --master "local[2]"
```

To set other options, such as driver memory or executor memory, you can pass in the spark-submit arguments to `./bin/sparkR`.
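If you create the context yourself from R (as in the RStudio section below), a similar effect can be achieved by passing Spark properties to `sparkR.init`. A minimal sketch, assuming the 1.x-style `sparkR.init` API used in this README; the `2g` value is an arbitrary example:

```R
# Illustrative only: pass Spark driver properties via sparkEnvir when
# creating the SparkContext from R. The memory value is an arbitrary example.
sc <- sparkR.init(master = "local[2]",
                  sparkEnvir = list(spark.driver.memory = "2g"))
```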
#### Using SparkR from RStudio

If you wish to use SparkR from RStudio or other R frontends, you will need to set some environment variables which point SparkR to your Spark installation. For example:

```R
# Set this to where Spark is installed
Sys.setenv(SPARK_HOME = "/Users/username/spark")
# This line loads SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sc <- sparkR.init(master = "local")
```
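To verify the setup, you can create a DataFrame from a built-in R dataset. A minimal smoke test, assuming the same 1.x-style API (`sparkRSQL.init`, `createDataFrame`) as the `sparkR.init` call above:

```R
# Create a SQLContext from the SparkContext, then convert the built-in
# 'faithful' dataset into a distributed SparkR DataFrame.
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, faithful)
# Fetch the first rows back to the driver.
head(df)
```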
#### Making changes to SparkR

The instructions for making contributions to Spark also apply to SparkR. If you only make R file changes (i.e. no Scala changes), you can just re-install the R package using `R/install-dev.sh` and test your changes. Once you have made your changes, please include unit tests for them and run the existing unit tests using the `R/run-tests.sh` script as described below.
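New tests follow the `testthat` conventions used by the existing suite. An illustrative skeleton only; the test name and body are hypothetical, and it assumes a running context (`sqlContext`) as created by the SparkR shell:

```R
library(testthat)

# Hypothetical test: exercise DataFrame column arithmetic end-to-end.
test_that("column arithmetic works on a DataFrame", {
  df <- createDataFrame(sqlContext, data.frame(x = 1:3))
  result <- collect(select(df, df$x + 1))
  expect_equal(result[[1]], c(2, 3, 4))
})
```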
#### Generating documentation

The SparkR documentation (Rd files and HTML files) is not a part of the source repository. To generate it, run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs, and these packages need to be installed on the machine before using the script. See also `R/DOCUMENTATION.md`.
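For example, the prerequisites can be installed from an R prompt, using the same CRAN mirror as the `testthat` installation shown below:

```R
# Install the packages used by R/create-docs.sh.
install.packages(c("devtools", "knitr"), repos = "http://cran.us.r-project.org")
```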
### Examples, Unit tests

SparkR comes with several sample programs in the `examples/src/main/r` directory. To run one of them, use `./bin/spark-submit <filename> <args>`. For example:

```bash
./bin/spark-submit examples/src/main/r/dataframe.R
```

You can also run the unit tests for SparkR with the commands below. You need to install the `testthat` package first:

```bash
R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
./R/run-tests.sh
```
### Running on YARN

The `./bin/spark-submit` script can also be used to submit jobs to YARN clusters. You will need to set the YARN conf dir before doing so. For example, on CDH you can run:

```bash
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```