[SPARK-35603][R][DOCS] Add data source options link for R API documentation

### What changes were proposed in this pull request?

The available options for every data source are documented on its Data Source Options page.

For Python, Scala, and Java, a link to the Data Source Option page has been added to each API's documentation.

- Python
<img width="732" alt="Screen Shot 2021-06-07 at 12 25 45 PM" src="https://user-images.githubusercontent.com/44108233/120955187-cbe38800-c78b-11eb-9475-ccf89bbc3c95.png">

- Scala
<img width="677" alt="Screen Shot 2021-06-07 at 12 26 41 PM" src="https://user-images.githubusercontent.com/44108233/120955186-cab25b00-c78b-11eb-9fed-3f0d2024029b.png">

- Java
<img width="726" alt="Screen Shot 2021-06-07 at 12 27 49 PM" src="https://user-images.githubusercontent.com/44108233/120955182-c8e89780-c78b-11eb-9cf1-13e41ba35b3e.png">

However, the R documentation has no such link, so this PR adds it there as well.

### Why are the changes needed?

To provide users with the available options for each data source when they read or write it.
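For context, these option pages list per-format settings that SparkR forwards through the `...` argument. A minimal sketch of passing one such option (the JSON `multiLine` option; the path is illustrative):

```r
library(SparkR)
sparkR.session()

# "multiLine" is one of the options documented on the JSON Data Source Option
# page; it lets Spark parse JSON records that span multiple lines.
df <- read.json("/tmp/people.json", multiLine = TRUE)
head(df)
```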

### Does this PR introduce _any_ user-facing change?

Yes, a link to the Data Source Option page is added to the R documentation, as shown below.

<img width="855" alt="Screen Shot 2021-06-07 at 12 29 26 PM" src="https://user-images.githubusercontent.com/44108233/120955302-064d2500-c78c-11eb-8dc3-cb22dfd5fd14.png">

### How was this patch tested?

Manually built the documentation and checked each added link one by one.
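For reference, one way to spot-check a rendered link from an R session (a sketch; assumes the built SparkR package is installed):

```r
# Open the help page for one of the updated methods; the "Data Source Option"
# link should appear in the documentation for the '...' parameter.
library(SparkR)
?read.json
```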

Closes #32797 from itholic/SPARK-35603.

Lead-authored-by: itholic <haejoon.lee@databricks.com>
Co-authored-by: Hyukjin Kwon <gurwls223@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
Authored by itholic on 2021-06-08 11:58:38 +09:00; committed by Hyukjin Kwon
parent 04418e18d7
commit 745756ca4c
3 changed files with 46 additions and 0 deletions


@@ -889,6 +889,10 @@ setMethod("toJSON",
#' @param mode one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
#' save mode (it is 'error' by default)
#' @param ... additional argument(s) passed to the method.
#' You can find the JSON-specific options for writing JSON files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option}{
#' Data Source Option} in the version you use.
#'
#' @family SparkDataFrame functions
#' @rdname write.json
@@ -920,6 +924,10 @@ setMethod("write.json",
#' @param mode one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
#' save mode (it is 'error' by default)
#' @param ... additional argument(s) passed to the method.
#' You can find the ORC-specific options for writing ORC files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option}{
#' Data Source Option} in the version you use.
#'
#' @family SparkDataFrame functions
#' @aliases write.orc,SparkDataFrame,character-method
@@ -951,6 +959,10 @@ setMethod("write.orc",
#' @param mode one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
#' save mode (it is 'error' by default)
#' @param ... additional argument(s) passed to the method.
#' You can find the Parquet-specific options for writing Parquet files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#data-source-option}{
#' Data Source Option} in the version you use.
#'
#' @family SparkDataFrame functions
#' @rdname write.parquet
@@ -983,6 +995,10 @@ setMethod("write.parquet",
#' @param mode one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
#' save mode (it is 'error' by default)
#' @param ... additional argument(s) passed to the method.
#' You can find the text-specific options for writing text files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-text.html#data-source-option}{
#' Data Source Option} in the version you use.
#'
#' @family SparkDataFrame functions
#' @aliases write.text,SparkDataFrame,character-method
@@ -3731,6 +3747,9 @@ setMethod("histogram",
#'
#' Save the content of the SparkDataFrame to an external database table via JDBC. Additional JDBC
#' database connection properties can be set (...)
#' You can find the JDBC-specific option and parameter documentation for writing tables via JDBC in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option}{
#' Data Source Option} in the version you use.
#'
#' Also, mode is used to specify the behavior of the save operation when
#' data already exists in the data source. There are four modes:

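As a usage illustration for the write-side docs above, a sketch passing one of the documented options (assumes an active SparkR session, an existing SparkDataFrame `df`, and an illustrative output path):

```r
# "compression" is documented on the linked JSON Data Source Option page;
# here it selects gzip-compressed output files.
write.json(df, "/tmp/people_json", compression = "gzip", mode = "overwrite")
```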

@@ -381,6 +381,10 @@ setMethod("toDF", signature(x = "RDD"),
#'
#' @param path Path of file to read. A vector of multiple paths is allowed.
#' @param ... additional external data source specific named properties.
#' You can find the JSON-specific options for reading JSON files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option}{
#' Data Source Option} in the version you use.
#' @return SparkDataFrame
#' @rdname read.json
#' @examples
@@ -409,6 +413,10 @@ read.json <- function(path, ...) {
#'
#' @param path Path of file to read.
#' @param ... additional external data source specific named properties.
#' You can find the ORC-specific options for reading ORC files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option}{
#' Data Source Option} in the version you use.
#' @return SparkDataFrame
#' @rdname read.orc
#' @name read.orc
@@ -430,6 +438,10 @@ read.orc <- function(path, ...) {
#'
#' @param path path of file to read. A vector of multiple paths is allowed.
#' @param ... additional data source specific named properties.
#' You can find the Parquet-specific options for reading Parquet files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#data-source-option}{
#' Data Source Option} in the version you use.
#' @return SparkDataFrame
#' @rdname read.parquet
#' @name read.parquet
@@ -455,6 +467,10 @@ read.parquet <- function(path, ...) {
#'
#' @param path Path of file to read. A vector of multiple paths is allowed.
#' @param ... additional external data source specific named properties.
#' You can find the text-specific options for reading text files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-text.html#data-source-option}{
#' Data Source Option} in the version you use.
#' @return SparkDataFrame
#' @rdname read.text
#' @examples
@@ -602,6 +618,9 @@ loadDF <- function(path = NULL, source = NULL, schema = NULL, ...) {
#' Create a SparkDataFrame representing the database table accessible via JDBC URL
#'
#' Additional JDBC database connection properties can be set (...)
#' You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option}{
#' Data Source Option} in the version you use.
#'
#' Only one of partitionColumn or predicates should be set. Partitions of the table will be
#' retrieved in parallel based on the \code{numPartitions} or by the predicates.
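Likewise for the read-side docs above, a sketch with one of the documented Parquet options (the session and path are illustrative):

```r
# "mergeSchema" is documented on the linked Parquet Data Source Option page;
# it merges schemas collected from all Parquet part-files while reading.
df <- read.parquet("/tmp/events_parquet", mergeSchema = TRUE)
```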


@@ -258,11 +258,19 @@ NULL
#' \item \code{to_json}, \code{from_json} and \code{schema_of_json}: this contains
#' additional named properties to control how it is converted and accepts the
#' same options as the JSON data source.
#' You can find the JSON-specific options for reading/writing JSON files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option}{
#' Data Source Option} in the version you use.
#' \item \code{to_json}: it supports the "pretty" option which enables pretty
#' JSON generation.
#' \item \code{to_csv}, \code{from_csv} and \code{schema_of_csv}: this contains
#' additional named properties to control how it is converted and accepts the
#' same options as the CSV data source.
#' You can find the CSV-specific options for reading/writing CSV files in
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option}{
#' Data Source Option} in the version you use.
#' \item \code{arrays_zip}, this contains additional Columns of arrays to be merged.
#' \item \code{map_concat}, this contains additional Columns of maps to be unioned.
#' }
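And for the function variants above, a sketch passing a JSON option through `from_json` (the sample column and date format are illustrative):

```r
library(SparkR)
sparkR.session()

# "dateFormat" is documented on the linked JSON Data Source Option page and is
# honored by from_json just as it is by the JSON data source.
df <- createDataFrame(data.frame(raw = '{"d": "2021/06/07"}'))
schema <- structType(structField("d", "date"))
head(select(df, from_json(df$raw, schema, dateFormat = "yyyy/MM/dd")))
```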