[SPARK-35395][DOCS] Move ORC data source options from Python and Scala into a single page

### What changes were proposed in this pull request?

This PR proposes to move the ORC data source options from the Python, Scala and Java documentation into a single page.

### Why are the changes needed?

So far, the documentation for ORC data source options has been spread across separate pages in each language's API documentation. Maintaining the same options in several places is inconvenient, so it is more efficient to document all options on a single page and link to that page from each language's API documentation.

### Does this PR introduce _any_ user-facing change?

Yes, the documentation will look as follows after this change:

- "ORC Files" page
![Screen Shot 2021-05-21 at 2 07 14 PM](https://user-images.githubusercontent.com/44108233/119085078-f4564d00-ba3d-11eb-8990-3ba031d809da.png)

- Python
![Screen Shot 2021-05-21 at 2 06 46 PM](https://user-images.githubusercontent.com/44108233/119085097-00daa580-ba3e-11eb-8017-ac5a95a7c053.png)

- Scala
![Screen Shot 2021-05-21 at 2 06 09 PM](https://user-images.githubusercontent.com/44108233/119085135-164fcf80-ba3e-11eb-9cac-78dded523f38.png)

- Java
![Screen Shot 2021-05-21 at 2 06 30 PM](https://user-images.githubusercontent.com/44108233/119085125-118b1b80-ba3e-11eb-9434-f26612d7da13.png)

### How was this patch tested?

Manually built the documentation and confirmed the rendered pages.

Closes #32546 from itholic/SPARK-35395.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>

@@ -172,3 +172,29 @@ When reading from Hive metastore ORC tables and inserting to Hive metastore ORC
     <td>2.0.0</td>
   </tr>
 </table>
+
+## Data Source Option
+
+Data source options of ORC can be set via:
+* the `.option`/`.options` methods of
+  * `DataFrameReader`
+  * `DataFrameWriter`
+  * `DataStreamReader`
+  * `DataStreamWriter`
+
+<table class="table">
+  <tr><th><b>Property Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr>
+  <tr>
+    <td><code>mergeSchema</code></td>
+    <td>None</td>
+    <td>sets whether we should merge schemas collected from all ORC part-files. This will override <code>spark.sql.orc.mergeSchema</code>. The default value is specified in <code>spark.sql.orc.mergeSchema</code>.</td>
+    <td>read</td>
+  </tr>
+  <tr>
+    <td><code>compression</code></td>
+    <td>None</td>
+    <td>compression codec to use when saving to file. This can be one of the known case-insensitive shorten names (none, snappy, zlib, lzo, and zstd). This will override <code>orc.compress</code> and <code>spark.sql.orc.compression.codec</code>. If None is set, it uses the value specified in <code>spark.sql.orc.compression.codec</code>.</td>
+    <td>write</td>
+  </tr>
+</table>
+
+Other generic options can be found in <a href="https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html">Generic File Source Options</a>.
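
To make the table above concrete, here is a minimal PySpark sketch of passing a read-scoped option through `.option`; the SparkSession setup and the input path are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# "mergeSchema" is a read-scoped option: it overrides spark.sql.orc.mergeSchema
# for this read only.
df = (
    spark.read
    .option("mergeSchema", "true")
    .orc("/tmp/orc/input")  # hypothetical directory of ORC part-files
)
df.printSchema()
```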


@@ -793,28 +793,13 @@ class DataFrameReader(OptionUtils):
     Parameters
     ----------
     path : str or list
-    mergeSchema : str or bool, optional
-        sets whether we should merge schemas collected from all
-        ORC part-files. This will override ``spark.sql.orc.mergeSchema``.
-        The default value is specified in ``spark.sql.orc.mergeSchema``.
-    pathGlobFilter : str or bool
-        an optional glob pattern to only include files with paths matching
-        the pattern. The syntax follows `org.apache.hadoop.fs.GlobFilter`.
-        It does not change the behavior of
-        `partition discovery <https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery>`_.  # noqa
-    recursiveFileLookup : str or bool
-        recursively scan a directory for files. Using this option
-        disables
-        `partition discovery <https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery>`_.  # noqa
-    modifiedBefore : an optional timestamp to only include files with
-        modification times occurring before the specified time. The provided timestamp
-        must be in the following format: YYYY-MM-DDTHH:mm:ss (e.g. 2020-06-01T13:00:00)
-    modifiedAfter : an optional timestamp to only include files with
-        modification times occurring after the specified time. The provided timestamp
-        must be in the following format: YYYY-MM-DDTHH:mm:ss (e.g. 2020-06-01T13:00:00)
+
+    Other Parameters
+    ----------------
+    Extra options
+        For the extra options, refer to
+        `Data Source Option <https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option>`_  # noqa
+        in the version you use.

     Examples
     --------
@@ -1417,12 +1402,13 @@ class DataFrameWriter(OptionUtils):
     exists.
     partitionBy : str or list, optional
         names of partitioning columns
-    compression : str, optional
-        compression codec to use when saving to file. This can be one of the
-        known case-insensitive shorten names (none, snappy, zlib, lzo, and zstd).
-        This will override ``orc.compress`` and
-        ``spark.sql.orc.compression.codec``. If None is set, it uses the value
-        specified in ``spark.sql.orc.compression.codec``.
+
+    Other Parameters
+    ----------------
+    Extra options
+        For the extra options, refer to
+        `Data Source Option <https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option>`_  # noqa
+        in the version you use.

     Examples
     --------
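
For the write side, a similar hedged sketch using the plural `.options` form mentioned on the ORC page; the toy DataFrame and output path are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10).withColumnRenamed("id", "value")  # toy DataFrame

# "compression" is a write-scoped option: it overrides orc.compress and
# spark.sql.orc.compression.codec for this write only.
df.write.options(compression="zstd").mode("overwrite").orc("/tmp/orc/output")  # hypothetical path
```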


@@ -637,20 +637,12 @@ class DataStreamReader(OptionUtils):
     .. versionadded:: 2.3.0

-    Parameters
-    ----------
-    mergeSchema : str or bool, optional
-        sets whether we should merge schemas collected from all
-        ORC part-files. This will override ``spark.sql.orc.mergeSchema``.
-        The default value is specified in ``spark.sql.orc.mergeSchema``.
-    pathGlobFilter : str or bool, optional
-        an optional glob pattern to only include files with paths matching
-        the pattern. The syntax follows `org.apache.hadoop.fs.GlobFilter`.
-        It does not change the behavior of `partition discovery`_.
-    recursiveFileLookup : str or bool, optional
-        recursively scan a directory for files. Using this option
-        disables
-        `partition discovery <https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery>`_.  # noqa
+    Other Parameters
+    ----------------
+    Extra options
+        For the extra options, refer to
+        `Data Source Option <https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option>`_  # noqa
+        in the version you use.

     Examples
     --------
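
The streaming reader takes the same extra options. A hedged PySpark sketch, where the schema, input directory, and console sink are illustrative only:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType, StringType

spark = SparkSession.builder.getOrCreate()

# File stream sources need an explicit schema.
schema = StructType([
    StructField("id", LongType()),
    StructField("name", StringType()),
])

stream_df = (
    spark.readStream
    .schema(schema)
    .option("mergeSchema", "true")    # same ORC read option as the batch API
    .option("maxFilesPerTrigger", 1)  # limit new files considered per trigger
    .orc("/tmp/orc/stream-input")     # hypothetical directory
)

query = stream_df.writeStream.format("console").start()
query.awaitTermination()
```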


@@ -874,23 +874,10 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
   /**
    * Loads ORC files and returns the result as a `DataFrame`.
    *
-   * You can set the following ORC-specific option(s) for reading ORC files:
-   * <ul>
-   * <li>`mergeSchema` (default is the value specified in `spark.sql.orc.mergeSchema`): sets whether
-   * we should merge schemas collected from all ORC part-files. This will override
-   * `spark.sql.orc.mergeSchema`.</li>
-   * <li>`pathGlobFilter`: an optional glob pattern to only include files with paths matching
-   * the pattern. The syntax follows <code>org.apache.hadoop.fs.GlobFilter</code>.
-   * It does not change the behavior of partition discovery.</li>
-   * <li>`modifiedBefore` (batch only): an optional timestamp to only include files with
-   * modification times occurring before the specified Time. The provided timestamp
-   * must be in the following form: YYYY-MM-DDTHH:mm:ss (e.g. 2020-06-01T13:00:00)</li>
-   * <li>`modifiedAfter` (batch only): an optional timestamp to only include files with
-   * modification times occurring after the specified Time. The provided timestamp
-   * must be in the following form: YYYY-MM-DDTHH:mm:ss (e.g. 2020-06-01T13:00:00)</li>
-   * <li>`recursiveFileLookup`: recursively scan a directory for files. Using this option
-   * disables partition discovery</li>
-   * </ul>
+   * ORC-specific option(s) for reading ORC files can be found in
+   * <a href=
+   * "https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option">
+   * Data Source Option</a> in the version you use.
    *
    * @param paths input paths
    * @since 2.0.0


@@ -881,14 +881,10 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
    * format("orc").save(path)
    * }}}
    *
-   * You can set the following ORC-specific option(s) for writing ORC files:
-   * <ul>
-   * <li>`compression` (default is the value specified in `spark.sql.orc.compression.codec`):
-   * compression codec to use when saving to file. This can be one of the known case-insensitive
-   * shorten names(`none`, `snappy`, `zlib`, `lzo`, and `zstd`). This will override
-   * `orc.compress` and `spark.sql.orc.compression.codec`. If `orc.compress` is given,
-   * it overrides `spark.sql.orc.compression.codec`.</li>
-   * </ul>
+   * ORC-specific option(s) for writing ORC files can be found in
+   * <a href=
+   * "https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option">
+   * Data Source Option</a> in the version you use.
    *
    * @since 1.5.0
    */


@@ -453,20 +453,17 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Lo
   /**
    * Loads a ORC file stream, returning the result as a `DataFrame`.
    *
-   * You can set the following ORC-specific option(s) for reading ORC files:
+   * You can set the following option(s):
    * <ul>
    * <li>`maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be
    * considered in every trigger.</li>
-   * <li>`mergeSchema` (default is the value specified in `spark.sql.orc.mergeSchema`): sets whether
-   * we should merge schemas collected from all ORC part-files. This will override
-   * `spark.sql.orc.mergeSchema`.</li>
-   * <li>`pathGlobFilter`: an optional glob pattern to only include files with paths matching
-   * the pattern. The syntax follows <code>org.apache.hadoop.fs.GlobFilter</code>.
-   * It does not change the behavior of partition discovery.</li>
-   * <li>`recursiveFileLookup`: recursively scan a directory for files. Using this option
-   * disables partition discovery</li>
    * </ul>
    *
+   * ORC-specific option(s) for reading ORC file stream can be found in
+   * <a href=
+   * "https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option">
+   * Data Source Option</a> in the version you use.
+   *
    * @since 2.3.0
    */
  def orc(path: String): DataFrame = {