[SPARK-7084] improve saveAsTable documentation
Author: madhukar <phatak.dev@gmail.com>
Closes #5654 from phatak-dev/master and squashes the following commits:
386f407 [madhukar] #5654 updated for all the methods
2c997c5 [madhukar] Merge branch 'master' of https://github.com/apache/spark
00bc819 [madhukar] Merge branch 'master' of https://github.com/apache/spark
2a802c6 [madhukar] #5654 updated the doc according to comments
866e8df [madhukar] [SPARK-7084] improve saveAsTable documentation
(cherry picked from commit 57255dcd79)
Signed-off-by: Reynold Xin <rxin@databricks.com>

parent 0ff34f804f
commit 0dbfe16814
@@ -1192,6 +1192,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1208,6 +1211,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1232,6 +1238,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1248,6 +1257,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1264,6 +1276,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1285,6 +1300,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
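The caveat this commit documents can be illustrated with a minimal sketch. This is an assumption-laden example, not part of the patch: it presumes a Spark 1.3-era API, `spark-hive` on the classpath, and a local master; `SaveAsTableDemo` and the table name are made up for illustration.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Hypothetical demo object (not from the patch).
object SaveAsTableDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("saveAsTable-demo").setMaster("local[2]"))
    val hiveContext = new HiveContext(sc)
    import hiveContext.implicits._

    val df = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value")

    // Persists the table metadata into Hive's metastore; Spark SQL can
    // query the table back through the same (or a later) HiveContext.
    df.saveAsTable("spark_only_table")
    assert(hiveContext.table("spark_only_table").count() == 2L)

    // However, because the data is written in Spark SQL's own format,
    // `SELECT * FROM spark_only_table` from the Hive CLI will NOT work --
    // exactly the caveat the added Scaladoc spells out.
    sc.stop()
  }
}
```

With a plain `SQLContext` there is no persisted catalog at all, which is why the surrounding docs point to writing the RDD out as a parquet file and registering that file as a table instead.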