spark-instrumented-optimizer/python/pyspark/sql
hyukjinkwon 68ea290b3a
[SPARK-13748][PYSPARK][DOC] Add the description for explicitly setting None for a named argument for a Row
## What changes were proposed in this pull request?

It seems a key can simply be omitted from a dict to represent that the value is `None` or missing, as below:

``` python
spark.createDataFrame([{"x": 1}, {"y": 2}]).show()
```

```
+----+----+
|   x|   y|
+----+----+
|   1|null|
|null|   2|
+----+----+
```

However, this does not seem to be the case for `Row`, as below:

``` python
from pyspark.sql import Row

spark.createDataFrame([Row(x=1), Row(y=2)]).show()
```

``` scala
16/06/19 16:25:56 ERROR Executor: Exception in task 6.0 in stage 66.0 (TID 316)
java.lang.IllegalStateException: Input row doesn't have expected number of values required by the schema. 2 fields are required while 1 values are provided.
    at org.apache.spark.sql.execution.python.EvaluatePython$.fromJava(EvaluatePython.scala:147)
    at org.apache.spark.sql.SparkSession$$anonfun$7.apply(SparkSession.scala:656)
    at org.apache.spark.sql.SparkSession$$anonfun$7.apply(SparkSession.scala:656)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:247)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:780)
```

The behaviour seems right, but it might confuse users, as the JIRA report shows.

This PR adds an explanation of this behaviour to the docstring of the `Row` class.
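
As an illustration of the documented behaviour, here is a minimal sketch of the workaround the new description suggests, assuming a running `spark` session: explicitly set the missing field to `None` so that every `Row` carries the same set of fields.

``` python
from pyspark.sql import Row

# Explicitly setting None keeps the number of fields consistent across Rows,
# so schema inference sees the same two fields in every row.
spark.createDataFrame([Row(x=1, y=None), Row(x=None, y=2)]).show()
```

With the fields made explicit, this produces the same output as the dict example above instead of the `IllegalStateException`.
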
## How was this patch tested?

N/A

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #13771 from HyukjinKwon/SPARK-13748.
2017-01-07 12:52:41 +00:00
__init__.py [SPARK-16772][PYTHON][DOCS] Fix API doc references to UDFRegistration + Update "important classes" 2016-08-06 05:02:59 +01:00
catalog.py [SPARK-18949][SQL] Add recoverPartitions API to Catalog 2016-12-20 23:40:02 -08:00
column.py [SPARK-17215][SQL] Method SQLContext.parseDataType(dataTypeString: String) could be removed. 2016-08-24 23:36:04 -07:00
conf.py [SPARK-15464][ML][MLLIB][SQL][TESTS] Replace SQLContext and SparkContext with SparkSession using builder pattern in python test code 2016-05-23 18:14:48 -07:00
context.py [SPARK-11775][PYSPARK][SQL] Allow PySpark to register Java UDF 2016-10-14 15:50:35 -07:00
dataframe.py [SPARK-18447][DOCS] Fix the markdown for Note:/NOTE:/Note that across Python API documentation 2016-11-22 11:40:18 +00:00
functions.py [SPARK-18447][DOCS] Fix the markdown for Note:/NOTE:/Note that across Python API documentation 2016-11-22 11:40:18 +00:00
group.py [MINOR][PYSPARK][DOC] Fix wrongly formatted examples in PySpark documentation 2016-07-06 10:45:51 -07:00
readwriter.py [SPARK-17764][SQL] Add to_json supporting to convert nested struct column to JSON string 2016-11-01 12:46:41 -07:00
session.py [SPARK-17720][SQL] introduce static SQL conf 2016-10-11 20:27:08 -07:00
streaming.py [SPARK-18888] partitionBy in DataStreamWriter in Python throws _to_seq not defined 2016-12-15 14:26:54 -08:00
tests.py [SPARK-18888] partitionBy in DataStreamWriter in Python throws _to_seq not defined 2016-12-15 14:26:54 -08:00
types.py [SPARK-13748][PYSPARK][DOC] Add the description for explicitly setting None for a named argument for a Row 2017-01-07 12:52:41 +00:00
utils.py [MINOR][DOCS] Remove consecutive duplicated words/typo in Spark Repo 2017-01-04 15:07:29 +00:00
window.py [SPARK-18690][PYTHON][SQL] Backward compatibility of unbounded frames 2016-12-02 17:39:28 -08:00