spark-instrumented-optimizer/python/pyspark/sql
Reynold Xin f315272279 [SPARK-11946][SQL] Audit pivot API for 1.6.
Currently, `pivot`'s signatures look like this:

```scala
@scala.annotation.varargs
def pivot(pivotColumn: Column, values: Column*): GroupedData

@scala.annotation.varargs
def pivot(pivotColumn: String, values: Any*): GroupedData
```

I think we can remove the one that takes `Column` types, since callers should always be passing in literals. It'd also be clearer if the values were not varargs, but rather a `Seq` or `java.util.List`.
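
Concretely, a sketch of the shape this proposal points at; the exact overload set below is an assumption drawn from the description above, not the final patch:

```scala
// Sketch (assumption): String pivot column only, with values passed as
// explicit collections instead of varargs.
def pivot(pivotColumn: String): GroupedData

def pivot(pivotColumn: String, values: Seq[Any]): GroupedData

def pivot(pivotColumn: String, values: java.util.List[Any]): GroupedData
```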

I also made similar changes for Python.
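
For reference, a minimal sketch of how the reworked Python `pivot` reads after this change; the sample data and column names are invented for illustration:

```python
# Usage sketch of the revised PySpark pivot API; the DataFrame and its
# column names here are hypothetical, not from the patch itself.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "pivot-example")
sqlContext = SQLContext(sc)

df = sqlContext.createDataFrame(
    [(2012, "dotNET", 10000), (2012, "Java", 20000), (2013, "dotNET", 48000)],
    ["year", "course", "earnings"])

# Values are passed as an explicit list rather than varargs. Listing them
# up front spares Spark a pass over the data to compute the distinct
# values of the pivot column.
df.groupBy("year").pivot("course", ["dotNET", "Java"]).sum("earnings").show()

# The values list can also be omitted, in which case Spark computes it.
df.groupBy("year").pivot("course").sum("earnings").show()

sc.stop()
```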

Author: Reynold Xin <rxin@databricks.com>

Closes #9929 from rxin/SPARK-11946.
2015-11-24 12:54:37 -08:00
__init__.py [SPARK-10373] [PYSPARK] move @since into pyspark from sql 2015-09-08 20:56:22 -07:00
column.py [SPARK-11836][SQL] udf/cast should not create new SQLContext 2015-11-23 13:44:30 -08:00
context.py [SPARK-11671] documentation code example typo 2015-11-12 15:42:30 -08:00
dataframe.py [SPARK-11720][SQL][ML] Handle edge cases when count = 0 or 1 for Stats function 2015-11-18 13:03:37 -08:00
functions.py [SPARK-11836][SQL] udf/cast should not create new SQLContext 2015-11-23 13:44:30 -08:00
group.py [SPARK-11946][SQL] Audit pivot API for 1.6. 2015-11-24 12:54:37 -08:00
readwriter.py [SPARK-11804] [PYSPARK] Exception raise when using Jdbc predicates opt… 2015-11-18 08:18:54 -08:00
tests.py [SPARK-9830][SQL] Remove AggregateExpression1 and Aggregate Operator used to evaluate AggregateExpression1s 2015-11-10 11:06:29 -08:00
types.py [SPARK-11158][SQL] Modified _verify_type() to be more informative on Errors by presenting the Object 2015-10-18 11:39:19 -07:00
utils.py [SPARK-11804] [PYSPARK] Exception raise when using Jdbc predicates opt… 2015-11-18 08:18:54 -08:00
window.py [SPARK-10373] [PYSPARK] move @since into pyspark from sql 2015-09-08 20:56:22 -07:00