spark-instrumented-optimizer/python/pyspark/sql
JihongMa 09ad9533d5 [SPARK-11720][SQL][ML] Handle edge cases when count = 0 or 1 for Stats function
Return Double.NaN for mean/average when count == 0 for all numeric types that are converted to Double; the Decimal type continues to return null.

Author: JihongMa <linlin200605@gmail.com>

Closes #9705 from JihongMA/SPARK-11720.
2015-11-18 13:03:37 -08:00
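
A minimal sketch of the edge case this commit describes: aggregating a Double-convertible column over zero rows is expected to yield NaN rather than null, while count stays 0. The local SparkContext/SQLContext setup below is a hypothetical, 1.6-era illustration and is not part of this directory.

    # Hypothetical illustration of the SPARK-11720 behavior described above;
    # assumes a local SQLContext setup, not code from this repository.
    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.sql import functions as F

    sc = SparkContext("local", "spark-11720-sketch")
    sqlContext = SQLContext(sc)

    df = sqlContext.createDataFrame([(1, 2.0)], ["id", "value"])
    empty = df.filter(df.id > 100)      # no rows match, so count == 0

    empty.agg(F.count("value")).show()  # expected: 0
    empty.agg(F.avg("value")).show()    # expected per this commit: NaN (Double column)

    sc.stop()
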
__init__.py [SPARK-10373] [PYSPARK] move @since into pyspark from sql 2015-09-08 20:56:22 -07:00
column.py [SPARK-9014] [SQL] Allow Python spark API to use built-in exponential operator 2015-09-11 15:19:04 -07:00
context.py [SPARK-11671] documentation code example typo 2015-11-12 15:42:30 -08:00
dataframe.py [SPARK-11720][SQL][ML] Handle edge cases when count = 0 or 1 for Stats function 2015-11-18 13:03:37 -08:00
functions.py [SPARK-11567] [PYTHON] Add Python API for corr Aggregate function 2015-11-10 15:47:10 -08:00
group.py [SPARK-11690][PYSPARK] Add pivot to python api 2015-11-13 10:31:17 -08:00
readwriter.py [SPARK-11804] [PYSPARK] Exception raise when using Jdbc predicates opt… 2015-11-18 08:18:54 -08:00
tests.py [SPARK-9830][SQL] Remove AggregateExpression1 and Aggregate Operator used to evaluate AggregateExpression1s 2015-11-10 11:06:29 -08:00
types.py [SPARK-11158][SQL] Modified _verify_type() to be more informative on Errors by presenting the Object 2015-10-18 11:39:19 -07:00
utils.py [SPARK-11804] [PYSPARK] Exception raise when using Jdbc predicates opt… 2015-11-18 08:18:54 -08:00
window.py [SPARK-10373] [PYSPARK] move @since into pyspark from sql 2015-09-08 20:56:22 -07:00