| Name | Latest commit | Date |
| --- | --- | --- |
| ml/ | [SPARK-18630][PYTHON][ML] Move del method from JavaParams to JavaWrapper; add tests | 2018-03-05 15:53:10 -08:00 |
| mllib/ | [SPARK-22399][ML] update the location of reference paper | 2017-10-31 08:20:23 +00:00 |
| sql/ | [SPARK-23329][SQL] Fix documentation of trigonometric functions | 2018-03-05 23:46:40 +09:00 |
| streaming/ | [SPARK-23417][PYTHON] Fix the build instructions supplied by exception messages in python streaming tests | 2018-02-28 09:25:02 +09:00 |
| `__init__.py` | [SPARK-23328][PYTHON] Disallow default value None in na.replace/replace when 'to_replace' is not a dictionary | 2018-02-09 14:21:10 +08:00 |
| `_globals.py` | [SPARK-23328][PYTHON] Disallow default value None in na.replace/replace when 'to_replace' is not a dictionary | 2018-02-09 14:21:10 +08:00 |
| `accumulators.py` | [SPARK-8652] [PYSPARK] Check return value for all uses of doctest.testmod() | 2015-06-26 08:12:22 -07:00 |
| `broadcast.py` | [SPARK-12717][PYTHON] Adding thread-safe broadcast pickle registry | 2017-08-02 07:12:23 +09:00 |
| `cloudpickle.py` | [SPARK-21070][PYSPARK] Attempt to update cloudpickle again | 2017-08-22 11:17:53 +09:00 |
| `conf.py` | [SPARK-18447][DOCS] Fix the markdown for Note:/NOTE:/Note that across Python API documentation | 2016-11-22 11:40:18 +00:00 |
| `context.py` | [SPARK-20791][PYSPARK] Use Arrow to create Spark DataFrame from Pandas | 2017-11-13 13:16:01 +09:00 |
| `daemon.py` | [SPARK-4897] [PySpark] Python 3 support | 2015-04-16 16:20:57 -07:00 |
| `files.py` | [SPARK-3309] [PySpark] Put all public API in `__all__` | 2014-09-03 11:49:45 -07:00 |
| `find_spark_home.py` | [SPARK-1267][SPARK-18129] Allow PySpark to be pip installed | 2016-11-16 14:22:15 -08:00 |
| `heapq3.py` | [SPARK-8652] [PYSPARK] Check return value for all uses of doctest.testmod() | 2015-06-26 08:12:22 -07:00 |
| `java_gateway.py` | [SPARK-20791][PYSPARK] Use Arrow to create Spark DataFrame from Pandas | 2017-11-13 13:16:01 +09:00 |
| `join.py` | [SPARK-14202] [PYTHON] Use generator expression instead of list comp in python_full_outer_jo… | 2016-03-28 14:51:36 -07:00 |
| `profiler.py` | [SPARK-8652] [PYSPARK] Check return value for all uses of doctest.testmod() | 2015-06-26 08:12:22 -07:00 |
| `rdd.py` | [SPARK-23261][PYSPARK] Rename Pandas UDFs | 2018-01-30 21:55:55 +09:00 |
| `rddsampler.py` | [SPARK-4897] [PySpark] Python 3 support | 2015-04-16 16:20:57 -07:00 |
| `resultiterable.py` | [SPARK-3074] [PySpark] support groupByKey() with single huge key | 2015-04-09 17:07:23 -07:00 |
| `serializers.py` | [SPARK-23334][SQL][PYTHON] Fix pandas_udf with return type StringType() to handle str type properly in Python 2. | 2018-02-06 18:30:50 +09:00 |
| `shell.py` | [SPARK-19570][PYSPARK] Allow to disable hive in pyspark shell | 2017-04-12 10:54:50 -07:00 |
| `shuffle.py` | [SPARK-10710] Remove ability to disable spilling in core and SQL | 2015-09-19 21:40:21 -07:00 |
| `statcounter.py` | [SPARK-6919] [PYSPARK] Add asDict method to StatCounter | 2015-09-29 13:38:15 -07:00 |
| `status.py` | [SPARK-4172] [PySpark] Progress API in Python | 2015-02-17 13:36:43 -08:00 |
| `storagelevel.py` | [SPARK-13992][CORE][PYSPARK][FOLLOWUP] Update OFF_HEAP semantics for Java api and Python api | 2016-04-12 23:06:55 -07:00 |
| `taskcontext.py` | [SPARK-18576][PYTHON] Add basic TaskContext information to PySpark | 2016-12-20 15:51:21 -08:00 |
| `tests.py` | [SPARK-23517][PYTHON] Make pyspark.util._exception_message produce the trace from Java side by Py4JJavaError | 2018-03-01 00:44:13 +09:00 |
| `traceback_utils.py` | [SPARK-1087] Move python traceback utilities into new traceback_utils.py file. | 2014-09-15 19:28:17 -07:00 |
| `util.py` | [SPARK-23517][PYTHON] Make pyspark.util._exception_message produce the trace from Java side by Py4JJavaError | 2018-03-01 00:44:13 +09:00 |
| `version.py` | [SPARK-23028] Bump master branch version to 2.4.0-SNAPSHOT | 2018-01-13 00:37:59 +08:00 |
| `worker.py` | [SPARK-23352][PYTHON] Explicitly specify supported types in Pandas UDFs | 2018-02-12 20:49:36 +09:00 |