spark-instrumented-optimizer/python/docs
Commit 0bfacd5c5d by Xiangrui Meng: [SPARK-6090][MLLIB] add a basic BinaryClassificationMetrics to PySpark/MLlib
A simple wrapper around the Scala implementation. `DataFrame` is used for serialization/deserialization. Methods that return `RDD`s are not supported in this PR.

davies: If we recognize Scala's `Product`s in Py4J, we can easily add wrappers for Scala methods that return `RDD[(Double, Double)]`. Is it easy to register a serializer for `Product` in PySpark?

Author: Xiangrui Meng <meng@databricks.com>

Closes #4863 from mengxr/SPARK-6090 and squashes the following commits:

009a3a3 [Xiangrui Meng] provide schema
dcddab5 [Xiangrui Meng] add a basic BinaryClassificationMetrics to PySpark/MLlib
Committed: 2015-03-05 11:50:09 -08:00
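A minimal usage sketch of the wrapper this commit adds. The `(score, label)` pairs are illustrative data; per the commit message, the wrapper converts the input RDD to a `DataFrame` internally for serialization to the Scala implementation, and only the scalar metrics (not the `RDD`-returning methods) are exposed:

```python
from pyspark import SparkContext
from pyspark.mllib.evaluation import BinaryClassificationMetrics

sc = SparkContext(appName="BinaryClassificationMetricsExample")

# An RDD of (score, label) pairs; serialized via DataFrame under the hood.
score_and_labels = sc.parallelize(
    [(0.1, 0.0), (0.1, 1.0), (0.4, 0.0), (0.6, 0.0),
     (0.6, 1.0), (0.6, 1.0), (0.8, 1.0)])

metrics = BinaryClassificationMetrics(score_and_labels)
print(metrics.areaUnderROC)  # area under the ROC curve
print(metrics.areaUnderPR)   # area under the precision-recall curve

sc.stop()
```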
File | Last commit | Date
conf.py | [SPARK-5944] [PySpark] fix version in Python API docs | 2015-02-25 15:13:34 -08:00
epytext.py | [SPARK-4439] [MLlib] add python api for random forest | 2014-11-20 15:31:28 -08:00
index.rst | [SPARK-4586][MLLIB] Python API for ML pipeline and parameters | 2015-01-28 17:14:23 -08:00
make.bat | [SPARK-3870] EOL character enforcement | 2014-10-31 12:39:52 -07:00
make2.bat | [SPARK-3870] EOL character enforcement | 2014-10-31 12:39:52 -07:00
Makefile | [SPARK-3430] [PySpark] [Doc] generate PySpark API docs using Sphinx | 2014-09-16 12:51:58 -07:00
pyspark.ml.rst | [SPARK-5469] restructure pyspark.sql into multiple files | 2015-02-09 20:49:22 -08:00
pyspark.mllib.rst | [SPARK-6090][MLLIB] add a basic BinaryClassificationMetrics to PySpark/MLlib | 2015-03-05 11:50:09 -08:00
pyspark.rst | [SPARK-4586][MLLIB] Python API for ML pipeline and parameters | 2015-01-28 17:14:23 -08:00
pyspark.sql.rst | [SPARK-5944] [PySpark] fix version in Python API docs | 2015-02-25 15:13:34 -08:00
pyspark.streaming.rst | [SPARK-6127][Streaming][Docs] Add Kafka to Python api docs | 2015-03-02 18:40:46 -08:00